And so with the much-touted rise of AI and Natural Language Programming, we embarked on a series of tests…
All of the AI platforms we tested suffered from similar ailments:
In other words, AI-generated content required heavy human editing before moderately well-informed readers would consider it worth their time.
We concluded that if the goal of a blog writer was just to get a paycheck in exchange for crappy rehashed copy, then AI-generated content would, with minimal edits, probably be a good way to achieve that goal with the least possible effort.
If, on the other hand, a writer’s mission was to bring readers value in the form of good information well articulated, this writer would either have to invest time in editing the AI-based content… or just use this content as a base for ideas, and then develop the article or post.
Outside of the usual ethical issues inherent to publishing content on the web, the results of our tests raised 3 main risks:
We rated Risk #1 as moderate-to-severe, depending on whether AI was used to create sales copy (severe risk) or blog copy (moderate risk). Even with editing, AI does not have what it takes to generate sharp sales copy. With heavy editing, it can generate decent blog posts. On a local business site, blog posts are not the deciding factor in whether visitors call or leave. Good blog posts may help inspire trust, but the “home page + services page + about page + reviews” combination remains the reason the phone rings. So as long as those pages were written by a well-informed human, an AI-generated and human-edited blog would likely not drive visitors away.
Risk #2 was higher. Google had made clear that AI wasn’t its preferred flavor of content, and that it had the tools to detect AI-generated content. In fact, we had run the OpenAI detection tool on the content generated by the AI platforms we tested, and ALL had failed the sniff test: their content was flagged as “fake” (vs. “real human”) with a probability of 80–99%.
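The screening step above boils down to a simple threshold filter. Here is a minimal sketch of that logic; the draft names and scores are illustrative stand-ins, not real detector output, and the 0.80 cutoff mirrors the low end of the 80–99% range observed in our tests:

```python
# Hedged sketch: flag drafts whose "fake" (AI-generated) probability
# meets or exceeds a threshold. Scores are hypothetical examples.

FAKE_THRESHOLD = 0.80  # assumption: treat >= 80% as "detected as AI"

def flag_ai_drafts(scores: dict) -> list:
    """Return names of drafts whose fake-probability crosses the threshold."""
    return [name for name, p in scores.items() if p >= FAKE_THRESHOLD]

# Illustrative scores for three drafts run through a detection tool:
draft_scores = {
    "services-page": 0.99,     # raw AI output
    "blog-post-edited": 0.85,  # lightly edited AI output
    "about-page-human": 0.12,  # written by a human
}

print(flag_ai_drafts(draft_scores))
```

In our tests, every platform’s output would have landed in the flagged list.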
So assuming Google would do a better job than OpenAI at sniffing out AI-generated content (after all, Google has a bigger dataset than OpenAI, and NLP has been a Google specialty since 2012), there was no chance in hell of escaping the fate Mighty G reserves for low-quality content.
We rated Risk #3 moderate-to-high: the industry-specific data OpenAI draws on is limited, and if hundreds of SEOs and copywriters tap the same data for AI-written content, that content will be repetitive. Much also depends on users’ ability to refine their prompts for more precise output. The data served first seemed trite and fairly general; it would take work on the prompts to get better, more informative results.