CNET, a massively popular tech news outlet, has been quietly enlisting the help of “automation technology” — a stylistic euphemism for AI — on a new wave of financial explainer articles, seemingly starting around November of last year.
There was no formal announcement or coverage; the practice appears to have been first spotted by online marketer Gael Breton in a tweet on Wednesday.
The articles are published under the unassuming appellation of “CNET Money Staff,” and encompass topics like “Should You Break an Early CD for a Better Rate?” or “What is Zelle and How Does It Work?”
That byline obviously does not paint the full picture, so your average reader visiting the site would likely have no idea that what they’re reading is AI-generated. It’s only when you click on “CNET Money Staff” that the actual “authorship” is revealed.
“This article was generated using automation technology,” reads a dropdown description, “and thoroughly edited and fact-checked by an editor on our editorial staff.”
Since the program began, CNET has put out around 73 AI-generated articles. That’s not a whole lot for a site that big, and absent an official announcement of the program, it appears leadership is trying to keep the experiment as low-key as possible. CNET did not respond to questions about the AI-generated articles.
Based on Breton’s observations, though, some of the articles appear to be pulling in large amounts of traffic.
But AI usage is not limited to those kinds of bottom-of-the-barrel outlets. Even the prestigious news agency The Associated Press has been using AI since 2015 to automatically write many thousands of earnings reports. The AP has even proudly proclaimed itself “one of the first news organizations to leverage artificial intelligence.”
It’s worth noting, however, that the AP’s auto-generated material appears to be essentially filling in blanks in predetermined formats, whereas the more sophisticated verbiage of CNET’s publications suggests that it’s using something more akin to OpenAI’s GPT-3.
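The distinction matters: the AP-style approach is templated, not generative. A minimal sketch of what such "fill in the blanks" automation looks like in practice (the template wording and field names here are hypothetical, not the AP's actual system):

```python
# Hypothetical template in the spirit of automated earnings reports:
# fixed prose with slots that get filled from structured data.
TEMPLATE = (
    "{company} reported {quarter} earnings of ${eps:.2f} per share, "
    "{direction} analyst expectations of ${expected_eps:.2f}."
)

def fill_report(company: str, quarter: str, eps: float, expected_eps: float) -> str:
    # The only "decision" the system makes is a simple comparison;
    # everything else is string substitution.
    direction = "beating" if eps > expected_eps else "missing"
    return TEMPLATE.format(
        company=company, quarter=quarter,
        eps=eps, expected_eps=expected_eps, direction=direction,
    )

print(fill_report("Acme Corp", "Q3", 1.42, 1.30))
# → Acme Corp reported Q3 earnings of $1.42 per share,
#   beating analyst expectations of $1.30.
```

A model like GPT-3, by contrast, composes novel sentences token by token rather than slotting numbers into fixed prose, which is why CNET's articles read less formulaically.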
The source article leans into the usual fearmongering about AI and insists that you must check, and care, whether a piece was written by a human. To me, though, this looks like a good way of pairing current AI with human editors to produce good content.