AIs can generate fake reviews that are indistinguishable from real ones, both to humans and to automated fake-review detectors

Fake reviews used to be crowdsourced. Now they can be auto-generated by AI, according to a new research paper shared by AmiMoJo:
In this paper, we identify a new class of attacks that leverage deep learning language models (Recurrent Neural Networks or RNNs) to automate the generation of fake online reviews for products and services. Not only are these attacks cheap and therefore more scalable, but they can control the rate of content output to eliminate the signature burstiness that makes crowdsourced campaigns easy to detect. Using Yelp reviews as an example platform, we show how a two-phase review generation and customization attack can produce reviews that are indistinguishable by state-of-the-art statistical detectors.

Humans marked these AI-generated reviews as useful at approximately the same rate as they did for real (human-authored) Yelp reviews.
Slashdot
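
To make the mechanics concrete, here is a minimal, hypothetical sketch of phase one of such an attack: a character-level RNN language model sampled with a temperature parameter, the general technique the paper describes. The model class, names, and hyperparameters below are illustrative assumptions, not the authors' code, and an untrained model like this emits gibberish; the attack depends on first training it on a large corpus of real reviews. (The paper's second phase then customizes the sampled text for a specific target.)

```python
# Minimal sketch of character-level RNN review generation (assumes PyTorch).
# Illustrative only: not the paper's actual code, and untrained weights
# produce gibberish. In the attack, the model is first trained on a large
# corpus of real reviews.
import string
import torch
import torch.nn as nn

VOCAB = string.printable                      # character vocabulary
CHAR_TO_IX = {c: i for i, c in enumerate(VOCAB)}

class CharRNN(nn.Module):
    def __init__(self, vocab_size, hidden_size=512, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.lstm = nn.LSTM(hidden_size, hidden_size, num_layers, batch_first=True)
        self.out = nn.Linear(hidden_size, vocab_size)

    def forward(self, x, state=None):
        h, state = self.lstm(self.embed(x), state)
        return self.out(h), state

def sample_review(model, seed="The food was", length=300, temperature=0.7):
    """Sample text one character at a time from the model's distribution."""
    model.eval()
    ix = torch.tensor([[CHAR_TO_IX[c] for c in seed]])
    state, chars = None, list(seed)
    with torch.no_grad():
        logits, state = model(ix, state)
        for _ in range(length):
            # Temperature rescales the logits before sampling.
            probs = torch.softmax(logits[0, -1] / temperature, dim=-1)
            next_ix = torch.multinomial(probs, 1).item()
            chars.append(VOCAB[next_ix])
            logits, state = model(torch.tensor([[next_ix]]), state)
    return "".join(chars)

model = CharRNN(len(VOCAB))   # in the attack, weights come from training on real reviews
print(sample_review(model))
```

The temperature knob is what makes the output tunable: lower values yield safer, more repetitive text, while higher values yield more varied but error-prone text. And because generation is a loop rather than a crowd of paid workers, the attacker can pace output to avoid the burstiness that detectors key on.
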

Companies increasingly use software limitations to screw customers over and kill competition

What began with printers and spread to phones is coming to everything: this kind of technology has proliferated to smart thermostats (no apps that let you turn your AC cooler when the power company dials it up a couple degrees), tractors (no buying your parts from third-party companies), cars (no taking your GM to an independent mechanic), and many categories besides.

All these forms of cheating treat the owner of the device as an enemy of the company that made or sold it, to be thwarted, tricked, or forced into conducting their affairs in the best interest of the company’s shareholders. To do this, they run programs and processes that attempt to hide themselves and their nature from their owners, and proxies for their owners (like reviewers and researchers).

Increasingly, cheating devices behave differently depending on who is looking at them. When they believe themselves to be under close scrutiny, their behavior reverts to a more respectable, less egregious standard.
[…]
The Computer Fraud and Abuse Act (1986) makes it a crime, with jail-time, to violate a company’s terms of service. Logging into a website under a fake ID to see if it behaves differently depending on who it is talking to is thus a potential felony, provided that doing so is banned in the small-print clickthrough agreement when you sign up.

Then there’s section 1201 of the Digital Millennium Copyright Act (1998), which makes it a felony to bypass the software that controls access to a copyrighted work. Since all software is copyrightable, and since every smart gadget contains software, this allows manufacturers to threaten jail-terms for anyone who modifies their tractors to accept third-party carburetors (just add a software-based check to ensure that the part came from John Deere and not a rival), or changes their phone to accept an independent app store, or downloads some code to let them choose generic insulin for their implanted insulin pump.

Cory Doctorow
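
Doctorow’s parenthetical about John Deere gestures at how simple such a lock can be. Below is a hypothetical sketch of that kind of software-based part check: firmware accepts a replacement part only if its ID carries a valid manufacturer signature. The HMAC scheme, key, and names are illustrative assumptions, not any real vendor’s firmware.

```python
# Hypothetical sketch of a "software-based check" on replacement parts:
# firmware refuses any part whose ID lacks a valid manufacturer signature.
# Nothing here is from any real vendor; the scheme and names are assumptions.
import hmac
import hashlib

MANUFACTURER_KEY = b"secret-key-held-only-by-the-OEM"

def sign_part(part_id: str) -> bytes:
    """What the factory does: tag each genuine part with an HMAC of its ID."""
    return hmac.new(MANUFACTURER_KEY, part_id.encode(), hashlib.sha256).digest()

def firmware_accepts(part_id: str, tag: bytes) -> bool:
    """What the device does: verify the tag. A third-party part cannot
    forge it without the OEM's key."""
    expected = hmac.new(MANUFACTURER_KEY, part_id.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

genuine = ("OEM-CARB-001", sign_part("OEM-CARB-001"))
knockoff = ("RIVAL-CARB-9", b"\x00" * 32)   # no valid signature

print(firmware_accepts(*genuine))    # True  -> device runs
print(firmware_accepts(*knockoff))   # False -> device refuses the part
```

The lock itself is trivial; what gives it teeth is that bypassing it is the "circumvention" that section 1201 criminalizes.
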