Amazon copied products and rigged search results, documents show

Amazon.com Inc has been repeatedly accused of knocking off products it sells on its website and of exploiting its vast trove of internal data to promote its own merchandise at the expense of other sellers. The company has denied the accusations.

But thousands of pages of internal Amazon documents examined by Reuters – including emails, strategy papers and business plans – show the company ran a systematic campaign of creating knockoffs and manipulating search results to boost its own product lines in India, one of the company’s largest growth markets.

The documents reveal how Amazon’s private-brands team in India secretly exploited internal data from Amazon.in to copy products sold by other companies, and then offered them on its platform. The employees also stoked sales of Amazon private-brand products by rigging Amazon’s search results so that the company’s products would appear, as one 2016 strategy report for India put it, “in the first 2 or three … search results” when customers were shopping on Amazon.in.

Among the victims of the strategy: a popular shirt brand in India, John Miller, which is owned by a company whose chief executive is Kishore Biyani, known as the country’s “retail king.” Amazon decided to “follow the measurements of” John Miller shirts down to the neck circumference and sleeve length, one internal document states.

[…]

Source: Amazon copied products and rigged search results, documents show

LANtenna attack reveals Ethernet cable traffic contents from a distance

An Israeli researcher has demonstrated that LAN cables’ radio frequency emissions can be read by using a $30 off-the-shelf setup, potentially opening the door to fully developed cable-sniffing attacks.

Mordechai Guri of Israel’s Ben Gurion University of the Negev described the disarmingly simple technique to The Register: put an ordinary radio antenna up to four metres from a category 6A Ethernet cable and use an off-the-shelf software defined radio (SDR) to listen around 250MHz.
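
As a rough illustration, capturing those emissions with a cheap RTL-SDR dongle might look like the following Python sketch using the pyrtlsdr bindings. The sample rate and gain here are guesses, not values from Guri’s paper, and a real decoder would need demodulation logic on top of this raw power measurement.

from rtlsdr import RtlSdr   # pip install pyrtlsdr
import numpy as np

sdr = RtlSdr()
sdr.sample_rate = 2.4e6          # IQ samples per second (illustrative)
sdr.center_freq = 250e6          # listen around 250MHz, as in the article
sdr.gain = 'auto'

samples = sdr.read_samples(256 * 1024)   # raw complex IQ samples
sdr.close()

power = np.abs(samples) ** 2             # instantaneous emission strength
print(f"mean RF power near the cable: {power.mean():.6f}")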

“From an engineering perspective, these cables can be used as antennas and used for RF transmission to attack the air-gap,” said Guri.

His experimental technique consisted of slowing UDP packet transmissions over the target cable to a very low speed and then transmitting single letters of the alphabet. The cable’s emissions could then be picked up by the SDR (in Guri’s case, both an R820T2-based tuner and a HackRF unit) and, via a simple algorithm, turned back into human-readable characters.
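
The transmitting side needs nothing more exotic than a slow trickle of UDP packets. The sketch below is a hypothetical reconstruction assuming simple on-off keying, where a burst of packets encodes a 1 bit and silence encodes a 0; the destination address, bit duration, and framing are invented for illustration and are not taken from Guri’s paper.

import socket
import time

BIT_DURATION = 1.0        # seconds per bit: deliberately very slow
PAYLOAD = b'\x00' * 1024  # packet contents are irrelevant; the timing carries the data

def send_bit(sock, bit, dest=('192.0.2.10', 9999)):  # placeholder address
    """Burst UDP packets for a '1' bit (cable radiates); stay idle for a '0'."""
    deadline = time.monotonic() + BIT_DURATION
    if bit == '1':
        while time.monotonic() < deadline:
            sock.sendto(PAYLOAD, dest)   # sustained traffic -> RF emission
    else:
        time.sleep(BIT_DURATION)         # quiet cable -> no emission

def send_letter(sock, char):
    for bit in format(ord(char), '08b'):  # one letter of the alphabet at a time
        send_bit(sock, bit)

if __name__ == '__main__':
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for letter in 'HELLO':
        send_letter(s, letter)
    s.close()

On the receiving side, thresholding the SDR’s measured power over one-second windows would recover the bit stream.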

Nicknamed LANtenna, Guri’s technique is an academic proof of concept and not a fully fledged attack that could be deployed today. Nonetheless, the research shows that poorly shielded cables have the potential to leak information that sysadmins may have believed was secure or otherwise air-gapped from the outside world.

He added that his setup’s $1 antenna was a big limiting factor and that specialised antennas could well reach “tens of metres” of range.

“We could transmit both text and binary, and also achieve faster bit-rates,” acknowledged Guri when El Reg asked about the obvious limitations described in his paper [PDF]. “However, due to environmental noises (e.g. from other cables) higher bit-rates are rather theoretical and not practical in all scenarios.”

[…]

Source: LANtenna attack reveals Ethernet cable traffic contents • The Register

Amazon accused of copying merchant products in India

When asked in July 2020 by US Representative Pramila Jayapal (D-WA) whether Amazon ever mined data from its third-party vendors to launch competing products, founder and then-CEO Jeff Bezos said he couldn’t answer “yes” or “no,” but insisted Amazon had rules disallowing the practice.

“What I can tell you is we have a policy against using seller-specific data to aid our private label business but I can’t guarantee that policy has never been violated,” Bezos said.

According to documents obtained by Reuters, Amazon’s employees in India flouted that policy by copying the products of Amazon marketplace sellers for its in-house brands and then manipulating search results on Amazon’s website to place its knockoffs at the top of search results lists.

“The documents reveal how Amazon’s private-brands team in India secretly exploited internal data from Amazon.in to copy products sold by other companies, and then offered them on its platform,” said Reuters reporters Aditya Kalra and Steve Stecklow in a report published on Wednesday.

“The employees also stoked sales of Amazon private-brand products by rigging Amazon’s search results so that the company’s products would appear, as one 2016 strategy report for India put it, ‘in the first 2 or three … search results’ when customers were shopping on Amazon.in.”

Last year, the Wall Street Journal published similar allegations that the company used third-party merchant data to develop competing products, which prompted Rep. Jayapal’s question to Bezos. Such claims are central to the ongoing antitrust investigations of Amazon being conducted in the US, Europe, and India.

[…]

Source: Amazon accused of copying merchant products in India • The Register

AI Fake-Face Generators Can Be Rewound To Reveal the Real Faces They Trained On

Load up the website This Person Does Not Exist and it’ll show you a human face, near-perfect in its realism yet totally fake. Refresh and the neural network behind the site will generate another, and another, and another. The endless sequence of AI-crafted faces is produced by a generative adversarial network (GAN) — a type of AI that learns to produce realistic but fake examples of the data it is trained on. But such generated faces — which are starting to be used in CGI movies and ads — might not be as unique as they seem. In a paper titled This Person (Probably) Exists (PDF), researchers show that many faces produced by GANs bear a striking resemblance to actual people who appear in the training data. The fake faces can effectively unmask the real faces the GAN was trained on, making it possible to expose the identity of those individuals. The work is the latest in a string of studies that call into doubt the popular idea that neural networks are “black boxes” that reveal nothing about what goes on inside.

To expose the hidden training data, Ryan Webster and his colleagues at the University of Caen Normandy in France used a type of attack called a membership attack, which can be used to find out whether certain data was used to train a neural network model. These attacks typically take advantage of subtle differences between the way a model treats data it was trained on — and has thus seen thousands of times before — and unseen data. For example, a model might identify a previously unseen image accurately, but with slightly less confidence than one it was trained on. A second, attacking model can learn to spot such tells in the first model’s behavior and use them to predict when certain data, such as a photo, is in the training set or not.
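
As a toy illustration of that confidence gap, a crude membership test on an ordinary classifier might look like the sketch below. Here `model` is assumed to be any scikit-learn-style classifier exposing predict_proba, and the threshold is invented; real attacks, as described above, typically train a second attacking model on such statistics rather than hand-picking a cutoff.

import numpy as np

def membership_score(model, x):
    """Peak predicted probability: training members tend to score near 1.0."""
    probs = model.predict_proba(np.asarray(x).reshape(1, -1))[0]
    return probs.max()

def looks_like_training_data(model, x, threshold=0.95):
    # The attack exploits the gap: data the model has seen thousands of
    # times is classified with slightly higher confidence than unseen data.
    return membership_score(model, x) >= threshold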

Such attacks can lead to serious security leaks. For example, finding out that someone’s medical data was used to train a model associated with a disease might reveal that this person has that disease. Webster’s team extended this idea so that instead of identifying the exact photos used to train a GAN, they identified photos in the GAN’s training set that were not identical but appeared to portray the same individual — in other words, faces with the same identity. To do this, the researchers first generated faces with the GAN and then used a separate facial-recognition AI to detect whether the identity of these generated faces matched the identity of any of the faces seen in the training data. The results are striking. In many cases, the team found multiple photos of real people in the training data that appeared to match the fake faces generated by the GAN, revealing the identity of individuals the AI had been trained on.
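
A schematic version of that pipeline might read as follows, with `gan.generate()` and `embed()` standing in for the GAN sampler and the separate facial-recognition network. Both names, along with the similarity threshold, are hypothetical stand-ins rather than the paper’s actual code.

import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def find_identity_matches(gan, embed, training_images, n_fakes=1000, tau=0.6):
    """Pair generated faces with training photos that appear to share their identity."""
    train_embs = [(img, embed(img)) for img in training_images]
    matches = []
    for _ in range(n_fakes):
        fake = gan.generate()       # sample a fake face from the GAN
        f = embed(fake)             # map it into identity-embedding space
        for img, e in train_embs:
            if cosine_similarity(f, e) >= tau:  # above tau: treat as the same person
                matches.append((fake, img))
    return matches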

Source: AI Fake-Face Generators Can Be Rewound To Reveal the Real Faces They Trained On – Slashdot