Fallout continues from fake net neutrality comments

Three digital marketing firms have agreed to pay $615,000 to resolve allegations that they submitted at least 2.4 million fake public comments to influence American internet policy.

New York Attorney General Letitia James last week announced an agreement with LCX, Lead ID, and Ifficient, each of which was found to have fabricated public comments submitted in 2017 to convince the Federal Communications Commission (FCC) to repeal net neutrality.

Net neutrality refers to a policy requiring internet service providers to treat people’s internet traffic more or less equally, which some ISPs opposed because they would have preferred to act as gatekeepers in a pay-to-play regime. The neutrality rules were passed in 2015 at a time when it was feared large internet companies would eventually eradicate smaller rivals by bribing ISPs to prioritize their connections and downplay the competition.

[…]

In 2017 Ajit Pai, appointed chairman of the FCC by the Trump administration, successfully spearheaded an effort to tear up those rules and remake US net neutrality so it would be more amenable to broadband giants. And there was a public comment period on the initiative.

It was a massive sham. The Office of the Attorney General (OAG) investigation [PDF] found that 18 million of 22 million comments submitted to the FCC were fake, both for and against net neutrality.

The broadband industry’s attempt in 2017 to have the FCC repeal the net neutrality rules accounted for more than 8.5 million fake comments at a cost of $4.2 million.

“The effort was intended to create the appearance of widespread grassroots opposition to existing net neutrality rules, which — as described in an internal campaign planning document — would help provide ‘cover’ for the FCC’s proposed repeal,” the report explained.

The report also stated an unidentified 19-year-old was responsible for more than 7.7 million of 9.3 million fake comments opposing the repeal of net neutrality. These were generated using software that fabricated identities. The origin of the other 1.6 million fake comments is unknown.

LCX, Lead ID, and Ifficient were said to have taken a different approach, one that allegedly involved reuse of old consumer data from different marketing or advocacy campaigns, purchased or obtained through misrepresentation. LCX is said to have obtained some of its data from “a large data breach file found on the internet.”

[…]

This was the second such agreement for the state of New York, which two years ago got a different set of digital marketing firms – Fluent, Opt-Intelligence, and React2Media – to pay $4.4 million to disgorge funds earned for distributing about 5.4 million fake public comments related to the FCC’s net neutrality process.

[…]

astroturfing – corporate messaging masquerading as grassroots public opinion.

[…]

“no federal laws or regulations exist that limit a public relations firm’s ability to engage in astroturfing.”

[…]

Source: Fallout continues from ‘fake net neutrality comment’ claims • The Register

Ex-Ubiquiti engineer behind “breathtaking” data theft, attempts to frame co-workers, calls it a security drill, assaults stock price: 6-year prison term

An ex-Ubiquiti engineer, Nickolas Sharp, was sentenced to six years in prison yesterday after pleading guilty in a New York court to stealing tens of gigabytes of confidential data, demanding a $1.9 million ransom from his former employer, and then publishing the data publicly when his demands were refused.

[…]

In a court document, Sharp claimed that Ubiquiti CEO Robert Pera had prevented Sharp from “resolving outstanding security issues,” and Sharp told the judge that this led to an “idiotic hyperfixation” on fixing those security flaws.

However, even if that was Sharp’s true motivation, Judge Failla did not accept his justification of his crimes, which include wire fraud, intentionally damaging protected computers, and lying to the FBI.

“It was not up to Mr. Sharp to play God in this circumstance,” Failla said.

US attorney for the Southern District of New York, Damian Williams, argued that Sharp was not a “cybersecurity vigilante” but an “inveterate liar and data thief” who was “presenting a contrived deception to the Court that this entire offense was somehow just a misguided security drill.” Williams said that Sharp made “dozens, if not hundreds, of criminal decisions” and even implicated innocent co-workers to “divert suspicion.” Sharp had also already admitted in pre-sentencing that the cyber attack was planned for “financial gain.” Williams said Sharp did it seemingly out of “pure greed” and ego because he “felt mistreated”—overworked and underpaid—by the IT company.

Court documents show that Ubiquiti spent “well over $1.5 million dollars and hundreds of hours of employee and consultant time” trying to remediate what Williams described as Sharp’s “breathtaking” theft. But the company lost much more than that when Sharp attempted to conceal his crimes—posing as a whistleblower, planting false media reports, and contacting US and foreign regulators to investigate Ubiquiti’s alleged downplaying of the data breach. Within a day of Sharp planting the false reports, Ubiquiti’s stock plummeted, erasing over $4 billion of its market capitalization, court documents show.

[…]

In his sentencing memo, Williams said that Sharp’s characterization of the cyberattack as a security drill does not align with the timeline of events leading up to his arrest in December 2021. The timeline instead appears to reveal a calculated plan to conceal the data theft and extort nearly $2 million from Ubiquiti.

Sharp began working as a Ubiquiti senior software engineer and “Cloud Lead” in 2018, where he was paid $250,000 annually and had tasks including software development and cloud infrastructure security. About two years into the gig, Sharp purchased a VPN subscription to Surfshark in July 2020 and then seemingly began hunting for another job. By December 9, 2020, he’d lined up another job. The next day, he used his Ubiquiti security credentials to test his plan to copy data repositories while masking his IP address by using Surfshark.

Less than two weeks later, Sharp executed his plan, and he might have gotten away with it if not for a “slip-up” he never could have foreseen. While copying approximately 155 data repositories, an Internet outage temporarily disabled his VPN. When Internet service was restored, unbeknownst to Sharp, Ubiquiti logged his home IP address before the VPN tool could turn back on.
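The mechanism behind that slip-up is worth spelling out: a VPN client without a working “kill switch” lets traffic silently fall back to the bare home connection when the tunnel drops. A minimal sketch of the guard that was missing — purely illustrative, reflecting neither Sharp’s actual tooling nor Surfshark’s client:

```python
# Illustrative only: a transfer loop that re-checks its public exit IP
# before each chunk. If the tunnel fails, the exit IP changes and the
# loop stops instead of letting the server log the real home address.

def transfer_chunks(chunks, get_exit_ip, vpn_exit_ip, send):
    """Send chunks only while traffic still exits via the VPN address."""
    sent = []
    for chunk in chunks:
        if get_exit_ip() != vpn_exit_ip:
            # Tunnel is down: continuing would expose the home IP.
            break
        send(chunk)
        sent.append(chunk)
    return sent
```

Without a check like this (or an OS-level firewall rule blocking non-VPN routes), a brief outage mid-copy is enough to stamp the real address into the server’s logs — which is exactly what Ubiquiti recorded.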

Two days later, Sharp was so bold as to ask a senior cybersecurity employee if he could be paid for submitting vulnerabilities to the company’s HackerOne bug bounty program, which seemed suspicious, court documents show. Still unaware of his slip-up, through December 26, 2020, Sharp continued to access company data using Surfshark, actively covering his tracks by deleting evidence of his activity within a day and modifying evidence to make it seem like other Ubiquiti employees were using the credentials he used during the attack.

Sharp only stopped accessing the data when other employees discovered evidence of the attack on December 28, 2020. Seemingly unfazed, Sharp joined the team investigating the attack before sending his ransom email on January 7, 2021.

Ubiquiti chose not to pay the ransom and instead got the FBI involved. Soon after, Sharp’s slip-up showing his home IP put the FBI on his trail. At work, Sharp suggested his home IP was logged in an attempt to frame him, telling coworkers, “I’d be pretty fucking incompetent if I left my IP in [the] thing I requested, downloaded, and uploaded” and saying that would be the “shittiest cover up ever lol.”

While the FBI analyzed all of Sharp’s work devices, Sharp wiped and reset the laptop he used in the attack but brazenly left it at home, where it was seized during an FBI search conducted under warrant in March 2021.

After the FBI search, Sharp began posing as a whistleblower, contacting journalists and regulators to falsely warn that Ubiquiti’s public disclosure and response to the cyberattack were insufficient. He said the company had deceived customers and downplayed the severity of the breach, which was actually “catastrophic.” The whole time, Williams noted in his sentencing memo, Sharp knew that the attack had been accomplished using his own employee credentials.

This was “far from a hacker targeting a vulnerability open to third parties,” Williams said. “Sharp used credentials legitimately entrusted to him by the company, to steal data and cover his tracks.”

“At every turn, Sharp acted consistent with the unwavering belief that his sophistication and cunning were sufficient to deceive others and conceal his crime,” Williams said.

[…]

Source: Ex-Ubiquiti engineer behind “breathtaking” data theft gets 6-year prison term | Ars Technica

Fake scientific papers are alarmingly common and becoming more so

When neuropsychologist Bernhard Sabel put his new fake-paper detector to work, he was “shocked” by what it found. After screening some 5000 papers, he estimates up to 34% of neuroscience papers published in 2020 were likely made up or plagiarized; in medicine, the figure was 24%. Both numbers, which he and colleagues report in a medRxiv preprint posted on 8 May, are well above levels they calculated for 2010—and far larger than the 2% baseline estimated in a 2022 publishers’ group report.

[…]

Journals are awash in a rising tide of scientific manuscripts from paper mills—secretive businesses that allow researchers to pad their publication records by paying for fake papers or undeserved authorship. “Paper mills have made a fortune by basically attacking a system that has had no idea how to cope with this stuff,” says Dorothy Bishop, a University of Oxford psychologist who studies fraudulent publishing practices. A 2 May announcement from the publisher Hindawi underlined the threat: It shut down four of its journals it found were “heavily compromised” by articles from paper mills.

Sabel’s tool relies on just two indicators—authors who use private, noninstitutional email addresses, and those who list an affiliation with a hospital. It isn’t a perfect solution, because of a high false-positive rate. Other developers of fake-paper detectors, who often reveal little about how their tools work, contend with similar issues.
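As reported, the screen reduces to two boolean checks. A toy sketch of that rule — the real detector’s thresholds and data fields are unpublished, so the suffix list and field names here are hypothetical:

```python
# Toy version of the two indicators the article attributes to Sabel's
# detector: a non-institutional author email, or a hospital affiliation.
# The suffix whitelist is illustrative, not the tool's actual list.

INSTITUTIONAL_SUFFIXES = (".edu", ".ac.uk", ".ac.jp")  # illustrative

def flag_paper(author_email: str, affiliation: str) -> bool:
    """Flag if the email looks non-institutional OR affiliation is a hospital."""
    noninstitutional = not author_email.lower().endswith(INSTITUTIONAL_SUFFIXES)
    hospital = "hospital" in affiliation.lower()
    return noninstitutional or hospital
```

The crudeness is the point: any honest researcher who happens to use a Gmail address, or any legitimate clinician-scientist at a teaching hospital, trips the same wire — which is where the high false-positive rate comes from.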

[…]

To fight back, the International Association of Scientific, Technical, and Medical Publishers (STM), representing 120 publishers, is leading an effort called the Integrity Hub to develop new tools. STM is not revealing much about the detection methods, to avoid tipping off paper mills. “There is a bit of an arms race,” says Joris van Rossum, the Integrity Hub’s product director. He did say one reliable sign of a fake is referencing many retracted papers; another involves manuscripts and reviews emailed from internet addresses crafted to look like those of legitimate institutions.
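The lookalike-address sign van Rossum describes can be approximated with a string-similarity check against known institutional domains: an exact match is fine, while a near miss (a character or two off) is suspicious. A rough sketch with an illustrative whitelist and threshold — STM’s actual methods are, as the article notes, undisclosed:

```python
# Rough heuristic: a sender domain that is *almost* a known institutional
# domain, but not quite, may be crafted to impersonate it.
# Whitelist and threshold are illustrative, not STM's real parameters.
from difflib import SequenceMatcher

KNOWN_DOMAINS = {"ox.ac.uk", "stanford.edu", "mpg.de"}  # example whitelist

def looks_like_spoof(domain: str, known=KNOWN_DOMAINS, threshold=0.85) -> bool:
    domain = domain.lower()
    if domain in known:
        return False  # exact match: legitimate
    # Very similar but not identical: possibly crafted to look institutional.
    return any(SequenceMatcher(None, domain, k).ratio() >= threshold
               for k in known)
```

An unrelated domain like a generic webmail provider scores low against every entry and passes; only the deliberate near-copies get flagged.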

Twenty publishers—including the largest, such as Elsevier, Springer Nature, and Wiley—are helping develop the Integrity Hub tools, and 10 of the publishers are expected to use a paper mill detector the group unveiled in April. STM also expects to pilot a separate tool this year that detects manuscripts simultaneously sent to more than one journal, a practice considered unethical and a sign they may have come from paper mills.

[…]

STM hasn’t yet generated figures on accuracy or false-positive rates because the project is too new. But catching as many fakes as possible typically produces more false positives. Sabel’s tool correctly flagged nearly 90% of fraudulent or retracted papers in a test sample. However, it marked up to 44% of genuine papers as fake, so results still need to be confirmed by skilled reviewers.
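Those two figures explain why human confirmation is unavoidable: at a low prevalence of fakes, a 44% false-positive rate swamps even ~90% recall. A quick calculation using the article’s numbers together with the 2% baseline mentioned earlier:

```python
# Precision of a flagging tool at low prevalence.
# recall=0.90 and fp_rate=0.44 are the article's figures for Sabel's
# tool; prevalence=0.02 is the publishers' 2022 baseline estimate.

def flag_counts(n_papers, prevalence, recall, fp_rate):
    fakes = n_papers * prevalence
    genuine = n_papers - fakes
    true_flags = recall * fakes       # fakes correctly caught
    false_flags = fp_rate * genuine   # genuine papers wrongly flagged
    precision = true_flags / (true_flags + false_flags)
    return true_flags, false_flags, precision

tp, fp, prec = flag_counts(1000, 0.02, 0.90, 0.44)
# per 1000 papers: ~18 fakes caught vs ~431 genuine papers flagged,
# so only about 4% of flags are actually fakes
```

In a 2020-level field where a quarter or a third of papers are suspect, the arithmetic looks far better — but against an honest literature, nearly every flag the tool raises is a false alarm.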

[…]

Publishers embracing gold open access—under which journals collect a fee from authors to make their papers immediately free to read when published—have a financial incentive to publish more, not fewer, papers. They have “a huge conflict of interest” regarding paper mills, says Jennifer Byrne of the University of Sydney, who has studied how paper mills have doctored cancer genetics data.

The “publish or perish” pressure that institutions put on scientists is also an obstacle. “We want to think about engaging with institutions on how to take away perhaps some of the [professional] incentives which can have these detrimental effects,” van Rossum says. Such pressures can push clinicians without research experience to turn to paper mills, Sabel adds, which is why hospital affiliations can be a red flag.

[…]

Source: Fake scientific papers are alarmingly common | Science | AAAS

A closed approach to building a detection tool is an incredibly bad idea: no one outside can really know what it is doing, and certain kinds of legitimate research will be flagged every time. A tool like this especially needs to be accountable to, and correctable by, the peers who have to review the papers it spits out as suspect. Only by keeping it open can it be improved by third parties who also have a vested interest in raising detection rates — universities, for example, which you would think have quite some smart people. Keeping it closed also lends a false sense of security, especially if the detection methods have already leaked and paper mills from certain sources are already circumventing them. Security by obscurity is never, ever a good idea.

Millions of mobile phones come pre-infected with malware

Miscreants have infected millions of Androids worldwide with malicious firmware before the devices even shipped from their factories, according to Trend Micro researchers at Black Hat Asia.

This hardware is mainly cheapo Android mobile devices, though smartwatches, TVs, and other things are caught up in it.

The gadgets have their manufacturing outsourced to an original equipment manufacturer (OEM). That outsourcing makes it possible for someone in the manufacturing pipeline – such as a firmware supplier – to infect products with malicious code as they ship out, the researchers said.

This has been going on for a while, we think; for example, we wrote about a similar headache in 2017. The Trend Micro folks characterized the threat today as “a growing problem for regular users and enterprises.” So, consider this a reminder and a heads-up all in one.

[…]

This insertion of malware began as the price of mobile phone firmware dropped, we’re told. Competition between firmware distributors became so furious that eventually the providers could not charge money for their product.

“But of course there’s no free stuff,” said Trend Micro researcher Fyodor Yarochkin, who explained that, as a result of this cut-throat situation, firmware started to come with an undesirable feature: silent plugins. The team analyzed dozens of firmware images looking for malicious software, and found over 80 different plugins, although many of those were not widely distributed.

The most impactful plugins were those with a business model built around them, sold on underground markets and marketed in the open on places like Facebook, blogs, and YouTube.

The malware’s objective is to steal information, or to make money from the information it collects or delivers.

The malware turns devices into proxies that are used to steal and sell SMS messages, take over social media and online messaging accounts, and serve as monetization channels via adverts and click fraud.

One type of plugin, the proxy plugin, allows criminals to rent out devices for up to around five minutes at a time. Those renting control of a device could, for example, acquire data on keystrokes, geographical location, IP address, and more.

[…]

Through telemetry data, the researchers estimated that millions of infected devices exist globally, concentrated in Southeast Asia and Eastern Europe. The criminals themselves, the researchers said, self-reported a figure of around 8.9 million.

As for where the threats are coming from, the duo wouldn’t say specifically, although the word “China” showed up multiple times in the presentation, including in an origin story related to the development of the dodgy firmware. Yarochkin said the audience should consider where most of the world’s OEMs are located and make their own deductions.

“Even though we possibly might know the people who build the infrastructure for this business, it’s difficult to pinpoint how exactly this infection gets put into this mobile phone, because we don’t know for sure at what moment it got into the supply chain,” said Yarochkin.

The team confirmed the malware was found in the phones of at least 10 vendors, and said around 40 more were possibly affected. Those seeking to avoid infected mobile phones can go some way toward protecting themselves by buying higher-end devices.

[…]

“Big brands like Samsung, like Google took care of their supply chain security relatively well, but for threat actors, this is still a very lucrative market,” said Yarochkin.

Source: Millions of mobile phones come pre-infected with malware • The Register

Black Hat presentation: Behind the Scenes: How Criminal Enterprises Pre-infect Millions of Mobile Devices