India antitrust probe finds Google abused Android dominance

NEW DELHI, Sept 18 (Reuters) – Google abused the dominant position of its Android operating system in India, using its “huge financial muscle” to illegally hurt competitors, the country’s antitrust authority found in a report on its two-year probe seen by Reuters.

Alphabet Inc’s (GOOGL.O) Google reduced “the ability and incentive of device manufacturers to develop and sell devices operating on alternative versions of Android,” says the June report by the Competition Commission of India’s (CCI) investigations unit.

[…]

Its findings are the latest antitrust setback for Google in India, where it faces several probes in the payments app and smart television markets. The company has been investigated in Europe, the United States and elsewhere. This week, South Korea’s antitrust regulator fined Google $180 million for blocking customised versions of Android.

‘VAGUE, BIASED AND ARBITRARY’

Google submitted at least 24 responses during the probe, defending itself and arguing it was not hurting competition, the report says.

Microsoft Corp (MSFT.O), Amazon.com Inc (AMZN.O), Apple Inc (AAPL.O), as well as smartphone makers like Samsung and Xiaomi, were among 62 entities that responded to CCI questions during its Google investigation, the report says.

Android powers 98% of India’s 520 million smartphones, according to Counterpoint Research.

When the CCI ordered the probe in 2019, it said Google appeared to have leveraged its dominance to reduce device makers’ ability to opt for alternate versions of its mobile operating system and force them to pre-install Google apps.

The 750-page report finds the mandatory pre-installation of apps “amounts to imposition of unfair condition on the device manufacturers” in violation of India’s competition law, while the company leveraged the position of its Play Store app store to protect its dominance.

Play Store policies were “one-sided, ambiguous, vague, biased and arbitrary”, while Android has been “enjoying its dominant position” in licensable operating systems for smartphones and tablets since 2011, the report says.

The probe was triggered in 2019 after two Indian junior antitrust research associates and a law student filed a complaint, Reuters reported.

[…]

Source: India antitrust probe finds Google abused Android dominance, report shows | Reuters

MoD apologises after Afghan interpreters’ personal data exposed (yes, the ones still in Afghanistan)

The UK’s Ministry of Defence has launched an internal investigation after committing the classic CC-instead-of-BCC email error – but with the names and contact details of Afghan interpreters trapped in the Taliban-controlled nation.
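
The mechanism of the error is worth spelling out: addresses placed in the To or Cc headers travel with the message, so every recipient can read them, while Bcc addresses are used only for delivery. A minimal sketch in Python of the safe way to do a bulk mail-out; all addresses and the SMTP host are invented for illustration:

```python
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "arap@example.mod.uk"  # hypothetical sender address
msg["Subject"] = "Relocation update"
msg.set_content("Please do not put yourself or your family at risk.")

# WRONG: every recipient sees every other address placed in To or Cc.
# msg["Cc"] = ", ".join(interpreter_addresses)

# RIGHT: put bulk recipients in Bcc. smtplib's send_message() uses the
# Bcc header to route the mail but strips it before transmission, so
# recipients never see each other's addresses.
msg["To"] = "arap@example.mod.uk"  # addressed to self
msg["Bcc"] = ", ".join(["a@example.af", "b@example.af"])  # hypothetical

with smtplib.SMTP("smtp.example.mod.uk") as smtp:  # hypothetical server
    smtp.send_message(msg)
```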

The horrendous data breach took place yesterday, with Defence Secretary Ben Wallace promising an immediate investigation, according to the BBC.

Included in the breach were profile pictures associated with some email accounts, according to the state-owned broadcaster. The initial email was followed up by a second message urging people who had received the first one to delete it – a way of drawing close attention to an otherwise routine missive.

The email was reportedly sent by the British government’s Afghan Relocations and Assistance Policy (ARAP) unit, urging the interpreters not to put themselves or their families at risk. The ministry was said to have apologised for the “unacceptable breach.”

“This mistake could cost the life of interpreters, especially for those who are still in Afghanistan,” one source told the Beeb.

Since the US-led military coalition pulled out of Afghanistan at the end of August, there have been distressing scenes in the country as the ruling Taliban impose Islamic Sharia law – while hunting down and punishing those who helped the Western militaries. Some interpreters have reportedly been murdered, with others fearing for their lives and the well-being of their families.

[…]

Source: MoD apologises after Afghan interpreters’ data exposed • The Register

Facebook Documents Show It Fumbled the Fight Over Vaccines

The Wall Street Journal has had something of a banner week tearing down Facebook. Its series on a trove of internal company documents obtained by the paper has unveiled Facebook’s secret system for treating certain users as above the rules, company research showing how harmful Instagram is for young girls, how the site’s algorithmic solutions to toxic content have backfired, and that Facebook executives are slow to respond to reports of organized criminal activity. On Friday, it published another article detailing how badly Facebook has fumbled fighting anti-vax content and CEO Mark Zuckerberg’s campaign to get users vaccinated.

[…]

One big problem was that Facebook users were brigading any content addressing vaccination with anti-vax comments. Company researchers, according to the Journal, warned executives that comments on vaccine-related content were flooded with anti-vax propaganda, pseudo-scientific claims, and other false information and lies about the virus and the vaccines.

Global health institutions such as the World Health Organization (WHO) and Unicef had registered their concern with Facebook, with one internal company memo warning of “anti-vaccine commenters that swarm their Pages,” while another internal report in early 2021 made an initial estimate that up to 41% of comments on vaccine-related posts appeared to risk discouraging people from getting vaccinated (referred to within the company as “barrier to vaccination” content). That’s out of a pool of around 775 million vaccine-related comments seen by users daily.
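
Putting the report’s two figures together gives a sense of scale; the multiplication below is mine, not the report’s:

```python
daily_comments = 775_000_000  # vaccine-related comments seen daily, per the report
barrier_share = 0.41          # upper-bound share flagged as "barrier to vaccination"

print(f"{daily_comments * barrier_share:,.0f} potentially discouraging comments/day")
# -> 317,750,000
```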

[…]

Facebook had promised in 2019 to crack down on anti-vax content and summoned WHO reps to meet with tech leaders in February 2020. Zuckerberg personally got in contact with National Institute of Allergy and Infectious Diseases director Dr. Anthony Fauci to discuss funding vaccine trials, offer ad space and user data for government-run vaccination campaigns, and arrange a live Q&A between the two on the site. Facebook had also made adjustments to its content-ranking algorithm that a June 2020 memo claimed reduced health misinformation by 6.7% to 9.9%, the Journal wrote.

But by summer 2020, BS claims about the coronavirus and vaccines were going viral on the site, including the viral “Plandemic” video, a press conference staged by a group of right-wing weirdos calling themselves “America’s Frontline Doctors,” and a handful of anti-vax accounts such as Robert F. Kennedy Jr.’s that advocacy group Avaaz later identified as responsible for a wildly disproportionate share of the offending content. According to the Journal, Facebook was well aware that the phenomenon was being driven by a relatively small but determined and prolific segment of posters and group admins:

As the rollout of the vaccine began early this year, antivaccine activists took advantage of that stance. A later analysis found that a small number of “big whales” were behind many antivaccine posts and groups on the platform. Out of nearly 150,000 posters in Facebook Groups disabled for Covid misinformation, 5% were producing half of all posts, and around 1,400 users were responsible for inviting half the groups’ new members, according to one document.

“We found, like many problems at FB, this is a head-heavy problem with a relatively few number of actors creating a large percentage of the content and growth,” Facebook researchers would write in May, likening the movement to QAnon and efforts to undermine elections.
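
“Head-heavy” is a measurable property: given a count of posts per user, you can ask what fraction of the total output the most prolific few percent produced. A toy sketch with an invented heavy-tailed distribution (not Facebook’s data):

```python
import random

def top_share(post_counts, top_frac=0.05):
    """Fraction of all posts produced by the most prolific top_frac of posters."""
    counts = sorted(post_counts, reverse=True)
    k = max(1, int(len(counts) * top_frac))
    return sum(counts[:k]) / sum(counts)

random.seed(0)
# Pareto-distributed posts per user: a few users post constantly, most rarely.
posts = [int(random.paretovariate(1.2)) for _ in range(150_000)]
print(f"Top 5% of posters produce {top_share(posts):.0%} of all posts")
```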

In a September 2020 interview with Axios, Zuckerberg waffled and suggested that Facebook shouldn’t be in the business of censoring anti-vax posts, saying “If someone is pointing out a case where a vaccine caused harm or that they’re worried about it—you know, that’s a difficult thing to say from my perspective that you shouldn’t be allowed to express at all.” This was a deeply incorrect assessment of the problem, as Facebook was well aware that a small group of bad actors was actively and intentionally pushing the anti-vax content.

Another internal assessment conducted earlier this year by a Facebook employee, the Journal wrote, found that two-thirds of randomly sampled comments “were anti-vax” (though the sample size was just 110 comments). In their analysis, the staffer noted one poll that showed actual anti-vaccine sentiment in the general population was 40% lower.

[…]

The Journal reported that one integrity worker flagged a post with 53,000 shares and three million views that asserted vaccines are “all experimental & you are in the experiment.” Facebook’s automated moderation tools had ignored it after somehow concluding it was written in Romanian. By late February, researchers had come up with a hasty method to scan for “vaccine hesitant” comments, but according to the Journal their report noted that the anti-vax comment problem was “rampant” and that Facebook’s ability to fight it was “bad in English, and basically non-existent elsewhere.”
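
The Romanian misclassification is less mysterious than it sounds: language identifiers are statistical models, and short, slangy, oddly punctuated comments give them very little signal. A small sketch using the open-source langdetect package (my choice for illustration; the article does not name Facebook’s internal tooling):

```python
from langdetect import DetectorFactory, detect

DetectorFactory.seed = 0  # detection is probabilistic; pin the seed

print(detect("This is a perfectly ordinary English sentence."))  # usually 'en'

# Short, informal text can be tagged as the wrong language entirely, and a
# misidentified comment may then bypass the English-language classifiers.
print(detect("vaccines r all experimental & u r in the experiment"))
```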

[…]

Source: Facebook Documents Show It Fumbled the Fight Over Vaccines

FTC releases findings on how Big Tech eats little tech in deals that fly under the radar

Federal Trade Commission chair Lina Khan signaled changes are on the way in how the agency scrutinizes acquisitions after revealing the results of a study of a decade’s worth of Big Tech company deals that weren’t reported to the agency.

Why it matters: Tech’s business ecosystem is built on giant companies buying up small startups, but the message from the antitrust agency this week could chill mergers and acquisitions in the sector.

What they found: The FTC reviewed 616 transactions valued at $1 million or more between 2010 and 2019 that were not reported to antitrust authorities by Amazon, Apple, Facebook, Google and Microsoft.

  • 94 of the transactions actually exceeded the dollar threshold that would normally require companies to report a deal; those deals may have qualified for other regulatory exemptions.
  • 79% of transactions used deferred or contingent compensation to founders and key employees, and nearly 77% involved non-compete clauses.
  • 36% of the transactions involved assuming some amount of debt or liabilities.

What they’re saying: In a statement, Khan said the report shows that loopholes may be “unjustifiably enabling deals to fly under the radar.”

  • Matt Stoller, director of research at the American Economic Liberties Project, said the high percentage of non-compete clauses was especially troubling.
  • “If nothing else, it’s a clear anticompetitive intent to just take talent and prevent them from competing with you,” Stoller said. “And there is a limited amount of tech talent.”

The other side: Nothing in the report indicates that rules were broken or that the deals were anticompetitive, Neil Chilson, a former FTC adviser, pointed out.

  • “I think the message is pretty clear from the chair: She’s suspicious of mergers, no matter what the size, just based on a belief that mergers at any size are suspect and should be reviewed,” Chilson, now senior research fellow for Tech and Innovation at Stand Together, told Axios.
  • “The law certainly is not behind her on that, and I don’t think the economics are particularly there either, and nothing in the report supports that assertion.”

Source: FTC releases findings on how Big Tech eats little tech – Axios

There we go – it’s a problem I have been talking about for some time

Facebook’s 2018 Algorithm Change ‘Rewarded Outrage’. Zuck Resisted Fixes

Internal memos show how a big 2018 change rewarded outrage and that CEO Mark Zuckerberg resisted proposed fixes

In the fall of 2018, Jonah Peretti, chief executive of online publisher BuzzFeed, emailed a top official at Facebook Inc. The most divisive content that publishers produced was going viral on the platform, he said, creating an incentive to produce more of it.

He pointed to the success of a BuzzFeed post titled “21 Things That Almost All White People Are Guilty of Saying,” which received 13,000 shares and 16,000 comments on Facebook, many from people criticizing BuzzFeed for writing it, and arguing with each other about race. Other content the company produced, from news videos to articles on self-care and animals, had trouble breaking through, he said.

Mr. Peretti blamed a major overhaul Facebook had given to its News Feed algorithm earlier that year to boost “meaningful social interactions,” or MSI, between friends and family, according to internal Facebook documents reviewed by The Wall Street Journal that quote the email.

BuzzFeed built its business on making content that would go viral on Facebook and other social media, so it had a vested interest in any algorithm changes that hurt its distribution. Still, Mr. Peretti’s email touched a nerve.

Facebook’s chief executive, Mark Zuckerberg, said the aim of the algorithm change was to strengthen bonds between users and to improve their well-being. Facebook would encourage people to interact more with friends and family and spend less time passively consuming professionally produced content, which research suggested was harmful to their mental health.

Within the company, though, staffers warned the change was having the opposite effect, the documents show. It was making Facebook’s platform an angrier place.

Company researchers discovered that publishers and political parties were reorienting their posts toward outrage and sensationalism. That tactic produced high levels of comments and reactions that translated into success on Facebook.

“Our approach has had unhealthy side effects on important slices of public content, such as politics and news,” wrote a team of data scientists, flagging Mr. Peretti’s complaints, in a memo reviewed by the Journal. “This is an increasing liability,” one of them wrote in a later memo.

They concluded that the new algorithm’s heavy weighting of reshared material in its News Feed made the angry voices louder. “Misinformation, toxicity, and violent content are inordinately prevalent among reshares,” researchers noted in internal memos.
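
In ranking terms, the dynamic the researchers describe falls straight out of weighted scoring: if comments and reshares earn far more points than likes, the content that provokes arguments and resharing wins distribution. A sketch with invented weights (the documents describe heavy weighting of reshares, but this is not Facebook’s actual formula):

```python
# Hypothetical MSI-style weights, invented for illustration only.
WEIGHTS = {"like": 1, "reaction": 5, "comment": 15, "reshare": 30}

def msi_score(post):
    """Engagement-weighted score: interactions that spark interactions count most."""
    return sum(WEIGHTS[kind] * post.get(kind, 0) for kind in WEIGHTS)

calm_post    = {"like": 500, "comment": 10,  "reshare": 5}    # well liked, little argument
outrage_post = {"like": 100, "comment": 300, "reshare": 200}  # fewer likes, big fights

print(msi_score(calm_post))     # 800
print(msi_score(outrage_post))  # 10600: outrage wins the ranking
```

Under any weighting of this shape, a post that starts fights outranks a post people merely like, which is the “unhealthy side effect” the data scientists flagged.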

Some political parties in Europe told Facebook the algorithm had made them shift their policy positions so they resonated more on the platform, according to the documents.

“Many parties, including those that have shifted to the negative, worry about the long term effects on democracy,” read one internal Facebook report, which didn’t name specific parties.

Facebook employees also discussed the company’s other, less publicized motive for making the change: Users had begun to interact less with the platform, a worrisome trend, the documents show.

The email and memos are part of an extensive array of internal company communications reviewed by the Journal. They offer an unparalleled look at how much Facebook knows about the flaws in its platform and how it often lacks the will or the ability to address them. This is the third in a series of articles based on that information.

[…]

Anna Stepanov, who led a team addressing those issues, presented Mr. Zuckerberg with several proposed changes meant to address the proliferation of false and divisive content on the platform, according to an April 2020 internal memo she wrote about the briefing. One such change would have taken away a boost the algorithm gave to content most likely to be reshared by long chains of users.

“Mark doesn’t think we could go broad” with the change, she wrote to colleagues after the meeting. Mr. Zuckerberg said he was open to testing the approach, she said, but “We wouldn’t launch if there was a material tradeoff with MSI impact.”

Last month, nearly a year and a half after Ms. Stepanov said Mr. Zuckerberg nixed the idea of broadly incorporating a similar fix, Facebook announced it was “gradually expanding some tests to put less emphasis on signals such as how likely someone is to comment or share political content.” The move is part of a broader push, spurred by user surveys, to reduce the amount of political content on Facebook after the company came under criticism for the way election protesters used the platform to question the results and organize protests that led to the Jan. 6 riot at the Capitol in Washington.

[…]

“MSI ranking isn’t actually rewarding content that drives meaningful social interactions,” Mr. Peretti wrote in his email to the Facebook official, adding that his staff felt “pressure to make bad content or underperform.”

It wasn’t just material that exploited racial divisions, he wrote, but also “fad/junky science,” “extremely disturbing news” and gross images.

Political effect

In Poland, the changes made political debate on the platform nastier, Polish political parties told the company, according to the documents. The documents don’t specify which parties.

“One party’s social media management team estimates that they have shifted the proportion of their posts from 50/50 positive/negative to 80% negative, explicitly as a function of the change to the algorithm,” wrote two Facebook researchers in an April 2019 internal report.

Nina Jankowicz, who studies social media and democracy in Central and Eastern Europe as a fellow at the Woodrow Wilson Center in Washington, said she has heard complaints from many political parties in that region that the algorithm change made direct communication with their supporters through Facebook pages more difficult. They now have an incentive, she said, to create posts that rack up comments and shares—often by tapping into anger—to get exposure in users’ feeds.

The Facebook researchers wrote in their report that in Spain, political parties run sophisticated operations to make Facebook posts travel as far and fast as possible.

“They have learnt that harsh attacks on their opponents net the highest engagement,” they wrote. “They claim that they ‘try not to,’ but ultimately ‘you use what works.’ ”

In the 15 months following fall 2017 clashes in Spain over Catalan separatism, the percentage of insults and threats on public Facebook pages related to social and political debate in Spain increased by 43%, according to research conducted by Constella Intelligence, a Spanish digital risk protection firm.

[…]

Early tests showed how reducing that aspect of the algorithm for civic and health information helped reduce the proliferation of false content. Facebook made the change for those categories in the spring of 2020.

When Ms. Stepanov presented Mr. Zuckerberg with the integrity team’s proposal to expand that change beyond civic and health content—and a few countries such as Ethiopia and Myanmar where changes were already being made—Mr. Zuckerberg said he didn’t want to pursue it if it reduced user engagement, according to the documents.

[…]

Source: Facebook tried to make its platform a healthier place. It got angrier instead

Ig Nobel Prizes blocked by YouTube takedown over 1914 song snippet – can’t find human to fix the error

YouTube, the Ig Nobel Prizes, and the Year 1914

YouTube’s notorious takedown algorithms are blocking the video of the 2021 Ig Nobel Prize ceremony.

We have so far been unable to find a human at YouTube who can fix that. We recommend that you watch the identical recording on Vimeo.

The Fatal Song

[Photo: John McCormack, whose 1914 recording of “Funiculi, Funicula” induced YouTube to block the 2021 Ig Nobel Prize ceremony.]

Here’s what triggered this: The ceremony includes bits of a recording (of tenor John McCormack singing “Funiculi, Funicula”) made in the year 1914.

The Corporate Takedown

YouTube’s takedown algorithm claims that the following corporations all own the copyright to that audio recording that was MADE IN THE YEAR 1914: “SME, INgrooves (on behalf of Emerald); Wise Music Group, BMG Rights Management (US), LLC, UMPG Publishing, PEDL, Kobalt Music Publishing, Warner Chappell, Sony ATV Publishing, and 1 Music Rights Societies”
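
This multi-claimant pile-up is characteristic of Content-ID-style matching: an upload’s audio fingerprints are matched against a reference catalog, and every party registered against the matching reference gets a claim, with no check of whether the underlying recording is public domain. A toy sketch of the lookup logic; all fingerprints and catalog entries are invented:

```python
# Hypothetical reference catalog: fingerprint -> registered claimants.
REFERENCE_CATALOG = {
    "fp:funiculi-1914": ["SME", "UMPG Publishing", "Warner Chappell"],
}

def claims_for(upload_fingerprints):
    """Collect every claimant registered against any matching reference."""
    claims = set()
    for fp in upload_fingerprints:
        claims.update(REFERENCE_CATALOG.get(fp, []))
    return claims

# A few seconds of the 1914 recording in the ceremony video is enough to
# match the reference, attaching all registered claimants at once.
print(claims_for(["fp:funiculi-1914", "fp:ceremony-speech"]))
```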

UPDATES: (Sept 19, 2021) There’s an ongoing discussion on Slashdot. (Sept 13, 2021) There’s an ongoing discussion on Hacker News about this problem.

Source: Improbable Research » Blog Archive

First of all, what is copyright doing protecting anything from 1914? The creator is long dead and buried, and the model of creating something once and raking in money off it forever is ridiculous anyway.
Second, this shows the power that large copyright holders hold over smaller players – and the Ig Nobel Prizes aren’t exactly a small player! If a big corporation throws a DMCA claim at you, there’s nothing you can do – you are caught in a Kafka-esque hole with no hope in sight.

A Stanford Proposal Over AI’s ‘Foundations’ Ignites Debate

Last month, Stanford researchers declared that a new era of artificial intelligence had arrived, one built atop colossal neural networks and oceans of data. They said a new research center at Stanford would build—and study—these “foundation models” of AI.

Critics of the idea surfaced quickly—including at the workshop organized to mark the launch of the new center. Some object to the limited capabilities and sometimes freakish behavior of these models; others warn of focusing too heavily on one way of making machines smarter.

“I think the term ‘foundation’ is horribly wrong,” Jitendra Malik, a professor at UC Berkeley who studies AI, told workshop attendees in a video discussion.

Malik acknowledged that one type of model identified by the Stanford researchers—large language models that can answer questions or generate text from a prompt—has great practical use. But he said evolutionary biology suggests that language builds on other aspects of intelligence like interaction with the physical world.

“These models are really castles in the air; they have no foundation whatsoever,” Malik said. “The language we have in these models is not grounded, there is this fakeness, there is no real understanding.” He declined an interview request.

A research paper coauthored by dozens of Stanford researchers describes “an emerging paradigm for building artificial intelligence systems” that it labeled “foundation models.” Ever-larger AI models have produced some impressive advances in AI in recent years, in areas such as perception and robotics as well as language.

Large language models are also foundational to big tech companies like Google and Facebook, which use them in areas like search, advertising, and content moderation. Building and training large language models can require millions of dollars worth of cloud computing power; so far, that’s limited their development and use to a handful of well-heeled tech companies.

But big models are problematic, too. Language models inherit bias and offensive text from the data they are trained on, and they have zero grasp of common sense or what is true or false. Given a prompt, a large language model may spit out unpleasant language or misinformation. There is also no guarantee that these large models will continue to produce advances in machine intelligence.
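
For concreteness, “generate text from a prompt” looks like the following in practice. The model and library (GPT-2 via Hugging Face transformers) are my choices for illustration; the article does not name specific tools:

```python
from transformers import pipeline

# A small open model for illustration; "foundation" models are far larger.
generator = pipeline("text-generation", model="gpt2")

result = generator("Large language models are", max_new_tokens=30)
print(result[0]["generated_text"])
# The continuation reads fluently, but nothing constrains it to be true:
# that is the "grounding" gap Malik and others describe.
```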

[…]

Dietterich wonders if the idea of foundation models isn’t partly about getting funding for the resources needed to build and work on them. “I was surprised that they gave these models a fancy name and created a center,” he says. “That does smack of flag planting, which could have several benefits on the fundraising side.”

[…]

Emily M. Bender, a professor in the linguistics department at the University of Washington, says she worries that the idea of foundation models reflects a bias toward investing in the data-centric approach to AI favored by industry.

Bender says it is especially important to study the risks posed by big AI models. She coauthored a paper, published in March, that drew attention to problems with large language models and contributed to the departure of two Google researchers. But she says scrutiny should come from multiple disciplines.

“There are all of these other adjacent, really important fields that are just starved for funding,” she says. “Before we throw money into the cloud, I would like to see money going into other disciplines.”

[…]

Source: A Stanford Proposal Over AI’s ‘Foundations’ Ignites Debate | WIRED

Alaska discloses ‘sophisticated’ nation-state cyberattack on health service

A nation-state cyber-espionage group has gained access to the IT network of the Alaska Department of Health and Social Service (DHSS), the agency said last week.

The attack, which is still being investigated, was discovered on May 2 this year by a security firm, which notified the agency.

While the DHSS made the incident public on May 18 and published two updates in June and August, the agency did not reveal any details about the intrusion until last week, when it officially dispelled the rumor that this was a ransomware attack.

Instead, the agency described the intruders as a “nation-state sponsored attacker” and “a highly sophisticated group known to conduct complex cyberattacks against organizations that include state governments and health care entities.”

Attackers entered DHSS network via a vulnerable website

Citing an investigation conducted together with security firm Mandiant, DHSS officials said the attackers gained access to the department’s internal network through a vulnerability in one of its websites and “spread from there.”

Officials said they believe they have expelled the attackers from their network; however, the investigation into what the attackers might have accessed is still ongoing.

In a press release last week [PDF], the agency said it plans to notify all individuals who provided their personal information to the state agency.

“The breach involves an unknown number of individuals but potentially involves any data stored on the department’s information technology infrastructure at the time of the cyberattack,” officials said.

Data stored on the DHSS network, and which could have been collected by the nation-state group, includes the likes of:

  • Full names
  • Dates of birth
  • Social Security numbers
  • Addresses
  • Telephone numbers
  • Driver’s license numbers
  • Internal identifying numbers (case reports, protected service reports, Medicaid, etc.)
  • Health information
  • Financial information
  • Historical information concerning individuals’ interaction with DHSS

Notification emails will be sent to all affected individuals between September 27 and October 1, 2021, the DHSS said.

The agency has also published a FAQ page [PDF] with additional details about the nation-state attack.

“Regrettably, cyberattacks by nation-state-sponsored actors and transnational cybercriminals are becoming more common and are an inherent risk of conducting any type of business online,” said DHSS Technology Officer Scott McCutcheon.

All systems breached by the intruders remain offline. These include the systems used to perform background checks and those used to request birth, death, and marriage certificates; such requests are now processed and reviewed manually, in person or over the phone.

Source: Alaska discloses ‘sophisticated’ nation-state cyberattack on health service – The Record by Recorded Future