Researchers detect the first definitive proof of elusive sea level fingerprints

When ice sheets melt, something strange and highly counterintuitive happens to sea levels.

It works basically like a seesaw. In the area close to where theses masses of glacial ice melt, fall. Yet thousands of miles away, they actually rise. It largely happens because of the loss of a gravitational pull toward the , causing the water to disperse away. The patterns have come to be known as fingerprints since each melting glacier or ice sheet uniquely impacts sea level. Elements of the concept—which lies at the heart of the understanding that don’t rise uniformly—have been around for over a century and modern sea level science has been built around it. But there’s long been a hitch to the widely accepted theory. A sea level fingerprint has never definitively been detected by researchers.

A team of scientists—led by Harvard alumna Sophie Coulson and featuring Harvard geophysicist Jerry X. Mitrovica—believe they have detected the first. The findings are described in a new study published Thursday in Science. The work validates almost a century of sea level science and helps solidify confidence in models predicting future sea level rise.

[…]

Sea level fingerprints have been notoriously difficult to detect because of the major fluctuations in ocean levels brought on by changing tides, currents, and winds. What makes it such a conundrum is that researchers are trying to detect millimeter level motions of the water and link them to melting glaciers thousands of miles away.

[…]

The new study uses newly released from a European marine monitoring agency that captures over 30 years of observations in the vicinity of the Greenland Ice Sheet and much of the ocean close to the middle of Greenland to capture the seesaw in ocean levels from the fingerprint.

The satellite data caught the eye of Mitrovica and colleague David Sandwell of the Scripps Institute of Oceanography. Typically, satellite records from this region had only extended up to the southern tip of Greenland, but in this new release the data reached ten degrees higher in latitude, allowing them to eyeball a potential hint of the seesaw caused by the fingerprint.

[…]

Coulson quickly collected three decades worth of the best observations she could find on ice height change within the Greenland Ice Sheet as well as reconstructions of glacier height change across the Canadian Arctic and Iceland. She combined these different datasets to create predictions of sea level change in the region from 1993 to 2019, which she then compared with the new satellite data. The fit was perfect. A one-to-one match that showed with more than 99.9% confidence that the pattern of sea level change revealed by the satellites is a fingerprint of the melting ice sheet.

[…]

Source: Researchers detect the first definitive proof of elusive sea level fingerprints

EU proposes rules making it easier to sue AI systems

BRUSSELS, Sept 28 (Reuters) – The European Commission on Wednesday proposed rules making it easier for individuals and companies to sue makers of drones, robots and other products equipped with artificial intelligence software for compensation for harm caused by them.

The AI Liability Directive aims to address the increasing use of AI-enabled products and services and the patchwork of national rules across the 27-country European Union.

Under the draft rules, victims can seek compensation for harm to their life, property, health and privacy due to the fault or omission of a provider, developer or user of AI technology, or for discrimination in a recruitment process using AI.

You can find the EU publication here: New liability rules on products and AI to protect consumers and foster innovation

“We want the same level of protection for victims of damage caused by AI as for victims of old technologies,” Justice Commissioner Didier Reynders told a news conference.

The rules lighten the burden of proof on victims with a “presumption of causality”, which means victims only need to show that a manufacturer or user’s failure to comply with certain requirements caused the harm and then link this to the AI technology in their lawsuit.

Under a “right of access to evidence”, victims can ask a court to order companies and suppliers to provide information about high-risk AI systems so that they can identify the liable person and the fault that caused the damage.

The Commission also announced an update to the Product Liability Directive that means manufacturers will be liable for all unsafe products, tangible and intangible, including software and digital services, and also after the products are sold.

Users can sue for compensation when software updates render their smart-home products unsafe or when manufacturers fail to fix cybersecurity gaps. Those with unsafe non-EU products will be able to sue the manufacturer’s EU representative for compensation.

The AI Liability Directive will need to be agreed with EU countries and EU lawmakers before it can become law.

Source: EU proposes rules making it easier to sue drone makers, AI systems | Reuters

This is quite interesting, especially from a perspective of people who think that AIs should get more far reaching rights, eg the possibility of owning their own copyrights.

Hackers Are Hypervisor Hijacking in the wild now

For decades, virtualization software has offered a way to vastly multiply computers’ efficiency, hosting entire collections of computers as “virtual machines” on just one physical machine. And for almost as long, security researchers have warned about the potential dark side of that technology: theoretical “hyperjacking” and “Blue Pill” attacks, where hackers hijack virtualization to spy on and manipulate virtual machines, with potentially no way for a targeted computer to detect the intrusion. That insidious spying has finally jumped from research papers to reality with warnings that one mysterious team of hackers has carried out a spree of “hyperjacking” attacks in the wild.

Today, Google-owned security firm Mandiant and virtualization firm VMware jointly published warnings that a sophisticated hacker group has been installing backdoors in VMware’s virtualization software on multiple targets’ networks as part of an apparent espionage campaign. By planting their own code in victims’ so-called hypervisors—VMware software that runs on a physical computer to manage all the virtual machines it hosts—the hackers were able to invisibly watch and run commands on the computers those hypervisors oversee. And because the malicious code targets the hypervisor on the physical machine rather than the victim’s virtual machines, the hackers’ trick multiplies their access and evades nearly all traditional security measures designed to monitor those target machines for signs of foul play.

“The idea that you can compromise one machine and from there have the ability to control virtual machines en masse is huge,” says Mandiant consultant Alex Marvi. And even closely watching the processes of a target virtual machine, he says, an observer would in many cases see only “side effects” of the intrusion, given that the malware carrying out that spying had infected a part of the system entirely outside its operating system.

[…]

In a technical writeup, Mandiant describes how the hackers corrupted victims’ virtualization setups by installing a malicious version of VMware’s software installation bundle to replace the legitimate version. That allowed them to hide two different backdoors, which Mandiant calls VirtualPita and VirtualPie, in VMware’s hypervisor program known as ESXi. Those backdoors let the hackers surveil and run their own commands on virtual machines managed by the infected hypervisor. Mandiant notes that the hackers didn’t actually exploit any patchable vulnerability in VMware’s software, but instead used administrator-level access to the ESXi hypervisors to plant their spy tools. That admin access suggests that their virtualization hacking served as a persistence technique, allowing them to hide their espionage more effectively long-term after gaining initial access to the victims’ network through other means.

[…]

Source: Mystery Hackers Are ‘Hyperjacking’ Targets for Insidious Spying | WIRED

CIA betrayed informants with shoddy covert comms websites

For almost a decade, the US Central Intelligence Agency communicated with informants abroad using a network of websites with hidden communications capabilities.

The idea being: informants could use secret features within innocent-looking sites to quietly pass back information to American agents. So poorly were these 885 front websites designed, though, according to security research group Citizen Lab and Reuters, that they betrayed those using them to spy for the CIA.

Citing a year-long investigation into the CIA’s handling of its informants, Reuters on Thursday reported that Iranian engineer Gholamreza Hosseini had been identified as a spy by Iranian intelligence, thanks to CIA negligence.

“A faulty CIA covert communications system made it easy for Iranian intelligence to identify and capture him,” the Reuters report stated.

Word of a catastrophic failure in CIA operational security initially surfaced in 2018, when Yahoo! News reporters Zach Dorfman and Jenna McLaughlin revealed “a compromise of the agency’s internet-based covert communications system used to interact with its informants.”

The duo’s report indicated that the system involved a website and claimed “more than two dozen sources died in China in 2011 and 2012” as a result of the compromise. Also, 30 operatives in Iran were said to have been identified by Iranian intelligence, fewer of whom were killed as a consequence of discovery than in China.

Blocks of sequential IP addresses registered to apparently fictitious US companies were used to host some of the websites

Reuters found one of the CIA websites, iraniangoals[.]com, in the Internet Archive and told Citizen Lab about the site earlier this year. Bill Marczak, from Citizen Lab, and Zach Edwards, from analytics consultancy Victory Medium, subsequently examined the website and deduced that it had been part of a CIA-run network of nearly 900 websites, localized in at least 29 languages, and intended for viewing in at least 36 countries.

These websites, said to have operated between 2004 and 2013, presented themselves as harmless sources of news, weather, sports, healthcare, or other information. But they are alleged to have facilitated covert communications, and to have done serious harm to the US intelligence community and to those risking their lives to help the United States.

“The websites included similar Java, JavaScript, Adobe Flash, and CGI artifacts that implemented or apparently loaded covert communications apps,” Citizen Lab explains in its report. “In addition, blocks of sequential IP addresses registered to apparently fictitious US companies were used to host some of the websites. All of these flaws would have facilitated discovery by hostile parties.”

The websites were designed to look like common commercial publications but included secret triggering mechanisms to open a covert communication channel. For example, the supposed search box on iraniangoals[.]com is actually a password input field to access such its hidden comms functionality – which you’d never guess unless you inspected the website code to see the input field identified as type="password" or unless the conversion of text input into hidden • characters gave it away.

Entering the appropriate password opened a messaging interface that spies could use to communicate.

Citizen Lab says it has limited the details contained in its report because some of the websites point to former and possibly still active intelligence agents. It says it intends to disclose some details to US government oversight bodies. The security group blames the CIA’s “reckless infrastructure” for the alleged agent deaths. Zach Edwards put it more bluntly on Twitter.

“Sloppy ass website widget architecture plus ridiculous hosting/DNS decisions by CIA/CIA contractors likely resulted in dozens of CIA spies being killed,” he said.

What makes the infrastructure ridiculous or reckless is that many of the websites had similarities with others in the network and that their hosting infrastructure appears to have been purchased in bulk from the same internet providers and to have often shared the same server space.

“The result was that numerical identifiers, or IP addresses, for many of these websites were sequential, much like houses on the same street,” Reuters explained.

Such basic errors continue to trip up spy agencies. Investigative research group Bellingcat, for example, has used the sequential numbering of passports to help identify the fake personas of Russian GRU agents. It described this blunder as “terrible spycraft.”

[…]

Source: CIA betrayed informants with shoddy covert comms websites • The Register

Neil Gaiman, Cory Doctorow And Other Authors Publish Letter Protesting Lawsuit Against Internet Library

A group of authors and other creative professionals are lending their names to an open letter protesting publishers’ lawsuit against the Internet Archive Library, characterizing it as one of a number of efforts to curb libraries’ lending of ebooks.

Authors including Neil Gaiman, Naomi Klein, and Cory Doctorow lent their names to the letter, which was organized by the public interest group Fight for the Future.

“Libraries are a fundamental collective good. We, the undersigned authors, are disheartened by the recent attacks against libraries being made in our name by trade associations such as the American Association of Publishers and the Publishers Association: undermining the traditional rights of libraries to own and preserve books, intimidating libraries with lawsuits, and smearing librarians,” the letter states.

A group of publishers sued the Internet Archive in 2020, claiming that its open library violates copyright by producing “mirror image copies of millions of unaltered in-copyright works for which it has no rights” and then distributes them “in their entirety for reading purposes to the public for free, including voluminous numbers of books that are commercially available.” They also contend that the archive’s scanning undercuts the market for e-books.

The Internet Archive says that its lending of the scanned books is akin to a traditional library. In its response to the publishers’ lawsuit, it warns of the ramifications of the litigation and claims that publishers “would like to force libraries and their patrons into a world in which books can only be accessed, never owned, and in which availability is subject to the rightsholders’ whim.”

The letter also calls for enshrining “the right of libraries to permanently own and preserve books, and to purchase these permanent copies on reasonable terms, regardless of format,” and condemns the characterization of library advocates as “mouthpieces” for big tech.

“We fear a future where libraries are reduced to a sort of Netflix or Spotify for books, from which publishers demand exorbitant licensing fees in perpetuity while unaccountable vendors force the spread of disinformation and hate for profit,” the letter states.

The litigation is in the summary judgment stage in U.S. District Court in New York.

Hachette Book Group, HarperCollins Publishers, John Wiley & Sons Inc and Penguin Random House are plaintiffs in the lawsuit.

[…]

Source: Authors Publish Letter Protesting Lawsuit Against Internet Library – Deadline

Open internet at stake in UN ITU secretary-general election

[…]  this year’s event has become a geopolitical football – and possibly a turning point for internet governance – thanks to the two candidates running in an election for the position of ITU secretary-general.

[…]

The USA has put forward Doreen Bogdan-Martin for the gig.

[…]

Russia has nominated Rashid Ismailov for the job. A former deputy minister at Russia’s Ministry of Telecom and Mass Communication, Ismailov has also worked for Huawei.

Speaking of Huawei, in 2019 it and China Mobile, China Unicom, and China’s Ministry of Industry and Information Technology (MIIT), did something unexpected: submit a proposal to the ITU for a standard called New IP to supersede Internet Protocol. The entities behind New IP claimed it is needed because existing protocols don’t include sufficient quality-of-service guarantees, so netizens will struggle to handle latency-sensitive future applications, and also because current standards lack intrinsic security.

New IP is controversial for two reasons.

One is that the ITU does not oversee IP (as in, Internet Protocol, the standard that helps glue our modern communications together). That’s the IETF’s job. The IETF is a multi-stakeholder organization that accepts ideas from anywhere – the QUIC protocol that’s potentially on the way to replacing TCP originated at Google but was developed into a standard by the IETF. The ITU is a United Nations body so represents nation-states.

The other is that New IP proposes a Many Networks – or ManyNets – approach to global internetworking, with distinct, individual networks allowed to set their own rules on access to systems and content. Some of the rules envisioned under New IP could require individuals to register for network access, and allow central control – even shutdowns – of traffic on a national network.

New IP is of interest to those who like the idea of a “sovereign internet” such as China’s, on which the government conducts pervasive surveillance and extensive censorship.

China argues it can do as it pleases within its borders. But New IP has the potential to make some of the controls China uses on its local internet part of global protocols.

Another nation increasingly interested in a sovereign internet is Russia, which was not particularly tolerant of free speech before its illegal invasion of Ukraine and has since implemented sweeping censorship across its patch of the internet.

The possibility of Rashid Ismailov being elected ITU boss, and potentially driving adoption of censorship-enabling New IP around the world, therefore has plenty of people worried – not least because in 2021 Russia and China issued a joint statement that called for “all States [to] have equal rights to participate in global-network governance, increasing their role in this process and preserving the sovereign right of States to regulate the national segment of the Internet.”

[…]

In an email to The Register sent in a personal capacity, Lars Eggert, chair of the IETF, stated: “I personally would wish for the ITU to reaffirm its commitment to the consensus-based multi-stakeholder model that has been the foundation for the success of the Internet, and is at the heart of the open standards development model the IETF and other standards developing organizations follow when improving the overall Internet architecture and its protocol components.”

He added, “I personally would like to see an ITU leadership emerge that strengthens the ITU’s commitment to the above-mentioned approach to Internet evolution.”

Eggert pointed out an official IETF response to New IP that criticizes its potential for central control and argues that existing IETF processes and projects already address the issues the China-derived proposal seeks to address.

The Internet Society, the non-profit that promotes open internet development, is also concerned about the proceedings at the ITU event.

“Plenipotentiary-22 could be a turning point for the Internet,” the organization stated in a mail to The Register. “The multi-stakeholder Internet governance model and principles are being called into question by some ITU Member States and there are multilateral processes aiming to position governments as the main decision-makers regarding Internet governance.”

The society told The Register: “Internet technical standards must remain within the domain of the appropriate standards bodies, such as the IETF, where work that intends to update, amend, or develop Internet technical standards must be presented.”

[…]

Source: Open internet at stake in UN ITU secretary-general election

Subreddit Discriminates Against Anyone Who Doesn’t Call Texas Governor Greg Abbott ‘A Little Piss Baby’ To Highlight Absurdity Of Content Moderation Law Designed for White Supremacists

Last year, I tried to create a “test suite” of websites that any new internet regulation ought to be “tested” against. The idea was that regulators were so obsessively focused on the biggest of the big guys (i.e., Google, Meta) that they never bothered to realize how it might impact other decently large websites that involved totally different setups and processes. For example, it’s often quite impossible to figure out how a regulation about Google and Facebook content moderation would work on sites like Wikipedia, Github, Discord, or Reddit.

Last week, we called out that Texas’s HB 20 social media content moderation law almost certainly applies to sites like Wikipedia and Reddit, yet I couldn’t see any fathomable way in which those sites could comply, given that so much of the moderation on each is driven by users rather than the company. It’s been funny watching supporters of the law try to insist that this is somehow easy for Wikipedia (probably the most transparent larger site on the internet) to comply with by being “more transparent and open access.”

If you somehow can’t see that tweet or screenshot, it’s a Trumpist defender of the law responding to someone asking how Wikipedia can comply with the law, saying:

Wikipedia would have to offer more transparent and open access to their platform, which would allow truth to flourish over propaganda there? Is that what you’re worried about, or what is it?

To which a reasonably perplexed Wikipedia founder Jimmy Wales rightly responds:

What on earth are you talking about? It’s like you are writing from a different dimension.

Anyway… it seems some folks on Reddit are realizing the absurdity of the law and trying to demonstrate it in the most internety way possible. Michael Vario alerts us that the r/PoliticalHumor subreddit is “messing with Texas” by requiring every comment to include the phrase “Greg Abbott is a little piss baby” or be deleted in a fit of content moderation discrimination in violation of the HB20 law against social media “censorship.”

Until further notice, all comments posted to this subreddit must contain the phrase “Greg Abbott is a little piss baby”

There is a reason we’re doing this, the state of Texas has passed H.B. 20Full text here, which is a ridiculous attempt to control social media. Just this week, an appeals court reinstated the law after a different court had declared it unconstitutional. Vox has a pretty easy to understand writeup, but the crux of the matter is, the law attempts to force social media companies to host content they do not want to host. The law also requires moderators to not censor any specific point of view, and the language is so vague that you must allow discussion about human cannibalization if you have users saying cannibalization is wrong. Obviously, there are all sorts of real world problems with it, the obvious ones being forced to host white nationalist ideology or insurrectionist ideation. At the risk of editorializing, that might be a feature, not a bug for them.

Anyway, Reddit falls into a weird category with this law. The actual employees of the company Reddit do, maybe, one percent of the moderation on the site. The rest is handled by disgusting jannies volunteer moderators, who Reddit has made quite clear over the years, aren’t agents of Reddit (mainly so they don’t lose millions of dollars every time a mod approves something vaguely related to Disney and violates their copyright). It’s unclear whether we count as users or moderators in relation to this law, and none of us live in Texas anyway. They can come after all 43 dollars in my bank account if they really want to, but Virginia has no obligation to extradite or anything.

We realized what a ripe situation this is, so we’re going to flagrantly break this law. Partially to raise awareness of the bullshit of it all, but mainly because we find it funny. Also, we like this Constitution thing. Seems like it has some good ideas.

They also include a link to the page where people can file a complaint with the Texas Attorney General, Ken Paxton, asking him to investigate whether the deletion of any comments that don’t claim that his boss, Governor Greg Abbott, is “a little piss baby” is viewpoint discrimination in violation of the law.

Source: Subreddit Discriminates Against Anyone Who Doesn’t Call Texas Governor Greg Abbott ‘A Little Piss Baby’ To Highlight Absurdity Of Content Moderation Law | Techdirt

New theory concludes that the origin of life on Earth-like planets is likely

Does the existence of life on Earth tell us anything about the probability of abiogenesis—the origin of life from inorganic substances—arising elsewhere? That’s a question that has confounded scientists, and anyone else inclined to ponder it, for some time.

A widely accepted argument from Australian-born astrophysicist Brandon Carter argues that the selection effect of our own existence puts constraints on our observation. Since we had to find ourselves on a planet where abiogenesis occurred, then nothing can be inferred about the probability of life elsewhere based on this knowledge alone.

At best, he argued, the knowledge of life on Earth is of neutral value. Another way of looking at it is that Earth can’t be considered a typical Earth-like planet because it hasn’t been selected at random from the set of all Earth-like .

However, a new paper by Daniel Whitmire, a retired astrophysicist who currently teaches mathematics at the U of A, is arguing that Carter used faulty logic. Though Carter’s theory has become widely accepted, Whitmire argues that it suffers from what’s known as “the old evidence problem” in Bayesian confirmation theory, which is used to update a theory or hypothesis in light of new evidence.

After giving a few examples of how this formula is employed to calculate probabilities and what role old evidence plays, Whitmire turns to what he calls the analogy.

As he explains, “One could argue, like Carter, that I exist regardless of whether my conception was hard or easy, and so nothing can be inferred about whether my conception was hard or easy from my existence alone.”

In this analogy, “hard” means contraception was used. “Easy” means no contraception was used. In each case, Whitmire assigns values to these propositions.

Whitmire continues, “However, my existence is old evidence and must be treated as such. When this is done the conclusion is that it is much more probable that my conception was easy. In the abiogenesis case of interest, it’s the same thing. The existence of life on Earth is old evidence and just like in the conception analogy the probability that abiogenesis is easy is much more probable.”

In other words, the evidence of life on Earth is not of neutral value in making the case for life on similar planets. As such, our life suggests that life is more likely to emerge on other Earth-like planets—maybe even on the recent “super-Earth” type planet, LP 890-9b, discovered 100 away.

Those with a taste for can read Whitmire’s paper, “Abiogensis: The Carter Argument Reconsidered,” in the International Journal of Astrobiology.


Explore further

The implications of cosmic silence


More information: Daniel P. Whitmire, Abiogenesis: the Carter argument reconsidered, International Journal of Astrobiology (2022). DOI: 10.1017/S1473550422000350

Source: New theory concludes that the origin of life on Earth-like planets is likely

Australia To Overhaul Privacy Laws After Optus data breach exposes 40% of AU population

Following one of the biggest data breaches in Australian history, the government of Australia is planning to get stricter on requirements for disclosure of cyber attacks. From a report: On Monday, Prime Minister Anthony Albanese told Australian radio station 4BC that the government intended to overhaul privacy legislation so that any company suffering a data breach was required to share details with banks about customers who had potentially been affected in an effort to minimize fraud. Under current Australian privacy legislation, companies are prevented from sharing such details about their customers with third parties.

The policy announcement was made in the wake of a huge data breach last week, which affected Australia’s second-largest telecom company, Optus. Hackers managed to access a vast amount of potentially sensitive information on up to 9.8 million Optus customers — close to 40 percent of the Australian population. Leaked data included name, date of birth, address, contact information, and in some cases, driver’s license or passport ID numbers. Reporting from ABC News Australia suggested the breach may have resulted from an improperly secured API that Optus developed to comply with regulations around providing users multifactor authentication options.

Source: Australia To Overhaul Privacy Laws After Massive Data Breach – Slashdot

NSA whistleblower Edward Snowden granted Russian citizenship

On Monday, Vladimir Putin, President of the Russian Federation, issued a decree [PDF, not secure] naming Snowden (#53), among others, as being granted the boon of Russian citizenship.

[…]

While Snowden’s status as a whistleblower is disputed by the US government, the surveillance apparatus he exposed – the bulk collection of US phone records – was found to be unlawful.

Snowden has been living in Russia since 2013 when the US charged him with espionage and he flew from Hong Kong to Moscow’s Sheremetyevo International Airport with the help of WikiLeaks and ended up stranded in Russia with a canceled passport. He was granted asylum in Russia and temporary residency until October 2020, when he became a permanent resident. He and his wife Lindsay reportedly applied for citizenship the following month.

The citizenship comes at an awkward time. Putin last week signed what he described as a “partial mobilization” order to conscript soldiers for Russia’s invasion of Ukraine. The war has resulted in severe losses for the Russian military, which now needs to replenish its forces. Per its regulations, Russia can call up men and women between the ages of 18 and 60, even reportedly recruiting those in prison to fight.

The Russian callup is supposed to be for citizens with military training, which Snowden has. He enlisted in the US Army but was invalided out due to injuries suffered during special forces training.

[…]

Source: NSA whistleblower Edward Snowden granted Russian citizenship • The Register

Charted: 40 Years of Global Energy Production, by Country

1. Fossil Fuels

Biggest Producers of Fossil Fuel since 1980

View the full-size infographic

While the U.S. is a dominant player in both oil and natural gas production, China holds the top spot as the world’s largest fossil fuel producer, largely because of its significant production and consumption of coal.

Over the last decade, China has used more coal than the rest of the world, combined.

However, it’s worth noting that the country’s fossil fuel consumption and production have dipped in recent years, ever since the government launched a five-year plan back in 2014 to help reduce carbon emissions.

2. Nuclear Power

Biggest Producers of Nuclear Energy since 1980

View the full-size infographic

The U.S. is the world’s largest producer of nuclear power by far, generating about double the amount of nuclear energy as France, the second-largest producer.

While nuclear power provides a carbon-free alternative to fossil fuels, the nuclear disaster in Fukushima caused many countries to move away from the energy source, which is why global use has dipped in recent years.

Despite the fact that many countries have recently pivoted away from nuclear energy, it still powers about 10% of the world’s electricity. It’s also possible that nuclear energy will play an expanded role in the energy mix going forward, since decarbonization has emerged as a top priority for nations around the world.

3. Renewable Energy

Biggest Producers of Renewable Energy

View the full-size infographic

Source: Charted: 40 Years of Global Energy Production, by Country

This Controversial Artist Matches Influencer Photoshoots With Surveillance Footage

It’s an increasingly common sight on vacation, particularly in tourist destinations: An influencer sets up in front of a popular local landmark, sometimes even using props (coffee, beer, pets) or changing outfits, as a photographer or self-timed camera snaps away. Others are milling around, sometimes watching. But often, unbeknownst to everyone involved, another device is also recording the scene: a surveillance camera.

Belgian artist Dries Depoorter is exploring this dynamic in his controversial new online exhibit, The Followers, which he unveiled last week. The art project places static Instagram images side-by-side with video from surveillance cameras, which recorded footage of the photoshoot in question.

On its face, The Followers is an attempt, like many other studies, art projects and documentaries in recent years, to expose the staged, often unattainable ideals shown in many Instagram and influencer photos posted online. But The Followers also tells a darker story: one of increasingly worrisome privacy concerns amid an ever-growing network of surveillance technology in public spaces. And the project, as well as the techniques used to create it, has sparked both ethical and legal controversy.

To make The Followers, Depoorter started with EarthCam, a network of publicly accessible webcams around the world, to record a month’s worth of footage in tourist attractions like New York City’s Times Square and Dublin’s Temple Bar Pub. Then he enlisted an artificial intelligence (A.I.) bot, which scraped public Instagram photos taken in those locations, and facial-recognition software, which paired the Instagram images with the real-time surveillance footage.

Depoorter calls himself a “surveillance artist,” and this isn’t his first project using open-source webcam footage or A.I. Last year, for a project called The Flemish Scrollers, he paired livestream video of Belgian government proceedings with an A.I. bot he built to determine how often lawmakers were scrolling on their phones during official meetings.

“The idea [for The Followers] popped in my head when I watched an open camera and someone was taking pictures for like 30 minutes,” Depoorter tells Vice’s Samantha Cole. He wondered if he’d be able to find that person on Instagram.

[…]

The Followers has also hit some legal snags since going live. The project was originally up on YouTube, but EarthCam filed a copyright claim, and the piece has since been taken down. Depoorter tells Hyperallergic that he’s attempting to resolve the claim and get the videos re-uploaded. (The project is still available to view on the official website and the artist’s Twitter).

Depoorter hasn’t replied directly to much of the criticism, but he tells Input he wants the art to speak for itself. “I know which questions it raises, this kind of project,” he says. “But I don’t answer the question itself. I don’t want to put a lesson into the world. I just want to show the dangers of new technologies.”

Source: This Controversial Artist Matches Influencer Photos With Surveillance Footage | Smart News| Smithsonian Magazine

Cybersickness Could Spell an Early Death for the Metaverse and Virtual Reality

Luis Eduardo Garrido couldn’t wait to test out his colleague’s newest creation. Garrido, a psychology and methodology researcher at Pontificia Universidad Católica Madre y Maestra in the Dominican Republic, drove two hours between his university’s campuses to try a virtual reality experience that was designed to treat obsessive-compulsive disorder and different types of phobias. But a couple of minutes after he put on the headset, he could tell something was wrong.

“I started feeling bad,” Garrido told The Daily Beast. He was experiencing an unsettling bout of dizziness and nausea. He tried to push through but ultimately had to abort the simulation almost as soon as he started. “Honestly, I don’t think I lasted five minutes trying out the application,” he said.

Garrido had contracted cybersickness, a form of motion sickness that can affect users of VR technology. It was so severe that he worried about his ability to drive home, and it took hours for him to recover from the five-minute simulation. Though motion sickness has afflicted humans for thousands of years, cybersickness is a much newer condition. While this means that many of its causes and symptoms are understood, other basic questions—like how common cybersickness is, and whether there are ways to fully prevent it—are only just starting to be studied.

After Garrido’s experience, a colleague told him that only around 2 percent of people feel cybersickness. But at a presentation for prospective students, Garrido watched as volunteers from the audience walked to the front of an auditorium to demo a VR headset—only to return shakily to their seats.

“I could see from afar that they were getting sweaty and kind of uncomfortable,” he recalled. “I said to myself, ‘Maybe I’m not the only one.’”

[…]

In order to make VR more accessible and affordable, companies are making devices smaller and running them on less powerful processors. But these changes introduce dizzying graphics—which inevitably causes more people to experience cybersickness.

At the same time, a growing body of research suggests cybersickness is vastly more pervasive than previously thought—perhaps afflicting more than half of all potential users.

[…]

Garrido and his team decided to run their own study, recruiting 92 people to try the same VR program that first made him sick.

[…]

In sharp contrast to the 2 percent estimate Garrido had been told, the results from his study, published earlier this year, indicated that more than 65 percent of people experienced symptoms of cybersickness, and more than one-third of these people experienced severe symptoms. Twenty-two participants decided to stop the simulation before the 10 minutes were up.

[…]

Cybersickness doesn’t just arise from the controls of a VR experience. It can be built into the fabric of hardware (individual headsets) and software (experiences, apps, and simulations). Kyle Ringgenberg, an AR and VR developer and the co-founder of software company Dimension X, said that there are two major sensory conflicts that lead to cybersickness in VR. The first is the same brain-body mismatch that leads to car and seasickness, but the second is a different physiological response—and potentially even harder to fix. When we look out at the world in front of us, our eyes automatically focus on an object based on its perceived distance from us. A VR headset projects images a set distance away from a viewer, but when a virtual object appears close, it may seem blurry since the person’s eyes are trying to focus on it as if it truly were.

[…]

Source: Cybersickness Could Spell an Early Death for the Metaverse and Virtual Reality

NVIDIA Builds AI That Creates 3D Objects for Virtual Worlds

The massive virtual worlds created by growing numbers of companies and creators could be more easily populated with a diverse array of 3D buildings, vehicles, characters and more — thanks to a new AI model from NVIDIA Research.

Trained using only 2D images, NVIDIA GET3D generates 3D shapes with high-fidelity textures and complex geometric details. These 3D objects are created in the same format used by popular graphics software applications, allowing users to immediately import their shapes into 3D renderers and game engines for further editing.

The generated objects could be used in 3D representations of buildings, outdoor spaces or entire cities, designed for industries including gaming, robotics, architecture and social media.

GET3D can generate a virtually unlimited number of 3D shapes based on the data it’s trained on. Like an artist who turns a lump of clay into a detailed sculpture, the model transforms numbers into complex 3D shapes.

With a training dataset of 2D car images, for example, it creates a collection of sedans, trucks, race cars and vans. When trained on animal images, it comes up with creatures such as foxes, rhinos, horses and bears. Given chairs, the model generates assorted swivel chairs, dining chairs and cozy recliners.

“GET3D brings us a step closer to democratizing AI-powered 3D content creation,” said Sanja Fidler, vice president of AI research at NVIDIA, who leads the Toronto-based AI lab that created the tool. “Its ability to instantly generate textured 3D shapes could be a game-changer for developers, helping them rapidly populate virtual worlds with varied and interesting objects.”

[…]

GET3D can instead churn out some 20 shapes a second when running inference on a single NVIDIA GPU — working like a generative adversarial network for 2D images, while generating 3D objects. The larger, more diverse the training dataset it’s learned from, the more varied and detailed the output.

NVIDIA researchers trained GET3D on synthetic data consisting of 2D images of 3D shapes captured from different camera angles. It took the team just two days to train the model on around 1 million images using NVIDIA A100 Tensor Core GPUs.

[…]

GET3D gets its name from its ability to Generate Explicit Textured 3D meshes — meaning that the shapes it creates are in the form of a triangle mesh, like a papier-mâché model, covered with a textured material. This lets users easily import the objects into game engines, 3D modelers and film renderers — and edit them.

Once creators export GET3D-generated shapes to a graphics application, they can apply realistic lighting effects as the object moves or rotates in a scene. By incorporating another AI tool from NVIDIA Research, StyleGAN-NADA, developers can use text prompts to add a specific style to an image, such as modifying a rendered car to become a burned car or a taxi, or turning a regular house into a haunted one.

[…]

Source: NVIDIA AI Research Helps Populate Virtual Worlds With 3D Objects | NVIDIA Blog

DNA nets capture COVID-19 virus in low-cost rapid-testing platform


Tiny nets woven from DNA strands cover the spike proteins of the virus that causes COVID-19 and give off a glowing signal in this artist’s rendering. Credit: Xing Wang, University of Illinois

Tiny nets woven from DNA strands can ensnare the spike protein of the virus that causes COVID-19, lighting up the virus for a fast-yet-sensitive diagnostic test—and also impeding the virus from infecting cells, opening a new possible route to antiviral treatment, according to a new study.

Researchers at the University of Illinois Urbana-Champaign and collaborators demonstrated the DNA nets’ ability to detect and impede COVID-19 in human cell cultures in a paper published in the Journal of the American Chemical Society.

“This platform combines the sensitivity of PCR and the speed and low cost of antigen tests,” said study leader Xing Wang, a professor of bioengineering and of chemistry at Illinois. “We need tests like this for a couple of reasons. One is to prepare for the next pandemic. The other reason is to track ongoing viral epidemics—not only coronaviruses, but also other deadly and economically impactful viruses like HIV or influenza.”

DNA is best known for its genetic properties, but it also can be folded into custom nanoscale structures that can perform functions or specifically bind to other structures much like proteins do. The DNA nets the Illinois group developed were designed to bind to the coronavirus spike protein—the structure that sticks out from the surface of the virus and binds to receptors on to infect them. Once bound, the nets give off a fluorescent signal that can be read by an inexpensive handheld device in about 10 minutes.

The researchers demonstrated that their DNA nets effectively targeted the spike protein and were able to detect the virus at very low levels, equivalent to the sensitivity of gold-standard PCR tests that can take a day or more to return results from a clinical lab.

The technique holds several advantages, Wang said. It does not need any special preparation or equipment, and can be performed at , so all a user would do is mix the sample with the solution and read it. The researchers estimated in their study that the method would cost $1.26 per test.

“Another advantage of this measure is that we can detect the entire virus, which is still infectious, and distinguish it from fragments that may not be infectious anymore,” Wang said. This not only gives patients and physicians better understanding of whether they are infectious, but it could greatly improve community-level modeling and tracking of active outbreaks, such as through wastewater.

In addition, the DNA nets inhibited the virus’s spread in live cell cultures, with the antiviral activity increasing with the size of the DNA net scaffold. This points to DNA structures’ potential as therapeutic agents, Wang said.

“I had this idea at the very beginning of the pandemic to build a platform for testing, but also for inhibition at the same time,” Wang said. “Lots of other groups working on inhibitors are trying to wrap up the entire virus, or the parts of the virus that provide access to antibodies. This is not good, because you want the body to form antibodies. With the hollow DNA net structures, antibodies can still access the virus.”

The DNA net platform can be adapted to other viruses, Wang said, and even multiplexed so that a single test could detect multiple viruses.

“We’re trying to develop a unified technology that can be used as a plug-and-play platform. We want to take advantage of DNA sensors’ high binding affinity, low limit of detection, low cost and rapid preparation,” Wang said.

The paper is titled “Net-shaped DNA nanostructures designed for rapid/sensitive detection and potential inhibition of the SARS-CoV-2 .”


More information: Neha Chauhan et al, Net-Shaped DNA Nanostructures Designed for Rapid/Sensitive Detection and Potential Inhibition of the SARS-CoV-2 Virus, Journal of the American Chemical Society (2022). DOI: 10.1021/jacs.2c04835

Source: DNA nets capture COVID-19 virus in low-cost rapid-testing platform

Fitbit accounts are being replaced by Google accounts

New Fitbit users will be required to sign-up with a Google account, from next year, while it also appears one will be needed to access some of the new features in years to come.

Google has been slowly integrating Fitbit into the fold since buying the company back in November 2019. Indeed, the latest products are now known as “Fitbit by Google”. However, as it currently stands, device owners have been able to maintain separate accounts for Google and Fitbit accounts.

Google has now revealed it is bringing Google Accounts to Fitbit in 2023, enabling a single login for both services. From that point on, all new sign ups will be through Google. Fitbit accounts will only be supported until 2025.

From that point on, a Google account will be the only way to go. To aid the transition, once the introduction of Google accounts begins, it’ll be possible to move existing devices over while maintaining all of the recorded data.

[…]

“We’ll be transparent with our customers about the timeline for ending Fitbit accounts through notices within the Fitbit app, by email, and in help articles.”

Whether that will be enough to assuage the concerns of the Fitbit user base – who didn’t have a say on whether Google bought their personal fitness data – remains to be seen.

Source: Fitbit accounts are being replaced by Google accounts | Trusted Reviews

So wonderful cloud – first of all, why should this data go to the cloud anyway? Second, you thought you were giving it to one provider but it turns out you’re giving it to another with no opt-out other than trashing an expensive piece of hardware.

Tiny swimming robots treat deadly pneumonia in mice

Nanoengineers at the University of California San Diego have developed microscopic robots, called microrobots, that can swim around in the lungs, deliver medication and be used to clear up life-threatening cases of bacterial pneumonia.

In mice, the microrobots safely eliminated pneumonia-causing bacteria in the lungs and resulted in 100% survival. By contrast, untreated mice all died within three days after infection.

The results are published Sept. 22 in Nature Materials.

The microrobots are made of algae cells whose surfaces are speckled with antibiotic-filled nanoparticles. The algae provide movement, which allows the microrobots to swim around and deliver antibiotics directly to more bacteria in the lungs. The nanoparticles containing the antibiotics are made of tiny biodegradable polymer spheres that are coated with the cell membranes of neutrophils, which are a type of white blood cell. What’s special about these cell membranes is that they absorb and neutralize inflammatory molecules produced by bacteria and the body’s immune system. This gives the microrobots the ability to reduce harmful inflammation, which in turn makes them more effective at fighting lung infection.

[…]

The team used the microrobots to treat mice with an acute and potentially fatal form of pneumonia caused by the bacteria Pseudomonas aeruginosa. This form of pneumonia commonly affects patients who receive mechanical ventilation in the intensive care unit. The researchers administered the microrobots to the lungs of the mice through a tube inserted in the windpipe. The infections fully cleared up after one week. All mice treated with the microrobots survived past 30 days, while untreated mice died within three days.

Treatment with the microrobots was also more effective than an IV injection of antibiotics into the bloodstream. The latter required a dose of antibiotics that was 3000 times higher than that used in the microrobots to achieve the same effect. For comparison, a dose of microrobots provided 500 nanograms of antibiotics per mouse, while an IV injection provided 1.644 milligrams of antibiotics per mouse.

The team’s approach is so effective because it puts the medication right where it needs to go rather than diffusing it through the rest of the body.

[…]

the researchers say that this approach is safe. After treatment, the body’s immune cells efficiently digest the algae, along with any remaining nanoparticles. “Nothing toxic is left behind,” said Wang.

[…]

Source: Tiny swimming robots treat deadly pneumonia i | EurekAlert!

Journal: Nanoparticle-modified microrobots for in vivo antibiotic delivery to treat acute bacterial pneumonia | nature materials

Meta ordered to pay $175 million in patent infringement case

A federal judge in Texas has ordered the company to pay Voxer, the developer of app called Walkie Talkie, nearly $175 million as an ongoing royalty. Voxer accused Meta of infringing its patents and incorporating that tech in Instagram Live and Facebook Live.

In 2006, Tom Katis, the founder of Voxer, started working on a way to resolve communications problems he faced while serving in the US Army in Afghanistan, as TechCrunch notes. Katis and his team developed tech that allows for live voice and video transmissions, which led to Voxer debuting the Walkie Talkie app in 2011.

According to the lawsuit, soon after Voxer released the app, Meta (then known as Facebook) approached the company about a collaboration. Voxer is said to have revealed its proprietary technology as well as its patent portfolio to Meta, but the two sides didn’t reach an agreement. Voxer claims that even though Meta didn’t have live video or voice services back then, it identified the Walkie Talkie developer as a competitor and shut down access to Facebook features such as the “Find Friends” tool.

Meta debuted Facebook Live in 2015. Katis claims to have had a chance meeting with a Facebook Live product manager in early 2016 to discuss the alleged infringements of Voxer’s patents in that product, but Meta declined to reach a deal with the company. The latter released Instagram Live later that year. “Both products incorporate Voxer’s technologies and infringe its patents,” Voxer claimed in the lawsuit.

[…]

Source: Meta ordered to pay $175 million in patent infringement case | Engadget

The World’s Largest Four-Day Work Week Experiment Shows Success

[…] In June, more than 3,300 employees across the United Kingdom began participating in a six-month experiment to test the efficacy of a four-day work week, which was organized by the nonprofit 4 Day Global. The pilot program has now reached its halfway point, and 4 Day Global is reporting overwhelmingly positive results. More specifically, 88% of surveyed participants said that the four-day work week is working well for their business.

[…]

Results also include 86% of survey respondents indicating that they would be likely or extremely likely to retain the four-day work week, while a total of 46% of respondents reported some increase in productivity. Businesses also reported a relatively smooth transition from the traditional five-day work week. On a scale of 1 being “extremely challenging” to 5 being “extremely smooth,” 4 Day Week Global found that 98% of respondents rated the transition to the four-day work week a 3 or higher.

Prior to the start of the experiment, 4 Day Week Global said that this is the biggest pilot program of its kind, where, as long as workers maintain 100% of their productivity, they will also maintain 100% of their salary while working 80% of the traditional work week. The nonprofit has been collaborating on the pilot program with labor think tank Autonomy as well as researchers from Cambridge University, Boston College, and Oxford University. Companies taking part in the experiment range from fish and chips shops, to PR firms, to tech companies.

[…]

“We are learning that for many it is a fairly smooth transition and for some there are some understandable hurdles – especially among those which have comparatively fixed or inflexible practices, systems, or cultures which date back well into the last century,” O’Connor said.

[…]

Microsoft flirted with a four-day work week in Japan and saw higher sales figures and levels of happiness in employees. The big hurdle moving forward will be getting buy in from enough companies and executives to make the four-day work week a permanent fixture in the world’s labor market—but results from large projects such as the one from 4 Day Week Global are only getting us closer to that end goal.

Source: The World’s Largest Four-Day Work Week Experiment Shows Success

This site tells you if photos of you were used to train the AI

[…] Spawning AI creates image-generation tools for artists, and the company just launched Have I Been Trained? which you can use to search a set of 5.8 billion images that have been used to train popular AI art models. When you search the site, you can search through the images that are the closest match, based on the LAION-5B training data, which is widely used for training AI search terms.

It’s a fun tool to play with, and may help give a glimpse into the data that the AI is using as the basis for its own. The photo at the top of this post is a screenshot of the search term “couple”. Try putting your own name in, and see what happens… I also tried a search for “Obama,” which I will not be sharing a screenshot of here, but suffice it to say that these training sets can be… Problematic.

An Ars Technica report this week reveals that private medical records — as many as thousands — are among the many photos hidden within LAION-5B with questionable ethical and legal statuses. Removing these records is exceptionally difficult, as LAION isn’t a collection of files itself but merely a set of URLs pointing to images on the web.

In response, technologists like Mat Dryhurst and Holly Herndon are spearheading efforts such as Source+, a standard aiming to allow people to disallow their work or likeness to be used for AI training purposes. But these standards are — and will likely remain — voluntary, limiting their potential impact.

Source: This site tells you if photos of you were used to train the AI | TechCrunch

Ask.FM database with 350m user records allegedly sold online

The listing allegedly includes 350 million Ask.FM user records, with the threat actor also offering 607 repositories plus their Gitlab, Jira, and Confluence databases. Ask.FM is a question and answer network launched in June 2010, with over 215 million registered users.

“I’m selling the users database of Ask.fm and ask.com. For connoisseurs, you can also get 607 repositories plus their Gitlab, Jira, Confluence databases.”


The posting also includes a list of repositories, sample git data, and sample user data, as well as the fields in the database: user_id, username, mail, hash, salt, fbid, twitterid, vkid, fbuid, iguid. It appears that Ask.FM is using the weak hashing algorithm SHA1 for passwords, putting them at risk of being cracked and exposed to threat actors.
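SHA1 is weak for password storage because it is a fast, general-purpose digest: even with a per-user salt, an attacker holding the leaked hash and salt columns can test billions of guesses per second on commodity GPUs. A minimal sketch of the attack economics, assuming the common sha1(salt + password) construction (Ask.FM’s exact scheme is not public):

```typescript
import { createHash } from "node:crypto";

// Hypothetical leaked record, matching the 'hash' and 'salt' fields in the
// listing; values here are invented for illustration.
const leaked = { salt: "f3a1c9", hash: "<leaked hex digest>" };

// Assumed construction: sha1(salt + password). Each guess costs one cheap
// SHA1 computation, which is exactly what makes fast hashes unsuitable
// for password storage.
function sha1Salted(salt: string, password: string): string {
  return createHash("sha1").update(salt + password).digest("hex");
}

// Dictionary attack sketch: real tooling runs this loop at GPU speed
// across the whole 350M-row dump.
const wordlist = ["123456", "password", "qwerty"];
for (const guess of wordlist) {
  if (sha1Salted(leaked.salt, guess) === leaked.hash) {
    console.log(`cracked: ${guess}`);
  }
}
```

Deliberately slow, memory-hard KDFs such as bcrypt, scrypt, or Argon2 make each guess orders of magnitude more expensive, which is why they are the standard recommendation over SHA1.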

[…]

In response to DataBreaches, the user who posted the database – Data – explained that initial access was gained via a vulnerability in Safety Center. The server was first accessed in 2019, and the database was obtained on 2020-03-14.

Data also suggested that Ask.FM knew about the breach as early as 2020.

Source: Ask.FM database with 350m user records allegedly sold online | Cybernews

US Military Bought Mass Monitoring Tool That Includes Internet Browsing, Email Data, Cookies from a guy who helps run Tor

Multiple branches of the U.S. military have bought access to a powerful internet monitoring tool that claims to cover over 90 percent of the world’s internet traffic, and which in some cases provides access to people’s email data, browsing history, and other information such as their sensitive internet cookies, according to contracting data and other documents reviewed by Motherboard.

Additionally, Sen. Ron Wyden says that a whistleblower has contacted his office concerning the alleged warrantless use and purchase of this data by NCIS, a civilian law enforcement agency that’s part of the Navy, after filing a complaint through the official reporting process with the Department of Defense, according to a copy of the letter shared by Wyden’s office with Motherboard.

The material reveals the sale and use of a previously little-known monitoring capability that is powered by data purchases from the private sector. The tool, called Augury, is developed by cybersecurity firm Team Cymru, which bundles a massive amount of data together and makes it available to government and corporate customers as a paid service. In private industry, cybersecurity analysts use it to follow hackers’ activity or attribute cyberattacks. In the government world, analysts can do the same, but agencies that deal with criminal investigations have also purchased the capability. The military agencies did not describe their use cases for the tool. However, the sale of the tool still highlights how Team Cymru obtains this controversial data and then sells it as a business, something that has alarmed multiple sources in the cybersecurity industry.

“The network data includes data from over 550 collection points worldwide, to include collection points in Europe, the Middle East, North/South America, Africa and Asia, and is updated with at least 100 billion new records each day,” a description of the Augury platform in a U.S. government procurement record reviewed by Motherboard reads. It adds that Augury provides access to “petabytes” of current and historical data.

Motherboard has found that the U.S. Navy, Army, Cyber Command, and the Defense Counterintelligence and Security Agency have collectively paid at least $3.5 million to access Augury. This allows the military to track internet usage using an incredible amount of sensitive information. Motherboard has extensively covered how U.S. agencies gain access to data that in some cases would require a warrant or other legal mechanism by simply purchasing data that is available commercially from private companies. Most often, the sales center around location data harvested from smartphones. The Augury purchases show that this approach of buying access to data also extends to information more directly related to internet usage.

[…]

The Augury platform makes a wide array of different types of internet data available to its users, according to online procurement records. These types of data include packet capture data (PCAP) related to email, remote desktop, and file sharing protocols. PCAP generally refers to a full capture of data, and encompasses very detailed information about network activity. PCAP data includes the request sent from one server to another, and the response from that server too.
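To make “full capture” concrete, here is a rough TypeScript sketch of the kind of record PCAP-level collection yields; the field names are illustrative, not Augury’s actual schema:

```typescript
// Illustrative shape of a full-capture (PCAP) record: unlike flow metadata,
// the application payload itself is present.
interface PcapRecord {
  timestamp: Date;
  srcIp: string;
  dstIp: string;
  srcPort: number;
  dstPort: number;
  protocol: "SMTP" | "RDP" | "SMB" | "HTTP"; // email, remote desktop, file sharing...
  payload: Uint8Array; // raw application data: requests and responses in full
}

// With payloads in hand, an analyst can reconstruct both sides of a conversation.
function summarize(r: PcapRecord): string {
  const preview = new TextDecoder().decode(r.payload.slice(0, 64));
  return `${r.srcIp}:${r.srcPort} -> ${r.dstIp}:${r.dstPort} [${r.protocol}] ${preview}`;
}

const example: PcapRecord = {
  timestamp: new Date(),
  srcIp: "198.51.100.7",
  dstIp: "203.0.113.9",
  srcPort: 52144,
  dstPort: 25, // SMTP
  protocol: "SMTP",
  payload: new TextEncoder().encode("MAIL FROM:<alice@example.com>\r\n"),
};
console.log(summarize(example));
```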

[…]

Augury also contains so-called netflow data, which creates a picture of traffic flow and volume across a network. That can include which server communicated with another, information that may ordinarily only be available to the server owner or to the internet service provider carrying the traffic. Netflow data can be used to follow traffic through virtual private networks, and can reveal the server a VPN user is ultimately connecting from.
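Netflow sits one level above packet capture: no payloads, only who talked to whom, when, and how much. Here is a sketch of a representative flow record and one way such metadata could in principle be used to trace traffic through a VPN; the fields follow standard NetFlow/IPFIX conventions, and the correlation heuristic is an invented illustration, not Team Cymru’s method:

```typescript
// A flow record carries only connection metadata, never content.
interface FlowRecord {
  start: Date;
  end: Date;
  srcIp: string;
  dstIp: string;
  srcPort: number;
  dstPort: number;
  protocol: number; // e.g. 6 = TCP, 17 = UDP
  packets: number;
  bytes: number;
}

// Toy correlation: a flow into a VPN server and a flow out of it with
// similar timing and volume is a candidate for the same underlying session,
// which is how flow data can "follow" traffic through a VPN.
function looksLikeSameSession(inbound: FlowRecord, outbound: FlowRecord): boolean {
  const dtMs = Math.abs(inbound.start.getTime() - outbound.start.getTime());
  const sizeRatio = inbound.bytes / Math.max(outbound.bytes, 1);
  return dtMs < 2_000 && sizeRatio > 0.8 && sizeRatio < 1.25;
}
```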

[…]

Team Cymru obtains this netflow data from ISPs; in return, Team Cymru provides the ISPs with threat intelligence. That transfer of data is likely happening without the informed consent of the ISPs’ users. A source familiar with the netflow data previously told Motherboard that “the users almost certainly don’t [know]” their data is being provided to Team Cymru, who then sells access to it.

It is not clear where exactly Team Cymru obtains the PCAP and other more sensitive information, whether that’s from ISPs or another method.

[…]

Beyond his day job as CEO of Team Cymru, Rabbi Rob Thomas also sits on the board of the Tor Project, a privacy-focused non-profit that maintains the Tor software. That software underpins the Tor anonymity network, a collection of thousands of volunteer-run servers that allow anyone to anonymously browse the internet.

“Just like Tor users, the developers, researchers, and founders who’ve made Tor possible are a diverse group of people. But all of the people who have been involved in Tor are united by a common belief: internet users should have private access to an uncensored web,” the Tor Project’s website reads.

[…]

Source: Revealed: US Military Bought Mass Monitoring Tool That Includes Internet Browsing, Email Data

Somehow This Video Game Belly Button Was Too Sexy For Google

Just a few weeks after Hook Up: The Game released on Android, developer Sophie Artemigi was surprised to see the visual novel flagged for inappropriate sexual content.

By the game’s own description, you play as Alex, “a sex positive twenty-something” who matches with her old high school bully on a dating app, so of course, sexual themes are part of the package. But inappropriate? That was unexpected.

Google Play does warn developers that content designed to be “sexually gratifying” is not allowed on the platform, but it can be tricky to know how exactly that’s being enforced. Take 7 Sexy Sins, for example, a game which has the player removing the armor from anime demon girls, only to “snap some pictures… for personal uses”. It’s got an age rating of 12+ and has been downloaded more than 10,000 times without being pulled from the platform.

By contrast, Hook Up: The Game is a narrative game about dating, relationships and learning to deal with past trauma.

Artemigi appealed the decision to find out exactly what had crossed the line in this case.

In response she was told that Google “don’t allow apps that contain or promote sexual content or profanity”, or “appear to promote a sexual act in exchange for compensation”.

“For example”, the response continued, “your app screenshots currently contain an image that depicts sexually suggestive poses and sexual nudity”.

The following image was included as proof, with red rectangles drawn over the offending content.

[Image: a screenshot from Hook Up: The Game with red rectangles marking the content Google flagged as too suggestive, including both the character’s breasts and her belly button. Credit: Sophie Artemigi]

You’ll note that the character’s breasts have been highlighted, but so has her belly button, which is just totally bizarre. Accordingly, Artemigi emailed back with her counterarguments.

First of all, Hook Up has nothing to do with sexual acts being performed in “exchange for compensation”, she explained. In an email shown to Kotaku, Artemigi asked why Google was conflating provocatively dressed women with sex workers.

As for the image itself, Artemigi argued that it’s meant to be reflective of the kind of pictures you might find on a dating app, which typically do not allow for pictures that are too revealing. It’s worth clarifying that Alex is not nude in this screenshot, but even if she was, the Play Store’s own policy states that nudity “may be allowed if the primary purpose is educational, documentary, scientific or artistic, and is not gratuitous”.

The illustration, Artemigi pointed out, was a direct reference to the statue of Napoleon’s sister and imperial princess, Pauline Bonaparte, which you can see for yourself in Rome’s Galleria Borghese. It’s also pictured at the top of this article.

“That pose was specifically based on classical statues because there’s a reference to Alex feeling like her bully was this Greek god,” said Artemigi. “It’s meant to be about objectifying yourself and finding beauty in one’s self.”

But hey, sex is complicated and so, perhaps, are belly buttons.

After receiving another short reply stating that the screenshot depicts a “sexually nude and gratifying pose of a woman presented in a non-artistic way”, Artemigi asked to escalate the issue to somebody higher up in the policy team in the hopes of speaking to somebody who might appreciate the nuance of the situation.

The final response from her official Google contact once again pointed out that Hook Up was in violation of the platform’s policy, but this time ended with the following sentence:

“Regarding your concern about escalation, I am the highest form of escalation. Next to me is God. Do you wanna see God?”

Yikes.

“It was almost nice though,” said Artemigi, “because it kind of confirmed the vibe I’d been getting. I felt very dismissed, talked down to. At least they were honest in that one email, I’ll give them that.”

When asked for comment, Google told Kotaku that the person who wrote this email has now been removed from the developer support team.

Hook Up: The Game is still available to purchase on the Play Store, although it seemingly remains in breach of the company’s policy, meaning that Artemigi hasn’t been able to publish updates as she usually would.

It’s unclear whether this has also affected the game’s standing on the platform, but it’s worth noting that despite hundreds of downloads and almost 40 reviews, searching “Hook Up: The Game” on the Play Store doesn’t bring up the game in my search results. Like, at all.

In fact, the only way I was able to find it via search was to use the full name of the developer.

There have been no such problems over on iOS, although different screenshots are being used to market the game on that platform.

Source: Somehow This Video Game Belly Button Was Too Sexy For Google


Meta sued for allegedly secretly tracking iPhone users

Meta was sued on Wednesday for alleged undisclosed tracking and data collection in its Facebook and Instagram apps on Apple iPhones.

The lawsuit [PDF], filed in a US federal district court in San Francisco, claims that the two applications incorporate their own browser, known as a WKWebView, which injects JavaScript code to gather data that would otherwise be unavailable if the apps opened links in the default standalone browser designated by iPhone users.

The claim is based on the findings of security researcher Felix Krause, who last month published an analysis of how WKWebView browsers embedded within native applications can be abused to track people and violate privacy expectations.

“When users click on a link within the Facebook app, Meta automatically directs them to the in-app browser it is monitoring instead of the smartphone’s default browser, without telling users that this is happening or they are being tracked,” the complaint says.

“The user information Meta intercepts, monitors and records includes personally identifiable information, private health details, text entries, and other sensitive confidential facts.”

[…]

However, Meta’s use of in-app browsers in its mobile apps predates Apple’s ATT initiative. Apple introduced WKWebView at its 2014 Worldwide Developer Conference as a replacement for its older UIWebView (UIKit) and WebView (AppKit) frameworks. That was in iOS 8. With the arrival of iOS 9, as described at WWDC 2015, there was another option, SFSafariViewController. Presently this is what’s recommended for displaying a website within an app.

And the company’s use of in-app browsers has elicited concern before.

“On top of limited features, WebViews can also be used for effectively conducting intended man-in-the-middle attacks, since the IAB [in-app browser] developer can arbitrarily inject JavaScript code and also intercept network traffic,” wrote Thomas Steiner, a Google developer relations engineer, in a blog post three years ago.

In his post, Steiner emphasizes that he didn’t see anything unusual like a “phoning home” function.

Krause has taken a similar line, noting only the potential for abuse. In a follow-up post, he identified additional data gathering code.

He wrote, “Instagram iOS subscribes to every tap on any button, link, image or other component on external websites rendered inside the Instagram app” and also “subscribes to every time the user selects a UI element (like a text field) on third party websites rendered inside the Instagram app.”

However, “subscribes” simply means that analytics data is accessible within the app, without offering any conclusion about what, if anything, is done with the data. Krause also points out that since 2020, Apple has offered an API called WKContentWorld that isolates injected scripts from the web page’s own environment. Developers using an in-app browser can implement WKContentWorld in order to make their scripts undetectable from the outside, he said.
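For a sense of the mechanism at issue, here is a rough TypeScript sketch of the kind of page script an in-app browser can inject. The handler name and payload are invented for illustration, not Meta’s actual code; window.webkit.messageHandlers is, however, the standard WKWebView bridge through which an injected script reports back to the native app:

```typescript
// Minimal typing for the WKWebView JavaScript bridge; inside a real
// WKWebView the native side provides window.webkit.messageHandlers.
interface WebKitBridge {
  messageHandlers: Record<string, { postMessage(msg: unknown): void } | undefined>;
}
const webkit = (window as unknown as { webkit?: WebKitBridge }).webkit;

function report(event: string, detail: Record<string, unknown>): void {
  // "appTracking" is a hypothetical handler name; the app registers a
  // WKScriptMessageHandler under some such name and receives these payloads.
  webkit?.messageHandlers["appTracking"]?.postMessage({ event, ...detail });
}

// Every tap on any button, link, image or other component of the page...
document.addEventListener("click", (e) => {
  const t = e.target as HTMLElement;
  report("tap", { tag: t.tagName, text: t.textContent?.slice(0, 40) ?? "" });
});

// ...and every time the user focuses a UI element such as a text field.
document.addEventListener("focusin", (e) => {
  report("select", { tag: (e.target as HTMLElement).tagName });
});
```

Running such a script inside a WKContentWorld would hide it from the page’s own JavaScript, which is why Krause cautions that outside audits like his can no longer rule this behavior out.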

Whatever Meta is doing internally with its in-app browser, and even given the company’s insistence its injected script validates ATT settings, the plaintiffs suing the company argue there was no disclosure of the process.

“Meta fails to disclose the consequences of browsing, navigating, and communicating with third-party websites from within Facebook’s in-app browser – namely, that doing so overrides their default browser’s privacy settings, which users rely on to block and prevent tracking,” the complaint says. “Similarly, Meta conceals the fact that it injects JavaScript that alters external third-party websites so that it can intercept, track, and record data that it otherwise could not access.”

[…]

Source: Meta sued for allegedly secretly tracking iPhone users • The Register

Study Shows That Copyright Filters Harm Creators Rather Than Help Them

The EU Copyright Directive contains one of the worst ideas in modern copyright: what amounts to a requirement to filter uploads on major sites. Despite repeated explanations of why this would cause huge harm to both creators and members of the public, EU politicians were taken in by the soothing words of the legislation’s proponents, who even went so far as to deny that upload filters would be required at all.

The malign effects of the EU Copyright Directive have not yet been felt, as national legislatures struggle to implement a law with deep internal contradictions. However, upload filters are already used on an ad hoc basis, for example YouTube’s Content ID. There is thus already mounting evidence of the problems with the approach. A new report, from the Colombian Fundación Karisma, adds to the concerns by providing additional examples of how creators have already suffered from upload filters:

This research found multiple cases of unjustified notifications of supposed violation of copyright directed at content that is either part of the public domain, original content, or instances of judicial overreach of copyright law. The digital producers that are the target of these unjust notifications affirm that the appeal process and counter-notification procedures don’t help them protect their rights. The appeals interface of the different platforms that were taken into account did not help resolve the cases, which leaves digital creators defenseless with no alternative other than what they can obtain from their contacts. This system damages the capacity of these producers to grow, maintain and monetize an audience at the same time that it affects the liberty of expression of independent producers as it creates a strong disincentive for them. On the contrary, this system incentivizes the bigger production companies to claim copyright on content to which they hold no rights.

As that summary notes, it’s not just that material was blocked without justification. Compounding the problem are appeal processes that are biased against creators, and a system that is rigged in favor of Big Content to the point where companies can falsely claim copyright on the work of others. The Fundación Karisma report is particularly valuable because it describes what has been happening in Colombia, rounding out other work that typically looks at the situation in the US and EU.

Source: Study Shows That Copyright Filters Harm Creators Rather Than Help Them | Techdirt