Judge dismisses most claims in artists’ AI copyright lawsuit against Midjourney, Stability AI

A judge in California federal court on Monday trimmed a lawsuit by visual artists who accuse Stability AI, Midjourney and DeviantArt of misusing their copyrighted work in connection with the companies’ generative artificial intelligence systems.

U.S. District Judge William Orrick dismissed some claims from the proposed class action brought by Sarah Andersen, Kelly McKernan and Karla Ortiz, including all of the allegations against Midjourney and DeviantArt. The judge said the artists could file an amended complaint against the two companies, whose systems utilize Stability’s Stable Diffusion text-to-image technology.

Orrick also dismissed McKernan and Ortiz’s copyright infringement claims entirely. The judge allowed Andersen to continue pursuing her key claim that Stability’s alleged use of her work to train Stable Diffusion infringed her copyrights.

The same allegation is at the heart of other lawsuits brought by artists, authors and other copyright owners against generative AI companies.

“Even Stability recognizes that determination of the truth of these allegations – whether copying in violation of the Copyright Act occurred in the context of training Stable Diffusion or occurs when Stable Diffusion is run – cannot be resolved at this juncture,” Orrick said.

The artists’ attorneys Joseph Saveri and Matthew Butterick said in a statement that their “core claim” survived, and that they were confident that they could address the court’s concerns about their other claims in an amended complaint to be filed next month.

A spokesperson for Stability declined to comment on the decision. Representatives for Midjourney and DeviantArt did not immediately respond to requests for comment.

The artists said in their January complaint that Stability used billions of images “scraped” from the internet, including theirs, without permission to teach Stable Diffusion to create its own images.

Orrick agreed with all three companies that the images the systems actually created likely did not infringe the artists’ copyrights. He allowed the claims to be amended but said he was “not convinced” that allegations based on the systems’ output could survive without showing that the images were substantially similar to the artists’ work.

The judge also dismissed other claims from the artists, including that the companies violated their publicity rights and competed with them unfairly, with permission to refile.

Orrick dismissed McKernan and Ortiz’s copyright claims because they had not registered their images with the U.S. Copyright Office, a requirement for bringing a copyright lawsuit.

The case is Andersen v. Stability AI Ltd, U.S. District Court for the Northern District of California, No. 3:23-cv-00201.

For the artists: Joseph Saveri of Joseph Saveri Law Firm; and Matthew Butterick

For Stability: Paul Schoenhard of Fried Frank Harris Shriver & Jacobson

For Midjourney: Angela Dunning of Cleary Gottlieb Steen & Hamilton

For DeviantArt: Andy Gass of Latham & Watkins

Read more:

Lawsuits accuse AI content creators of misusing copyrighted work

AI companies ask U.S. court to dismiss artists’ copyright lawsuit

US judge finds flaws in artists’ lawsuit against AI companies

Source: Judge pares down artists’ AI copyright lawsuit against Midjourney, Stability AI | Reuters

These suits are absolute nonsense. It’s like suing a person for having seen some art and made something a bit like it. It’s not very surprising that this has been wiped off the table.

Drugmakers Are Set To Pay 23andMe Millions To Access Your DNA – which is also your family’s DNA

GSK will pay 23andMe $20 million for access to the genetic-testing company’s vast trove of consumer DNA data, extending a five-year collaboration that’s allowed the drugmaker to mine genetic data as it researches new medications.

Under the new agreement, 23andMe will provide GSK with one year of access to anonymized DNA data from the approximately 80% of gene-testing customers who have agreed to share their information for research, 23andMe said in a statement Monday. The genetic-testing company will also provide data-analysis services to GSK.

23andMe is best known for its DNA-testing kits that give customers ancestry and health information. But the DNA it collects is also valuable, including for scientific research. With information from more than 14 million customers, the only data sets that rival the size of the 23andMe library belong to Ancestry.com and the Chinese government.

The idea for drugmakers is to comb the data for hints about genetic pathways that might be at the root of disease, which could significantly speed up the long, slow process of drug development. GSK and 23andMe have already taken one potential medication to clinical trials: a cancer drug that works to block CD96, a protein that helps modulate the body’s immune responses. It entered that testing phase in four years, compared to an industry average of about seven years. Overall, the partnership between GSK and 23andMe has produced more than 50 new drug targets, according to the statement.

The new agreement changes some components of the collaboration. Any discoveries GSK makes with the 23andMe data will now be solely owned by the British pharmaceutical giant, while the genetic-testing company will be eligible for royalties on some projects. In the past, the two companies pursued new drug targets jointly. GSK’s new deal with 23andMe is also non-exclusive, leaving the genetic-testing company free to license its database to other drugmakers.

Source: Drugmakers Are Set To Pay 23andMe Millions To Access Consumer DNA – Slashdot

So – you paid for a DNA test and it turns out you didn’t think about the privacy aspect at all. Nor did you consider that you were giving up your family’s DNA too. Or that you can’t actually change your DNA, ever. Well done. It’s being spread all over the place. And no, the data is not anonymous – DNA is the most personal information you can ever give up.

Particle Accelerator can now be built on a Chip

Particle accelerators range in size from a room to a city. Now, however, scientists are taking a closer look at chip-sized electron accelerators, a new study finds. Potential near-term applications for the technology include radiation therapy for zapping skin cancer and, longer-term, new kinds of laser and light sources.

Particle accelerators generally propel particles within metal tubes or rings. The rate at which they can accelerate particles is limited by the peak fields the metallic surfaces can withstand. Conventional accelerators range in size from a few meters for medical applications to kilometers for fundamental research. The fields they use are often on the scale of millions of volts per meter.

In contrast, electrically insulating dielectric materials (stuff that doesn’t conduct electricity well but does support electrostatic fields well) can withstand light fields thousands of times stronger. This has led scientists to investigate creating dielectric accelerators that rely on lasers to hurl particles.

[…]

Physicists fabricated a tiny channel 225 nanometers wide and up to 0.5 millimeters long. An electron beam entered one end of the channel and exited the other end.

The researchers shone infrared laser pulses 250 femtoseconds long on top of the channel to help accelerate electrons down it. Inside the channel, two rows of up to 733 silicon pillars, each 2 micrometers high, interacted with these laser pulses to generate accelerating forces.

The electrons entered the accelerators with an energy of 28,400 electron-volts, traveling at roughly one-third the speed of light. They exited it with an energy of 40,700 electron-volts, a 43 percent boost in energy.
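
A quick back-of-the-envelope check of those figures (a sketch in Python, assuming the full 0.5-millimeter channel length from above is the acceleration distance):

```python
# Rough numbers from the article: energy gain across the chip's channel.
e_in_eV = 28_400       # entry energy, electron-volts
e_out_eV = 40_700      # exit energy, electron-volts
length_m = 0.5e-3      # channel length: up to 0.5 mm, per the article

gain_eV = e_out_eV - e_in_eV
print(f"Energy gain: {gain_eV} eV ({gain_eV / e_in_eV:.0%} boost)")  # ~43%

# Average accelerating gradient, assuming the whole channel accelerates:
print(f"Gradient: {gain_eV / length_m / 1e6:.1f} MeV/m")  # ~24.6 MeV/m
```

That roughly 25 MeV/m average gradient is comparable to the millions-of-volts-per-meter fields of conventional machines, but achieved in a structure a fraction of a millimeter long.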

This new type of particle accelerator can be built using standard cleanroom techniques, such as electron beam lithography. “This is why we think that our results represent a big step forward,” Hommelhoff says. “Everyone can go ahead and start engineering useful machines from this.”

[…]

Applications for these nanophotonic electron accelerators depend on the energies they can reach. Electrons of up to about 300,000 electron-volts are typical for electron microscopy, Hommelhoff says. For treatment of skin cancer, 10 million electron-volt electrons are needed. Whereas such medical applications currently require an accelerator 1 meter wide, as well as additional large, heavy and expensive parts to help drive the accelerator, “we could in principle get rid of both and have just a roughly 1-centimeter chip with a few extra centimeters for the electron source,” adds study lead author Tomáš Chlouba, a physicist at the University of Erlangen-Nuremberg in Germany.

Applications such as synchrotron light sources, free electron lasers, and searches for lightweight dark matter open up with billion electron-volt electrons. With trillion electron-volt electrons, high-energy colliders become possible, Hommelhoff says.

The scientists note there are many ways to improve their device beyond their initial proof-of-concept structures. They now aim to experiment with greater acceleration and higher electron currents to help enable applications, as well as boosting output by fabricating many accelerator channels next to each other that can all be driven by the same laser pulses.

In addition, although the new study experimented with structures made from silicon due to the relative ease of working with it, “silicon is not really a high-damage threshold material,” Hommelhoff says. Structures made of glass or other materials may allow much stronger laser pulses and thus more powerful acceleration, he says.

The researchers are interested in building a small-scale accelerator, “maybe with skin cancer treatment applications in mind first,” Hommelhoff says. “This is certainly something that we should soon transfer to a startup company.”

The scientists detailed their findings in the 19 October issue of the journal Nature.

Source: Particle Accelerator on a Chip Hits Penny-Size – IEEE Spectrum

Google CEO Defends Paying $26b in 2021 to Remain Top Search Engine

Google CEO Sundar Pichai defended the company’s decision to pay out billions of dollars to remain the top global search engine at the U.S. antitrust trial on Monday, according to a report from The Wall Street Journal. Pichai claimed he tried to give users a “seamless and easy” experience, even if it meant paying Apple and other tech companies an exorbitant fee.

The U.S. Department of Justice is arguing that Google created the building blocks to hold a monopoly over the market, but Pichai disagrees, saying the company is the dominant search engine because it is better than its competitors.

“We realized early on that browsers are critical to how people are able to navigate and use the web,” Pichai said during questioning, as reported by The Journal. “It became very clear early on that if you make the user’s experience better, they would use the web more, they would enjoy using the web more, and they would search more in Google as well.”

Pichai testified that Google’s payments to phone companies and manufacturers were meant to push them toward more security upgrades and not just enabling Google to be the primary search engine.

Internal emails between Pichai and his colleagues from 2007 were shared during cross-examination, revealing Google’s insistence on being Apple’s default search engine. Pichai says he was worried about Google being the only search engine and requested a Yahoo backup version.

Google paid Apple a reported $18 billion to remain the default search engine on its Macs, iPhones, and iPads in 2021, and paid tech companies a grand total of $26 billion in 2021 alone, according to court documents.

[…]

Source: Google CEO Defends Paying Billions to Remain Top Search Engine

Apple says BMW wireless chargers really are messing with iPhone 15s

Users have been reporting that the NFC chips in their iPhone 15s were failing after using BMW’s in-car wireless charging, but until now, Apple hadn’t addressed the complaints. That seems to have changed, as MacRumors reported this week that an internal Apple memo to third-party repair providers says a software update later this year should prevent a “small number” of in-car wireless chargers from “temporarily” disabling iPhone 15 NFC chips.

Apple reportedly says that until the fix comes out, anyone who experiences this should not use the wireless charger in their car. Users have been complaining about BMW wireless chargers breaking Apple Pay and the BMW digital key feature in posts on Reddit, Apple’s Support community, and MacRumors’ own forums.

BMW seemed to acknowledge the issue earlier this month, when the BMW UK X account replied to a complaint saying the company is working with Apple to investigate. There’s no easy way to know which models are affected, so for now, if you have a BMW or a Toyota Supra with a wireless charger, it’s probably best to just avoid using it until the problem is fixed.

Source: Apple says BMW wireless chargers really are messing with iPhone 15s – The Verge

IoT standard Matter 1.2 released

[…] Matter, version 1.2, is now available for device makers and platforms to build into their products. It is packed with nine new device types, revisions, and additions to existing categories, core improvements to the specification and SDK, and certification and testing tools. The Matter 1.2 certification program is now open and members expect to bring these enhancements and new device types to market later this year and into 2024 and beyond.

[…]

The new device types supported in Matter 1.2 include:

  1. Refrigerators – Beyond basic temperature control and monitoring, this device type is also applicable to other related devices like deep freezers and even wine and kimchi fridges.
  2. Room Air Conditioners – While HVAC and thermostats were already part of Matter 1.0, standalone Room Air Conditioners with temperature and fan mode control are now supported.
  3. Dishwashers – Basic functionality is included, like remote start and progress notifications. Dishwasher alarms are also supported, covering operational errors such as water supply and drain, temperature, and door lock errors.
  4. Laundry Washers – Progress notifications, such as cycle completion, can be sent via Matter. Dryers will be supported in a future Matter release.
  5. Robotic Vacuums – Beyond the basic features like remote start and progress notifications, there is support for key features like cleaning modes (dry vacuum vs wet mopping) and additional status details (brush status, error reporting, charging status).
  6. Smoke & Carbon Monoxide Alarms – These alarms will support notifications and audio and visual alarm signaling. Additionally, there is support for alerts about battery status and end-of-life notifications. These alarms also support self-testing. Carbon monoxide alarms support concentration sensing, as an additional data point.
  7. Air Quality Sensors – Supported sensors can capture and report on: PM1, PM2.5, PM10, CO2, NO2, VOC, CO, Ozone, Radon, and Formaldehyde. Furthermore, the addition of the Air Quality Cluster enables Matter devices to provide AQI information based on the device’s location.
  8. Air Purifiers – Purifiers utilize the Air Quality Sensor device type to provide sensing information and also include functionality from other device types like Fans (required) and Thermostats (optional). Air purifiers also include consumable resource monitoring, enabling notifications on filter status (both HEPA and activated carbon filters are supported in 1.2).
  9. Fans – Matter 1.2 includes support for fans as a separate, certifiable device type. Fans now support movements like rock/oscillation and new modes like natural wind and sleep wind. Additional enhancements include the ability to change the airflow direction (forward and reverse) and step commands to change the speed of airflow. […]

Core improvements to the Matter 1.2 specification include:

  • Latch & Bolt Door Locks – Enhancements for European markets that capture the common configuration of a combined latch and bolt lock unit.
  • Device Appearance – Added description of device appearance, so that devices can describe their color and finish. This will enable helpful representations of devices across clients.
  • Device & Endpoint Composition – Devices can now be hierarchically composed from complex endpoints allowing for accurate modeling of appliances, multi-unit switches, and multi-light fixtures.
  • Semantic Tags – Provide an interoperable way to describe the location and semantic functions of generic Matter clusters and endpoints to enable consistent rendering and application across the different clients. For example, semantic tags can be used to represent the location and function of each button on a multi-button remote control (see the sketch after this list).
  • Generic Descriptions of Device Operational States – Expressing the different operational modes of a device in a generic way will make it easier to generate new device types in future revisions of Matter and ensure their basic support across various clients.
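
To make the “Device & Endpoint Composition” and “Semantic Tags” ideas concrete, here is a purely hypothetical sketch of the data model in Python – not the actual Matter SDK (which is C++), and the tag names are made up:

```python
# Hypothetical illustration of endpoint composition + semantic tags:
# a device is a tree of endpoints, each carrying tags that describe
# its location and function.
from dataclasses import dataclass, field

@dataclass
class Endpoint:
    device_type: str                                  # e.g. "Button"
    tags: list[str] = field(default_factory=list)     # semantic tags
    children: list["Endpoint"] = field(default_factory=list)

# A two-button remote modeled as one composed device:
remote = Endpoint("Generic Switch", children=[
    Endpoint("Button", tags=["position: top", "function: volume-up"]),
    Endpoint("Button", tags=["position: bottom", "function: volume-down"]),
])

for child in remote.children:
    print(child.device_type, child.tags)
```
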
Under-the-Hood Enhancements: Matter SDK & Test Harness

Matter 1.2 brings important enhancements in the testing and certification program which helps companies bring products – hardware, software, chipsets and apps – to market faster. These improvements will benefit the wider developer community and ecosystem around Matter.

  • New Platform Support in SDK – Matter 1.2 SDK is now available for new platforms providing more ways for developers to build new products for Matter.
  • Enhancements to the Matter Test Harness – The Test Harness is a critical piece for ensuring the specification and its features are being implemented correctly. The Test Harness is now available via open source, making it easier for Matter developers to contribute to the tools (to make them better), and to ensure they are working with the latest version (with all features and bug fixes).

[…]

Developers interested in learning more about these enhancements can access the following resources:

[…]

Source: Matter 1.2 Arrives with Nine New Device Types – CSA-IOT

iLeakage hack can force iOS and macOS browsers to divulge passwords and much more

Researchers have devised an attack that forces Apple’s Safari browser to divulge passwords, Gmail message content, and other secrets by exploiting a side channel vulnerability in the A- and M-series CPUs running modern iOS and macOS devices.

iLeakage, as the academic researchers have named the attack, is practical and requires minimal resources to carry out. It does, however, require extensive reverse-engineering of Apple hardware and significant expertise in exploiting a class of vulnerability known as a side channel, which leaks secrets based on clues left in electromagnetic emanations, data caches, or other manifestations of a targeted system. The side channel in this case is speculative execution, a performance enhancement feature found in modern CPUs that has formed the basis of a wide corpus of attacks in recent years. The nearly endless stream of exploit variants has left chip makers—primarily Intel and, to a lesser extent, AMD—scrambling to devise mitigations.

Exploiting WebKit on Apple silicon

The researchers implement iLeakage as a website. When visited by a vulnerable macOS or iOS device, the website uses JavaScript to surreptitiously open a separate website of the attacker’s choice and recover site content rendered in a pop-up window. The researchers have successfully leveraged iLeakage to recover YouTube viewing history, the content of a Gmail inbox—when a target is logged in—and a password as it’s being autofilled by a credential manager. Once visited, the iLeakage site requires about five minutes to profile the target machine and, on average, roughly another 30 seconds to extract a 512-bit secret, such as a 64-character string.

Top: An email displayed in Gmail’s web view. Bottom: Recovered sender address, subject, and content. (Image: Kim, et al.)

“We show how an attacker can induce Safari to render an arbitrary webpage, subsequently recovering sensitive information present within it using speculative execution,” the researchers wrote on an informational website. “In particular, we demonstrate how Safari allows a malicious webpage to recover secrets from popular high-value targets, such as Gmail inbox content. Finally, we demonstrate the recovery of passwords, in case these are autofilled by credential managers.”

[…]

For the attack to work, a vulnerable computer must first visit the iLeakage website. For attacks involving YouTube, Gmail, or any other specific Web property, a user should be logged into their account at the same time the attack site is open. And as noted earlier, the attacker website needs to spend about five minutes probing the visiting device. Then, using the window.open JavaScript method, iLeakage can cause the browser to open any other site and begin siphoning certain data at anywhere from 24 to 34 bits per second.
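
The arithmetic on those leak rates is easy to check (a quick sketch using the article’s own figures):

```python
# A 64-character string is 512 bits; iLeakage reportedly leaks
# 24 to 34 bits per second once the target machine is profiled.
secret_bits = 64 * 8  # 512

for rate_bps in (24, 34):
    print(f"{rate_bps} bit/s -> {secret_bits / rate_bps:.0f} s per 512-bit secret")
# -> about 15-21 s of raw extraction time; the "roughly 30 seconds"
#    quoted above presumably includes retries and overhead (an assumption).
```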

[…]

iLeakage is a practical attack that requires only minimal physical resources to carry out. The biggest challenge—and it’s considerable—is the high caliber of technical expertise required. An attacker needs to not only have years of experience exploiting speculative execution vulnerabilities in general but also have fully reverse-engineered A- and M-series chips to gain insights into the side channel they contain. There’s no indication that this vulnerability has ever been discovered before, let alone actively exploited in the wild.

That means the chances of this vulnerability being used in real-world attacks anytime soon are slim, if not next to zero. It’s likely that Apple’s scheduled fix will be in place long before an iLeakage-style attack site does become viable.

Source: Hackers can force iOS and macOS browsers to divulge passwords and much more | Ars Technica

Hackers Target European Government With Roundcube Webmail Bug

Winter Vivern, a hacking group believed to be aligned with Belarus, attacked European government entities and a think tank starting on Oct. 11, according to an Ars Technica report Wednesday. ESET Research discovered the hack, which exploited a zero-day vulnerability in Roundcube, a webmail server with millions of users, and allowed the pro-Russian group to exfiltrate sensitive emails.

Roundcube patched the XSS vulnerability on Oct. 14, two days after ESET Research reported it. Winter Vivern sent malicious code to users disguised in an innocent-looking email from team.management@outlook.com. Users simply viewed the message in a web browser, and the hacker could access all their emails. Winter Vivern is a cyberespionage group that has been active since at least 2020 targeting governments in Europe and Central Asia.
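
The underlying bug class is stored cross-site scripting: attacker-controlled HTML in an email runs script the moment the message is rendered. As a generic illustration only – not Roundcube’s actual code or its patch – a webmail client can defang such payloads by allow-listing tags before rendering, sketched here with Python’s bleach library:

```python
# Generic stored-XSS mitigation sketch (not Roundcube's actual fix):
# allow-list a few benign tags and strip everything else before rendering.
import bleach  # pip install bleach

email_html = '<p>Quarterly report</p><script>exfiltrate(document.cookie)</script>'

safe_html = bleach.clean(
    email_html,
    tags=["p", "a", "b", "i", "br"],  # minimal allow-list
    strip=True,                       # remove disallowed tags outright
)
print(safe_html)  # <script> element is removed; its payload is left as inert text
```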

“Despite the low sophistication of the group’s toolset, it is a threat to governments in Europe because of its persistence, very regular running of phishing campaigns,” said Matthieu Faou, a malware researcher at ESET, in a post.

Roundcube released an update for multiple versions of its software on Oct. 16 fixing the cross-site scripting vulnerabilities. Despite the patch and known vulnerabilities in older versions, many applications don’t get updated by users, says Faou.

[…]

Source: Hackers Target European Government With Roundcube Webmail Bug

Privacy advocate challenges YouTube’s ad blocking detection (which isn’t spyware)

Last week, privacy advocate (and very occasional Reg columnist) Alexander Hanff filed a complaint with the Irish Data Protection Commission (DPC) decrying YouTube’s deployment of JavaScript code to detect the use of ad blocking extensions by website visitors.

On October 16, according to the Internet Archive’s Wayback Machine, Google published a support page declaring that “When you block YouTube ads, you violate YouTube’s Terms of Service.”

“If you use ad blockers,” it continues, “we’ll ask you to allow ads on YouTube or sign up for YouTube Premium. If you continue to use ad blockers, we may block your video playback.”

YouTube’s Terms of Service do not explicitly disallow ad blocking extensions, which remain legal in the US [PDF], in Germany, and elsewhere. But the language says users may not “circumvent, disable, fraudulently engage with, or otherwise interfere with any part of the Service” – which probably includes the ads.

Image of ‘Ad blockers are not allowed’ popup

YouTube’s open hostility to ad blockers coincides with the recent trial deployment of a popup notice presented to web users who visit the site with an ad-blocking extension in their browser – messaging tested on a limited audience at least as far back as May.

In order to present that popup YouTube needs to run a script, changed at least twice a day, to detect blocking efforts. And that script, Hanff believes, violates the EU’s ePrivacy Directive – because YouTube did not first ask for explicit consent to conduct such browser interrogation.

[…]

Asked how he hopes the Irish DPC will respond, Hanff replied via email, “I would expect the DPC to investigate and issue an enforcement notice to YouTube requiring them to cease and desist these activities without first obtaining consent (as per [Europe’s General Data Protection Regulation (GDPR)] standard) for the deployment of their spyware detection scripts; and further to order YouTube to unban any accounts which have been banned as a result of these detections and to delete any personal data processed unlawfully (see Article 5(1) of GDPR) since they first started to deploy their spyware detection scripts.”

Hanff’s use of strikethrough formatting acknowledges the legal difficulty of using the term “spyware” to refer to YouTube’s ad block detection code. The security industry’s standard defamation defense terminology for such stuff is PUPs, or potentially unwanted programs.

[…]

Hanff’s contention that ad-blocker detection without consent is unlawful in the EU was challenged back in 2016 by the maker of a detection tool called BlockAdblock. The software maker argued that JavaScript code is not stored in the way contemplated by Article 5(3) of the ePrivacy Directive, which the firm suggests was intended for cookies.

Hanff disagrees, and maintains that “The Commission and the legislators have been very clear that any access to a user’s terminal equipment which is not strictly necessary for the provision of a requested service, requires consent.

“This is also bound by CJEU Case C-673/17 (Planet49) from October 2019 which *all* Member States are legally obligated to comply with, under the [Treaty on the Functioning of the European Union] – there is no room for deviation on this issue,” he elaborated.

“If a script or other digital technology is strictly necessary (technically required to deliver the requested service) then it is exempt from the consent requirements and as such would pose no issue to publishers engaging in legitimate activities which respect fundamental rights under the Charter.

“It is long past time that companies meet their legal obligations for their online services,” insisted Hanff. “This has been law since 2002 and was further clarified in 2009, 2012, and again in 2019 – enough is enough.”

Google did not respond to a request for comment.

Source: Privacy advocate challenges YouTube’s ad blocking detection • The Register

Airbus commissions three wind-powered ships

The plane-maker on Thursday revealed it has “commissioned shipowner Louis Dreyfus Armateurs to build, own and operate these new, highly efficient vessels that will enter into service from 2026.”

The ships will have conventional engines that run on maritime diesel oil and e-methanol, the latter a fuel made with a process that produces less CO2 than conventional methanol production. Many ships run on heavy fuel oil, the gloopiest, dirtiest, and cheapest of the fuel oils, so Airbus has gone out of its way with its choice of diesel and e-methanol.

The ships will also feature half a dozen Flettner rotors, rotating cylinders that produce the Magnus effect – a phenomenon that produces lift thanks to pressure differences on either side of a rotating object. The rotors were invented over a century ago and are generating renewed interest as they reduce ships’ fuel requirements.

Here’s what they’ll look like on Airbus’s boats.

Airbus’s future ocean transports

Airbus expects its three vessels to enter service from 2026 and has calculated they will reduce its average annual transatlantic CO2 emissions from 68,000 to 33,000 tonnes by 2030. […]

The craft will have capacity to move around seventy 40-foot containers and six single-aisle aircraft subassembly sets – wings, fuselage, engine pylons, horizontal and vertical tail planes. Airbus’s current ships can only move three or four of those sets.

The ships will most often travel from Saint-Nazaire, France, to an A320 assembly line in Mobile, Alabama. […]

Source: Airbus commissions three wind-powered ships • The Register

Apple’s MAC Address Privacy Feature Has Never Worked

Ever since Apple re-branded as the “Privacy” company several years back, it’s been rolling out features designed to show its commitment to protecting users. Yet while customers might feel safer using an iPhone, there’s already plenty of evidence that Apple’s branding efforts don’t always match the reality of its products. In fact, a lot of its privacy features don’t actually seem to work.

Case in point: new research shows that one of Apple’s proffered privacy tools—a feature that was supposed to anonymize mobile users’ connections to Wi-Fi—is effectively “useless.” In 2020, Apple debuted a feature that, when switched on, was supposed to hide an iPhone user’s media access control—or MAC—address. When a device connects to a Wi-Fi network, it must first send out its MAC address so the network can identify it; when the same MAC address pops up in network after network, it can be used by network observers to identify and track a specific mobile user’s movements.

Apple’s feature was supposed to provide randomized MAC addresses for users as a way of stopping this kind of tracking. But, apparently, a bug persisted for years that made the feature effectively useless.
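
The concept itself is straightforward. Here is a sketch of one way per-network MAC randomization can work – illustrative only, not Apple’s actual scheme, and the key and SSIDs are made up:

```python
# Illustrative per-SSID "private address" derivation (not Apple's scheme):
# derive a stable private MAC per network so the real hardware address
# never needs to be broadcast.
import hashlib
import hmac

def private_mac(ssid: str, device_secret: bytes) -> str:
    digest = hmac.new(device_secret, ssid.encode(), hashlib.sha256).digest()
    octets = bytearray(digest[:6])
    octets[0] = (octets[0] | 0x02) & 0xFE  # locally administered, unicast
    return ":".join(f"{b:02x}" for b in octets)

secret = b"hypothetical per-device secret"
print(private_mac("CoffeeShopWiFi", secret))  # same SSID -> same private MAC
print(private_mac("HomeNet", secret))         # new SSID -> unlinkable MAC
```

The bug described below wasn’t in generating such an address – it was that the real MAC still leaked alongside the private one.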

According to a new report from Ars Technica, researchers recently tested the feature to see if it actually concealed their MAC addresses, only to find that it didn’t do that at all. Ars writes:

Despite promises that this never-changing address would be hidden and replaced with a private one that was unique to each SSID, Apple devices have continued to display the real one, which in turn got broadcast to every other connected device on the network.

One of the researchers behind the discovery of the vulnerability, Tommy Mysk, told Ars that, from the jump, “this feature was useless because of this bug,” and that, try as they might, he “couldn’t stop the devices from sending these discovery requests, even with a VPN. Even in the Lockdown Mode.”

What Apple’s justification is for advertising a feature that plainly does not work, I’m not sure. Gizmodo reached out to the company for comment and will update this story if they respond. A recent update, iOS 17.1, apparently patches the problem and ensures that the feature actually works.

Source: Apple’s MAC Address Privacy Feature Has Never Worked

Android 14 Storage Bug: Users with multiple profiles Locked Out of Devices

Android 14, the latest operating system from Google, is facing a major storage bug that is causing users to be locked out of their devices. This issue is particularly affecting users who utilize the “multiple profiles” feature. Reports suggest that the bug is comparable to being hit with “ransomware,” as users are unable to access their device storage.

Initially, it was believed that this bug was limited to the Pixel 6, but it has since been discovered that it impacts a wider range of devices upgrading to Android 14. This includes the Pixel 6, 6a, 7, 7a, Pixel Fold, and Pixel Tablet. The Google issue tracker for this bug has garnered over 350 replies, but there has been no response from Google so far. The bug has been assigned the medium priority level of “P2” and remains unassigned, indicating that no one is actively investigating it.

Users who have encountered this storage bug have shared log files containing concerning messages such as “Failed to open directory /data/media/0: Structure needs cleaning.” This issue leads to various problematic situations, with some users experiencing boot loops, others stuck on a “Pixel is starting…” message, and some unable to take screenshots or access their camera app due to the lack of storage. Users are also unable to view files on their devices from a PC over USB, and the System UI and Settings repeatedly crash. Essentially, without storage, the device becomes practically unusable.

Android’s user-profile system, designed to accommodate multiple users and separate work and personal profiles, appears to be the cause of this rarely encountered bug. Users have reported that the primary profile, which is typically the most important one, becomes locked out.

Source: Android 14 Storage Bug: Users Locked Out of Devices

Google turned ANC earbuds into heart rate sensor

Google today detailed its research into audioplethysmography (APG), which adds heart rate sensing capabilities to active noise canceling (ANC) headphones and earbuds “with a simple software upgrade.”

Google says the “ear canal [is] an ideal location for health sensing” given that the deep ear artery “forms an intricate network of smaller vessels that extensively permeate the auditory canal.”

This audioplethysmography approach works by “sending a low intensity ultrasound probing signal through an ANC headphone’s speakers.”

This signal triggers echoes, which are received via on-board feedback microphones. We observe that the tiny ear canal skin displacement and heartbeat vibrations modulate these ultrasound echoes.

A model that Google created processes that feedback into a heart rate reading, as well as a heart rate variability (HRV) measurement. The technique works even with music playing and “bad earbuds seals.” It was, however, impacted by body motion, so Google countered with a multi-tone approach that serves as a calibration tool to “find the best frequency that measures heart rate, and use only the best frequency to get high-quality pulse waveform.”
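
A toy sketch of the signal-processing idea (illustrative only – not Google’s model, and the numbers are invented): the heartbeat weakly amplitude-modulates the ultrasound echo, so the heart rate shows up as a spectral peak in the demodulated envelope.

```python
# Toy APG-style recovery: find the heartbeat as a spectral peak in a
# noisy, weakly modulated echo envelope. All values here are made up.
import numpy as np

fs = 500                        # envelope sample rate, Hz
t = np.arange(0, 30, 1 / fs)    # 30-second window
true_hr_hz = 72 / 60            # simulate a 72 bpm heartbeat

envelope = 1.0 + 0.02 * np.sin(2 * np.pi * true_hr_hz * t)  # tiny modulation
envelope += 0.01 * np.random.randn(t.size)                  # sensor noise

spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
band = (freqs > 0.7) & (freqs < 3.0)    # plausible heart rates, 42-180 bpm
est_hz = freqs[band][np.argmax(spectrum[band])]
print(f"Estimated heart rate: {est_hz * 60:.1f} bpm")  # ~72 bpm
```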

Google performed two sets of studies with 153 people that found APG “achieves consistently accurate heart rate (3.21% median error across participants in all activity scenarios) and heart rate variability (2.70% median error in inter-beat interval) measurements.”

Compared to existing HR sensors, it’s not impacted by skin tones. Ear canal size and “sub-optimal seal conditions” also do not impact accuracy. Google believes this is a better approach than putting traditional photoplethysmogram (PPG) and electrocardiogram (ECG) sensors, as well as a microcontroller, in headphones/earbuds:

…this sensor mounting paradigm inevitably adds cost, weight, power consumption, acoustic design complexity, and form factor challenges to hearables, constituting a strong barrier to its wide adoption.

Google closes with:

APG transforms any TWS ANC headphones into smart sensing headphones with a simple software upgrade, and works robustly across various user activities. The sensing carrier signal is completely inaudible and not impacted by music playing. More importantly, APG represents new knowledge in biomedical and mobile research and unlocks new possibilities for low-cost health sensing.

“APG is the result of collaboration across Google Health, product, UX and legal teams,” so this coming to Pixel Buds is far from guaranteed at this point.

Source: Google turned ANC earbuds into heart rate sensor

AI Risks – doomsayers, warriors, reformers

There is no shortage of researchers and industry titans willing to warn us about the potential destructive power of artificial intelligence. Reading the headlines, one would hope that the rapid gains in AI technology have also brought forth a unifying realization of the risks—and the steps we need to take to mitigate them.

The reality, unfortunately, is quite different. Beneath almost all of the testimony, the manifestoes, the blog posts, and the public declarations issued about AI are battles among deeply divided factions. Some are concerned about far-future risks that sound like science fiction. Some are genuinely alarmed by the practical problems that chatbots and deepfake video generators are creating right now. Some are motivated by potential business revenue, others by national security concerns.

The result is a cacophony of coded language, contradictory views, and provocative policy demands that are undermining our ability to grapple with a technology destined to drive the future of politics, our economy, and even our daily lives.

These factions are in dialogue not only with the public but also with one another. Sometimes, they trade letters, opinion essays, or social threads outlining their positions and attacking others’ in public view. More often, they tout their viewpoints without acknowledging alternatives, leaving the impression that their enlightened perspective is the inevitable lens through which to view AI. But if lawmakers and the public fail to recognize the subtext of their arguments, they risk missing the real consequences of our possible regulatory and cultural paths forward.

To understand the fight and the impact it may have on our shared future, look past the immediate claims and actions of the players to the greater implications of their points of view. When you do, you’ll realize this isn’t really a debate only about AI. It’s also a contest about control and power, about how resources should be distributed and who should be held accountable.

Beneath this roiling discord is a true fight over the future of society. Should we focus on avoiding the dystopia of mass unemployment, a world where China is the dominant superpower or a society where the worst prejudices of humanity are embodied in opaque algorithms that control our lives? Should we listen to wealthy futurists who discount the importance of climate change because they’re already thinking ahead to colonies on Mars? It is critical that we begin to recognize the ideologies driving what we are being told. Resolving the fracas requires us to see through the specter of AI to stay true to the humanity of our values.

One way to decode the motives behind the various declarations is through their language. Because language itself is part of their battleground, the different AI camps tend not to use the same words to describe their positions. One faction describes the dangers posed by AI through the framework of safety, another through ethics or integrity, yet another through security, and others through economics. By decoding who is speaking and how AI is being described, we can explore where these groups differ and what drives their views.

The Doomsayers

The loudest perspective is a frightening, dystopian vision in which AI poses an existential risk to humankind, capable of wiping out all life on Earth. AI, in this vision, emerges as a godlike, superintelligent, ungovernable entity capable of controlling everything. AI could destroy humanity or pose a risk on par with nukes. If we’re not careful, it could kill everyone or enslave humanity. It’s likened to monsters like the Lovecraftian shoggoths, artificial servants that rebelled against their creators, or paper clip maximizers that consume all of Earth’s resources in a single-minded pursuit of their programmed goal. It sounds like science fiction, but these people are serious, and they mean the words they use.

These are the AI safety people, and their ranks include the “Godfathers of AI,” Geoff Hinton and Yoshua Bengio. For many years, these leading lights battled critics who doubted that a computer could ever mimic capabilities of the human mind. Having steamrollered the public conversation by creating large language models like ChatGPT and other AI tools capable of increasingly impressive feats, they appear deeply invested in the idea that there is no limit to what their creations will be able to accomplish.

This doomsaying is boosted by a class of tech elite that has enormous power to shape the conversation. And some in this group are animated by the radical effective altruism movement and the associated cause of long-termism, which tend to focus on the most extreme catastrophic risks and emphasize the far-future consequences of our actions. These philosophies are hot among the cryptocurrency crowd, like the disgraced former billionaire Sam Bankman-Fried, who at one time possessed sudden wealth in search of a cause.

Reasonable sounding on their face, these ideas can become dangerous if stretched to their logical extremes. A dogmatic long-termer would willingly sacrifice the well-being of people today to stave off a prophesied extinction event like AI enslavement.

Many doomsayers say they are acting rationally, but their hype about hypothetical existential risks amounts to making a misguided bet with our future. In the name of long-termism, Elon Musk reportedly believes that our society needs to encourage reproduction among those with the greatest culture and intelligence (namely, his ultrarich buddies). And he wants to go further, such as limiting the right to vote to parents and even populating Mars. It’s widely believed that Jaan Tallinn, the wealthy long-termer who co-founded the most prominent centers for the study of AI safety, has made dismissive noises about climate change because he thinks that it pales in comparison with far-future unknown unknowns like risks from AI. The technology historian David C. Brock calls these fears “wishful worries”—that is, “problems that it would be nice to have, in contrast to the actual agonies of the present.”

More practically, many of the researchers in this group are proceeding full steam ahead in developing AI, demonstrating how unrealistic it is to simply hit pause on technological development. But the roboticist Rodney Brooks has pointed out that we will see the existential risks coming—the dangers will not be sudden and we will have time to change course. While we shouldn’t dismiss the Hollywood nightmare scenarios out of hand, we must balance them with the potential benefits of AI and, most important, not allow them to strategically distract from more immediate concerns. Let’s not let apocalyptic prognostications overwhelm us and smother the momentum we need to develop critical guardrails.

The Reformers

While the doomsayer faction focuses on the far-off future, its most prominent opponents are focused on the here and now. We agree with this group that there’s plenty already happening to cause concern: Racist policing and legal systems that disproportionately arrest and punish people of color. Sexist labor systems that rate feminine-coded résumés lower. Superpower nations automating military interventions as tools of imperialism and, someday, killer robots.

The alternative to the end-of-the-world, existential risk narrative is a distressingly familiar vision of dystopia: a society in which humanity’s worst instincts are encoded into and enforced by machines. The doomsayers think AI enslavement looks like the Matrix; the reformers point to modern-day contractors doing traumatic work at low pay for OpenAI in Kenya.

Propagators of these AI ethics concerns—like Meredith Broussard, Safiya Umoja Noble, Rumman Chowdhury, and Cathy O’Neil—have been raising the alarm on inequities coded into AI for years. Although we don’t have a census, it’s noticeable that many leaders in this cohort are people of color, women, and people who identify as LGBTQ. They are often motivated by insight into what it feels like to be on the wrong end of algorithmic oppression and by a connection to the communities most vulnerable to the misuse of new technology. Many in this group take an explicitly social perspective: When Joy Buolamwini founded an organization to fight for equitable AI, she called it the Algorithmic Justice League. Ruha Benjamin called her organization the Ida B. Wells Just Data Lab.

Others frame efforts to reform AI in terms of integrity, calling for Big Tech to adhere to an oath to consider the benefit of the broader public alongside—or even above—their self-interest. They point to social media companies’ failure to control hate speech or how online misinformation can undermine democratic elections. Adding urgency for this group is that the very companies driving the AI revolution have, at times, been eliminating safeguards. A signal moment came when Timnit Gebru, a co-leader of Google’s AI ethics team, was dismissed for pointing out the risks of developing ever-larger AI language models.

While doomsayers and reformers share the concern that AI must align with human interests, reformers tend to push back hard against the doomsayers’ focus on the distant future. They want to wrestle the attention of regulators and advocates back toward present-day harms that are exacerbated by AI misinformation, surveillance, and inequity. Integrity experts call for the development of responsible AI, for civic education to ensure AI literacy and for keeping humans front and center in AI systems.

This group’s concerns are well documented and urgent—and far older than modern AI technologies. Surely, we are a civilization big enough to tackle more than one problem at a time; even those worried that AI might kill us in the future should still demand that it not profile and exploit us in the present.

The Warriors

Other groups of prognosticators cast the rise of AI through the language of competitiveness and national security. One version has a post-9/11 ring to it—a world where terrorists, criminals, and psychopaths have unfettered access to technologies of mass destruction. Another version is a Cold War narrative of the United States losing an AI arms race with China and its surveillance-rich society.

Some arguing from this perspective are acting on genuine national security concerns, and others have a simple motivation: money. These perspectives serve the interests of American tech tycoons as well as the government agencies and defense contractors they are intertwined with.

OpenAI’s Sam Altman and Meta’s Mark Zuckerberg, both of whom lead dominant AI companies, are pushing for AI regulations that they say will protect us from criminals and terrorists. Such regulations would be expensive to comply with and are likely to preserve the market position of leading AI companies while restricting competition from start-ups. In the lobbying battles over Europe’s trailblazing AI regulatory framework, US megacompanies pleaded to exempt their general-purpose AI from the tightest regulations, and whether and how to apply high-risk compliance expectations on noncorporate open-source models emerged as a key point of debate. All the while, some of the moguls investing in upstart companies are fighting the regulatory tide. The Inflection AI co-founder Reid Hoffman argued, “The answer to our challenges is not to slow down technology but to accelerate it.”

Any technology critical to national defense usually has an easier time avoiding oversight, regulation, and limitations on profit. Any readiness gap in our military demands urgent budget increases and funds distributed to the military branches and their contractors, because we may soon be called upon to fight. Tech moguls like Google’s former chief executive Eric Schmidt, who has the ear of many lawmakers, signal to American policymakers about the Chinese threat even as they invest in US national security concerns.

The warriors’ narrative seems to overlook that science and engineering are different from what they were during the mid-twentieth century. AI research is fundamentally international; no one country will win a monopoly. And while national security is important to consider, we must also be mindful of the self-interest of those positioned to benefit financially.


As the science-fiction author Ted Chiang has said, fears about the existential risks of AI are really fears about the threat of uncontrolled capitalism, and dystopias like the paper clip maximizer are just caricatures of every start-up’s business plan. Cosma Shalizi and Henry Farrell further argue that “we’ve lived among shoggoths for centuries, tending to them as though they were our masters” as monopolistic platforms devour and exploit the totality of humanity’s labor and ingenuity for their own interests. This dread applies as much to our future with AI as it does to our past and present with corporations.

Regulatory solutions do not need to reinvent the wheel. Instead, we need to double down on the rules that we know limit corporate power. We need to get more serious about establishing good and effective governance on all the issues we lost track of while we were becoming obsessed with AI, China, and the fights picked among robber barons.

By analogy to the healthcare sector, we need an AI public option to truly keep AI companies in check. A publicly directed AI development project would serve to counterbalance for-profit corporate AI and help ensure an even playing field for access to the twenty-first century’s key technology while offering a platform for the ethical development and use of AI.

Also, we should embrace the humanity behind AI. We can hold founders and corporations accountable by mandating greater AI transparency in the development stage, in addition to applying legal standards for actions associated with AI. Remarkably, this is something that both the left and the right can agree on.

Ultimately, we need to make sure the network of laws and regulations that govern our collective behavior is knit more strongly, with fewer gaps and greater ability to hold the powerful accountable, particularly in those areas most sensitive to our democracy and environment. As those with power and privilege seem poised to harness AI to accumulate much more or pursue extreme ideologies, let’s think about how we can constrain their influence in the public square rather than cede our attention to their most bombastic nightmare visions for the future.

This essay was written with Nathan Sanders, and previously appeared in the New York Times.

Source: AI Risks – Schneier on Security

Air Canada Sues Website That Helps People Book More Flights, Simultaneously Calls Own Website Team Incompetent Beyond Belief

I am so frequently confused by companies that sue other companies for making their own sites and services more useful. It happens quite often. And quite often, the lawsuits are questionable CFAA claims against websites that scrape data to provide a better consumer experience, but one that still ultimately benefits the originating site.

Over the last few years various airlines have really been leading the way on this, with Southwest being particularly aggressive in suing companies that help people find Southwest flights to purchase. Unfortunately, many of these lawsuits are succeeding, to the point that a court has literally said that a travel company can’t tell others how much Southwest flights cost.

But the latest lawsuit of this nature doesn’t involve Southwest, and is quite possibly the dumbest one. Air Canada has sued the site Seats.aero that helps users figure out the best flights for their frequent flyer miles. Seats.aero is a small operation run by the company with the best name ever: Localhost, meaning that the lawsuit is technically “Air Canada v. Localhost” which sounds almost as dumb as this lawsuit is.

The Air Canada Group brings this action because Mr. Ian Carroll—through Defendant Localhost LLC—created a for-profit website and computer application (or “app”)— both called Seats.aero—that use substantial amounts of data unlawfully scraped from the Air Canada Group’s website and computer systems. In direct violation of the Air Canada Group’s web terms and conditions, Carroll uses automated digital robots (or “bots”) to continuously search for and harvest data from the Air Canada Group’s website and database. His intrusions are frequent and rapacious, causing multiple levels of harm, e.g., placing an immense strain on the Air Canada Group’s computer infrastructure, impairing the integrity and availability of the Air Canada Group’s data, soiling the customer experience with the Air Canada Group, interfering with the Air Canada Group’s business relations with its partners and customers, and diverting the Air Canada Group’s resources to repair the damage. Making matters worse, Carroll uses the Air Canada Group’s federally registered trademarks and logo to mislead people into believing that his site, app, and activities are connected with and/or approved by the real Air Canada Group and lending an air of legitimacy to his site and app. The Air Canada Group has tried to stop Carroll’s activities via a number of technological blocking measures. But each time, he employs subterfuge to fraudulently access and take the data—all the while boasting about his exploits and circumvention online.

Almost nothing in this makes any sense. Having third parties scrape sites for data about prices is… how the internet works. Whining about it is stupid beyond belief. And here, it’s doubly stupid, because anyone who finds a flight via seats.aero is then sent to Air Canada’s own website to book that flight. Air Canada is making money because Carroll’s company is helping people find Air Canada flights they can take.

Why are they mad?

Air Canada’s lawyers also seem technically incompetent. I mean, what the fuck is this?

Through screen scraping, Carroll extracts all of the data displayed on the website, including the text and images.

Carroll also employs the more intrusive API scraping to further feed Defendant’s website.

If the “API scraping” is “more intrusive” than screen scraping, you’re doing your APIs wrong. Is Air Canada saying that its tech team is so incompetent that its API puts more load on the site than scraping? Because, if so, Air Canada should fire its tech team. The whole point of an API is to make it easier for those accessing data from your website without needing to do the more cumbersome process of scraping.
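
For anyone unfamiliar with the distinction the complaint muddles: an API hands back a small structured payload, while screen scraping downloads and parses the whole rendered page. A rough sketch, with hypothetical endpoints that are not Air Canada’s real ones:

```python
# Hypothetical comparison of API access vs. screen scraping.
# Endpoints and parameters are made up for illustration.
import time
import requests

API_URL = "https://example.com/api/award-availability"  # hypothetical
PAGE_URL = "https://example.com/award-search"           # hypothetical

def polite_get(url: str, pause_s: float = 2.0, **params):
    """Fetch, then sleep, so a client stays well below any rate limit."""
    resp = requests.get(url, params=params, timeout=10)
    time.sleep(pause_s)
    return resp

# API: one compact JSON response, no HTML parsing, cheap for the server.
# flights = polite_get(API_URL, origin="YUL", dest="NRT").json()

# Screen scraping: pull the full page, then dig the data out of the markup.
# html = polite_get(PAGE_URL, origin="YUL", dest="NRT").text
```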

And, yes, this lawsuit really calls into question Air Canada’s tech team and their ability to run a modern website. If your website can’t handle having its flights and prices scraped a few times every day, then you shouldn’t have a website. Get some modern technology, Air Canada:

Defendant’s avaricious data scraping generates frequent and myriad requests to the Air Canada Group’s database—far in excess of what the Air Canada Group’s infrastructure was designed to handle. Its scraping collects a large volume of data, including flight data within a wide date range and across extensive flight origins and destinations—multiple times per day.

Maybe… invest in better infrastructure like basically every other website that can handle some basic scraping? Or, set up your API so it doesn’t fall over when used for normal API things? Because this is embarrassing:

At times, Defendant’s voluminous requests have placed such immense burdens on the Air Canada Group’s infrastructure that it has caused “brownouts.” During a brownout, a website is unresponsive for a period of time because the capacity of requests exceeds the capacity the website was designed to accommodate. During brownouts caused by Defendant’s data scraping, legitimate customers are unable to use aircanada.com or the Air Canada + Aeroplan mobile app, including to search for available rewards, redeem Aeroplan points for the rewards, search for and view reward travel availability, book reward flights, contact Aeroplan customer support, and/or obtain service through the Aeroplan contact center due to the high volume of calls during brownouts.

Air Canada’s lawyers also seem wholly unfamiliar with the concept of nominative fair use for trademarks. If you’re displaying someone’s trademarks for the sake of accurately talking about them, there’s no likelihood of confusion and no concern about the source of the information. Air Canada claiming that this is trademark infringement is ridiculous.

I guarantee that no one using Seats.aero thinks that they’re on Air Canada’s website.

The whole thing is so stupid that it makes me never want to fly Air Canada again. I don’t trust an airline that can’t set up its website/API to handle someone making its flights more attractive to buyers.

But, of course, in these crazy times with the way the CFAA has been interpreted, there’s a decent chance Air Canada could win.

For its part, Carroll says that he and his lawyers have reached out to Air Canada “repeatedly” to try to work with them on how they “retrieve availability information,” and that “Air Canada has ignored these offers.” He also notes that tons of other websites are scraping the very same information, and he has no idea why he’s been singled out. He further notes that he’s always been open to adjusting the frequency of searches and working with the airlines to make sure that his activities don’t burden the website.

But, really, the whole thing is stupid. The only thing that Carroll’s website does is help people buy more flights. It points people to the Air Canada site to buy tickets. It makes people want to fly more on Air Canada.

Why would Air Canada want to stop that, other than that it can’t admit that its website operations should all be handed over to a more competent team?

Source: Air Canada Would Rather Sue A Website That Helps People Book More Flights Than Hire Competent Web Engineers | Techdirt

New French AI Copyright Law Would Effectively Tax AI Companies, Enrich French Taxman

This blog has written a number of times about the reaction of creators to generative AI. Legal academic and copyright expert Andres Guadamuz has spotted what may be the first attempt to draw up a new law to regulate generative AI. It comes from French politicians, who have developed something of a habit of bringing in new laws attempting to control digital technology that they rarely understand but definitely dislike.

There are only four articles in the text of the proposal, which are intended to be added as amendments to existing French laws. Despite being short, the proposal contains some impressively bad ideas. The first of these is found in Article 2, which, as Guadamuz summarises, “assigns ownership of the [AI-generated] work (now protected by copyright) to the authors or assignees of the works that enabled the creation of the said artificial work.” Here’s the huge problem with that idea:

How can one determine the author of the works that facilitated the conception of the AI-generated piece? While it might seem straightforward if AI works are viewed as collages or summaries of existing copyrighted works, this is far from the reality. As of now, I’m unaware of any method to extract specific text from ChatGPT or an image from Midjourney and enumerate all the works that contributed to its creation. That’s not how these models operate.

Since there is no way to find out exactly which creators’ works helped generate a given piece of AI output, Guadamuz suggests that the French lawmakers might instead want creators to be paid, on the basis of aggregated statistics, according to their contribution to the training material that went into creating the generative AI system itself. Using his own writings as an example, he calculates what fraction of any given payout he would receive under this approach: for ChatGPT’s output, roughly 0.00001%. So even if the licensing fee for some hugely popular work generated using AI were €1,000,000, Guadamuz would receive only 10 cents. Most real-life payouts to creators would be vanishingly small.
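Spelling that arithmetic out (taking the 0.00001% estimate at face value):

$0.00001\% \times 1{,}000{,}000\,€ = 10^{-7} \times 10^{6}\,€ = 0.10\,€$

Ten cents, before any collecting society takes its cut.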

Article 3 of the French proposal builds on this ridiculous approach by requiring the names of all the creators who contributed to some AI-generated output to be included in that work. But as Guadamuz has already noted, there’s no way to find out exactly whose works have contributed to an output, leaving the only option to include the names of every single creator whose work is present in the training set – potentially millions of names.

Interestingly, Article 4 seems to recognize the payment problem raised above, and offers a way to deal with it. Guadamuz explains:

As it will be not possible to find the author of an AI work (which remember, has copyright and therefore isn’t in the public domain), the law will place a tax on the company that operates the service. So it’s sort of in the public domain, but it’s taxed, and the tax will be paid by OpenAI, Google, Midjourney, StabilityAI, etc. But also by any open source operator and other AI providers (Huggingface, etc). And the tax will be used to fund the collective societies in France… so unless people are willing to join these societies from abroad, they will get nothing, and these bodies will reap the rewards.

In other words, the net effect of the French proposal seems to be to tax the emerging AI giants (mostly US companies) and pay the money to French collecting societies. Guadamuz goes so far as to say: “in my view, this is the real intention of the legislation”. Anyone who thinks this is a good solution might want to read Chapter 7 of Walled Culture the book (free digital versions available), which quotes from a report revealing “a long history of corruption, mismanagement, confiscation of funds, and lack of transparency [by collecting societies] that has deprived artists of the revenues they earned”. Trying to fit generative AI into the straitjacket of an outdated copyright system designed for books is clearly unwise; using it as a pretext for funneling yet more money away from creators and towards collecting societies is just ridiculous.

Source: New French AI Copyright Law Would Effectively Tax AI Companies, Enrich Collection Societies | Techdirt

Motorola’s concept slap bracelet smartphone looks convenient

Forget foldable phones, the next big trend could be gadgets that bend.

Lenovo, which is currently holding its ninth Tech World event in Austin, Texas, showed off its new collaboration with its subsidiary Motorola: a smartphone that can wrap around your wrist like a watch band.

It’s admittedly quite fascinating to see the tech in action. Lenovo calls its device the “Adaptive Display Concept”, which consists of a Full HD Plus (2,228 x 1,080 pixels) pOLED screen that can “be bent and shaped into different” forms to meet the user’s needs. There’s no external hinge either, as the prototype is a single-screen Android phone. The company explains that bending it in half turns the 6.9-inch display into one measuring 4.6 inches across. It can stand upright on the bent portion, rest in an arc, or wrap around a wrist as mentioned earlier.

Unfortunately, that’s all we know about the hardware itself. The Adaptive Display Concept did appear on stage at Tech World 2023, where the presenter showed off its flexibility by placing it over her arm. Beyond that demonstration, though, both Lenovo and Motorola are keeping their lips sealed tight.

Source: Motorola’s concept ‘bracelet’ smartphone could be a clever final form for foldables | TechRadar

Empowering Responsible and Compliant Practices: Bridging the Gap for US Citizens and Corporations with the New EU-US Data Privacy Framework

The Data Privacy Framework (DPF) presents new legal guidance to facilitate personal data sharing between US companies and their counterparts in the EU and the UK. This framework empowers individuals with greater control over their personal data and streamlines business operations by creating common rules around interoperable dataflows. Moreover, the DPF will help enable clear contract terms and business codes of conduct for corporations that collect, use, and transfer personal data across borders.

Any business that collects data related to people in the EU must comply with the EU’s General Data Protection Regulation (GDPR), which is the toughest privacy and security law across the globe. Thus, the DPF helps US corporations avoid potentially hefty fines and penalties by ensuring their data transfers align with GDPR regulations.

Data transfer procedures, which were historically time-consuming and riddled with legal complications, are now faster and more straightforward with the DPF, which allows for more transatlantic dataflows agreed on by US companies and their EU and UK counterparts. On July 10, 2023, the European Commission finalized an adequacy decision that assures the US offers data protection levels similar to the EU’s.

[…]

US companies can register with the DPF through the Department of Commerce DPF website. Companies that previously self-certified compliance with the EU-US Privacy Shield can transition to DPF by recertifying their adherence to DPF principles, including updating privacy policies to reflect any change in procedures and data subject rights that are crucial for this transition. Businesses should develop privacy policies that identify an independent recourse mechanism that can address data protection concerns. To qualify for the DPF, a company must fall under the jurisdiction of either the Federal Trade Commission or the US Department of Transportation, though this reach may broaden in the future.

Source: Empowering Responsible and Compliant Practices: Bridging the Gap for US Citizens and Corporations with the New EU-US Data Privacy Framework | American Enterprise Institute – AEI

The whole self-certification thing seems leaky as a sieve to me… And once data has gone into the US intelligence services, you can assume it will go everywhere, with no way for the EU side to stop it.

Citrix urges “immediate” patching as exploit POC

Citrix has urged admins to “immediately” apply a fix for CVE-2023-4966, a critical information disclosure bug that affects NetScaler ADC and NetScaler Gateway, admitting it has been exploited.

Plus, there’s a proof-of-concept exploit, dubbed Citrix Bleed, now on GitHub. So if you are using an affected build, at this point assume you’ve been compromised, apply the update, and then kill all active sessions per Citrix’s advice from Monday.

The company first issued a patch for affected devices on October 10, and last week Mandiant warned that criminals — most likely cyberspies — have been abusing this hole to hijack authentication sessions and steal corporate info since at least late August.

[…]

Also last week, Mandiant Consulting CTO Charles Carmakal warned that “organizations need to do more than just apply the patch — they should also terminate all active sessions. These authenticated sessions will persist after the update to mitigate CVE-2023-4966 has been deployed.”

Citrix, in the Monday blog, also echoed this mitigation advice and told customers to kill all active and persistent sessions using the following commands:

kill icaconnection -all

kill rdp connection -all

kill pcoipConnection -all

kill aaa session -all

clear lb persistentSessions
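For reference — my annotations, not Citrix’s — those commands terminate, in order: ICA/HDX sessions, RDP sessions, PCoIP sessions, and AAA (authentication/VPN) sessions, then clear the load balancer’s persistence entries. The point, per Mandiant’s advice above, is to invalidate any sessions an attacker may already have hijacked, since those survive the patch itself.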

[…]

Source: Citrix urges “immediate” patching as exploit POC • The Register

Clearview Gets $10 Million UK Fine Reversed, Now Owes Slightly Less To Governments Around The World

Here’s how things went for the world’s most infamous purveyor of facial recognition tech when it came to its dealings with the United Kingdom. In a word: not great.

In addition to supplying its scraped data to known human rights abusers, Clearview was found to have supplied access to a multitude of UK and US entities. At that point (early 2020), it was also making its software available to a number of retailers, suggesting the tool its CEO claimed was instrumental in fighting serious crime (CSAM, terrorism) was just as great at fighting retail theft. For some reason, an anti-human-trafficking charity headed up by author J.K. Rowling was also on the customer list obtained by Buzzfeed.

Clearview’s relationship with the UK government soon soured. In December 2021, the UK government’s Information Commissioner’s Office (ICO) said the company had violated UK privacy laws with its non-consensual scraping of UK residents’ photos and data. That initial declaration from the ICO came with a $23 million fine attached, one that was reduced to a little less than $10 million ($9.4 million) roughly six months later, accompanied by demands Clearview immediately delete all UK resident data in its possession.

This fine was one of several the company has racked up from governments around the world. The Italian government — citing EU privacy law violations — levied a $21 million fine. The French government reached the same conclusions and imposed the same penalty, adding another $21 million to Clearview’s European tab.

The facial recognition tech company never bothered to proclaim its innocence after being fined by the UK government. Instead, it simply stated the UK government had no power to enforce this fine because Clearview was a United States company with no physical presence in the United Kingdom.

In addition to engaging in reputational rehab on the UK front, Clearview went to court to challenge the fine levied by the UK government. And it appears to have won this round for the moment, reducing its accounts payable ledger by about $10 million, as Natasha Lomas reports for TechCrunch.

[I]n a ruling issued yesterday its legal challenge to the ICO prevailed on jurisdiction grounds after the tribunal ruled the company’s activities fall outside the jurisdiction of U.K. data protection law owing to an exemption related to foreign law enforcement.

Which is pretty much the argument Clearview made months ago, albeit less elegantly, after it was first informed of the fine. The base argument is that Clearview is a US entity providing services to foreign entities and that it’s up to its foreign customers to comply with local laws, rather than Clearview itself.

That argument worked. And it worked because it appears the ICO chose the wrong law to wield against Clearview. The UK’s GDPR does not protect UK residents from actions taken by “competent authorities for law enforcement purposes.” (lol at that entire phrase.) Government customers of Clearview are instead subject only to the parts of EU data protection law the UK adopted post-Brexit in its Data Protection Act, which means the company’s (alleged) pivot to the public sector puts both its actions — and the actions of its UK law enforcement clients — outside of the reach of the GDPR.

Per the ruling, Clearview argued it’s a foreign company providing its service to “foreign clients, using foreign IP addresses, and in support of the public interest activities of foreign governments and government agencies, in particular in relation to their national security and criminal law enforcement functions.”

That’s enough to get Clearview off the hook. While the GDPR and EU privacy laws have extraterritorial provisions, they also make exceptions for law enforcement and national security interests. GDPR has more exceptions, which made it that much easier for Clearview to walk away from this penalty by claiming it only sold to entities subject to this exception.

Whether or not that’s actually true has yet to be determined. And it might have made more sense for ICO to prosecute this under the parts of EU law the UK government decided to adopt after deciding it no longer wanted to be part of this particular union.

Even if the charges had stuck, it’s unlikely Clearview would ever have paid the fine. According to its CEO and spokespeople, Clearview owes nothing to anyone. Whatever anyone posts publicly is fair game. And if the company wants to hoover up everything on the web that isn’t nailed down, well, that’s a problem for other people to be subjected to, possibly at gunpoint. Until someone can actually make something stick, all they’ve got is bills they can’t collect and a collective GFY from one of the least ethical companies to ever get into the facial recognition business.

Source: Clearview Gets $10 Million UK Fine Reversed, Now Owes Slightly Less To Governments Around The World | Techdirt

Google Decides To Pull Up The Ladder On The Open Internet, Pushes For Unconstitutional Regulatory Proposals

It’s pretty much the way of the world: beyond the basic enshittification story that has been so well told over the past year or so about how companies get worse and worse as they get more and more powerful, there’s also the well-known concept of successful innovative companies “pulling up the ladder” behind them, using the regulatory process to make it impossible for other companies to follow their own path to success. We’ve talked about this in the sense of political entrepreneurship, which is when the main entrepreneurial effort is not to innovate in newer and better products for customers, but rather using the political system for personal gain and to prevent competitors from having the same opportunities.

It happens all too frequently. And it’s been happening lately with the big internet companies, which relied on the open internet to become successful, but under massive pressure from regulators (and the media), keep shooting the open internet in the back, each time they can present themselves as “supportive” of some dumb regulatory regime. Facebook did it six years ago by supporting FOSTA wholeheartedly, which was the key tide shift that made the law viable in Congress.

And, now, it appears that Google is going down that same path. There have been hints here and there, such as when it mostly gave up the fight on net neutrality six years ago. However, Google had still appeared to be active in various fights to protect an open internet.

But, last week, Google took a big step towards pulling up the open internet ladder behind it, which got almost no coverage (and what coverage it got was misleading). And, for the life of me, I don’t understand why it chose to do this now. It’s one of the dumbest policy moves I’ve seen Google make in ages, and seems like a complete unforced error.

Last Monday, Google announced “a policy framework to protect children and teens online,” which was echoed by its subsidiary YouTube, which posted basically the same thing, talking about its “principled approach for children and teenagers.” Both of these pushed not just a “principled approach” for companies to take, but a legislative model (and I hear that they’re out pushing “model bills” across legislatures as well).

The “legislative” model is, effectively, California’s Age Appropriate Design Code. Yes, the very law that was declared unconstitutional only a few weeks before Google basically threw its weight behind the approach. What’s funny is that many, many people have (incorrectly) believed that Google was some sort of legal mastermind behind the NetChoice lawsuits challenging California’s law and other similar laws, when the reality appears to be that Google knows full well that it can handle the requirements of the law, but smaller competitors cannot. Google likes the law. It wants more like it, apparently.

The model includes “age assurance” (which is effectively age verification, though everyone pretends it’s not), greater parental surveillance, and the compliance nightmare of “impact assessments” (we talked about this nonsense in relation to the California law). Again, for many companies this is a good idea. But just because something is a good idea for companies to do does not mean that it should be mandated by law.

But that’s exactly what Google is pushing for here, even as a law that more or less mimics its framework was just found to be unconstitutional. While cynical people will say that maybe Google is supporting these policies hoping that they will continue to be found unconstitutional, I see little evidence to support that. Instead, it really sounds like Google is fully onboard with these kinds of duty of care regulations that will harm smaller competitors, but which Google can handle just fine.

It’s pulling up the ladder behind it.

And yet, the press coverage of this focused on the fact that this was being presented as an “alternative” to a full on ban for kids under 18 to be on social media. The Verge framed this as “Google asks Congress not to ban teens from social media,” leaving out that it was Google asking Congress to basically make it impossible for any site other than the largest, richest companies to be able to allow teens on social media. Same thing with TechCrunch, which framed it as Google lobbying against age verification.

But… it’s not? It’s basically lobbying for age verification, just in the guise of “age assurance,” which is effectively “age verification, but if you’re a smaller company you can get it wrong some undefined amount of time, until someone sues you.” I mean, what’s here is not “lobbying against age verification,” it’s basically saying “here’s how to require age verification.”

A good understanding of user age can help online services offer age-appropriate experiences. That said, any method to determine the age of users across services comes with tradeoffs, such as intruding on privacy interests, requiring more data collection and use, or restricting adult users’ access to important information and services. Where required, age assurance – which can range from declaration to inference and verification – should be risk-based, preserving users’ access to information and services, and respecting their privacy. Where legislation mandates age assurance, it should do so through a workable, interoperable standard that preserves the potential for anonymous or pseudonymous experiences. It should avoid requiring collection or processing of additional personal information, treating all users like children, or impinging on the ability of adults to access information. More data-intrusive methods (such as verification with “hard identifiers” like government IDs) should be limited to high-risk services (e.g., alcohol, gambling, or pornography) or age correction. Moreover, age assurance requirements should permit online services to explore and adapt to improved technological approaches. In particular, requirements should enable new, privacy-protective ways to ensure users are at least the required age before engaging in certain activities. Finally, because age assurance technologies are novel, imperfect, and evolving, requirements should provide reasonable protection from liability for good-faith efforts to develop and implement improved solutions in this space.

Much like Facebook caving on FOSTA, this is Google caving on age verification and other “duty of care” approaches to regulating the way kids have access to the internet. It’s pulling up the ladder behind itself, knowing that it was able to grow without having to take these steps, and making sure that none of the up-and-coming challenges to Google’s position will have the same freedom to do so.

And, for what? So that Google can go to regulators and say “look, we’re not against regulations, here’s our framework”? But Google has smart policy people. They have to know how this plays out in reality. Just as with FOSTA, it completely backfired on Facebook (and the open internet). This approach will do the same.

Not only will these laws inevitably be used against the companies themselves, they’ll also be weaponized and modified by policymakers who will make them even worse and even more dangerous, all while pointing to Google’s “blessing” of this approach as an endorsement.

For years, Google had been somewhat unique in continuing to fight for the open internet long after many other companies were switching over to ladder pulling. There were hints that Google was going down this path in the past, but with this policy framework, the company has now made it clear that it has no intention of being a friend to the open internet any more.

Source: Google Decides To Pull Up The Ladder On The Open Internet, Pushes For Unconstitutional Regulatory Proposals | Techdirt

Well, with Chrome-only support, DNS over HTTPS, and browser privacy sandboxing, Google has been off the “don’t be evil” path for some time, closing off the openness of the web by rebuilding or crushing the competition.

Microsoft admits ‘power issue’ downed Azure in West Europe

Microsoft techies are trying to recover storage nodes for a “small” number of customers following a “power issue” on October 20 that triggered Azure service disruptions and ruined breakfast for those wanting to use hosted virtual machines or SQL DB.

The degradation began at 0731 UTC on Friday when Microsoft spotted the unspecified power problem, which affected infrastructure in one Availability Zone in the West Europe region. As such, businesses using VMs, Storage, App Service, or Cosmos and SQL DB suffered interruptions.

So what caused this unplanned downtime session? Microsoft says in an incident report on its Azure status history page: “Due to an upstream utility disturbance, we moved to generator power for a section of one datacenter at approximately 0731 UTC. A subset of those generators supporting that section failed to take over as expected during the switch over from utility power, resulting in the impact.”

Engineers managed to restore power at around 0800 UTC and the impacted infrastructure began to clamber back online. When the networking and storage plumbing recovered, compute scale units were brought into service, and for the “vast majority” the Azure services were accessible again from 0915 UTC.

Yet not everyone was up and running smoothly, Microsoft admitted.

“A small amount of storage nodes needs to be recovered manually, leading to delays in recovery for some services and customers. We are working to recover these nodes and will continue to communicate to these impacted customers directly via the Service Health blade in the Azure Portal.”

Source: Microsoft admits ‘power issue’ downed Azure in West Europe • The Register

Scientists create world’s most water-resistant surface

[…]

A research team in Finland, led by Robin Ras, from Aalto University, and aided by researchers from the University of Jyväskylä, has developed a mechanism to make water droplets slip off surfaces with unprecedented efficacy.

Cooking, transportation, optics and hundreds of other technologies are affected by how water sticks to surfaces or slides off them, and adoption of water-resistant surfaces in the future could improve many household and industrial technologies, such as plumbing, shipping and the auto industry.

The research team created solid silicon surfaces with a “liquid-like” outer layer that repels water by making droplets slide off surfaces. The highly mobile topcoat acts as a lubricant between the product and the water droplets.

The discovery challenges existing ideas about friction between solid surfaces and water, opening a new avenue for studying slipperiness at the molecular level.

Sakari Lepikko, the lead author of the study, which was published in Nature Chemistry on Monday, said: “Our work is the first time that anyone has gone directly to the nanometer-level to create molecularly heterogeneous surfaces.”

By carefully adjusting conditions, such as temperature and water content, inside a reactor, the team could fine-tune how much of the silicon surface the monolayer covered.

Ras said: “I find it very exciting that by integrating the reactor with an ellipsometer, we can watch the self-assembled monolayers grow with an extraordinary level of detail.

“The results showed more slipperiness when SAM [self-assembled monolayer] coverage was low or high, which are also the situations when the surface is most homogeneous. At low coverage, the silicon surface is the most prevalent component, and at high, SAMs are the most prevalent.”

Lepikko added: “It was counterintuitive that even low coverage yielded exceptional slipperiness.”

Using the new method, the team ended up creating the slipperiest liquid surface in the world.

According to Lepikko, the discovery promises to have implications wherever droplet-repellent surfaces are needed. This covers hundreds of examples from daily life to industrial environments.

[…]

“The main issue with a SAM coating is that it’s very thin, and so it disperses easily after physical contact. But studying them gives us fundamental scientific knowledge which we can use to create durable practical applications,” Lepikko said.

[…]

Source: Scientists create world’s most water-resistant surface | Materials science | The Guardian

AI and smart mouthguards: the new frontline in fight against brain injuries

There was a hidden spectator of the NFL match between the Baltimore Ravens and Tennessee Titans in London on Sunday: artificial intelligence. As crazy as it may sound, computers have now been taught to identify on-field head impacts in the NFL automatically, using multiple video angles and machine learning. So a process that would take 12 hours – for each game – is now done in minutes. The result? After every weekend, teams are sent a breakdown of which players got hit, and how often.

This tech wizardry, naturally, has a deeper purpose. Over breakfast the NFL’s chief medical officer, Allen Sills, explained how it was helping to reduce head impacts, and drive equipment innovation.

Players who experience high numbers can, for instance, be taught better techniques. Meanwhile, nine NFL quarterbacks and 17 offensive linemen are wearing position-specific helmets, which have significantly more padding in the areas where they experience more impacts.

What may be next? Getting accurate sensors in helmets, so the force of each tackle can also be estimated, is one area of interest. As is using biomarkers, such as saliva and blood, to better understand when to bring injured players back to action.

If that’s not impressive enough, this weekend rugby union became the first sport to adopt smart mouthguard technology, which flags big “hits” in real time. From January, whenever an elite player experiences an impact in a tackle or ruck that exceeds a certain threshold, they will automatically be taken off for a head injury assessment by a doctor.

No wonder Dr Eanna Falvey, World Rugby’s chief medical officer, calls it a “gamechanger” in potentially identifying many of the 18% of concussions that now come to light only after a match.

[…]

As things stand, World Rugby is combining the G-force and rotational acceleration of a hit to determine when to automatically take a player off for an HIA. Over the next couple of years, it wants to improve its ability to identify the impacts with clinical meaning – which will also mean looking at other factors, such as the duration and direction of the impact.
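World Rugby hasn’t published its thresholds, but the decision rule being described boils down to something like the following sketch (the field names and cutoff numbers are invented for illustration, not taken from World Rugby):

from dataclasses import dataclass

@dataclass
class ImpactEvent:
    """One impact reported by a smart mouthguard's inertial sensors."""
    linear_g: float           # peak linear acceleration, in g
    rotational_rad_s2: float  # peak rotational acceleration, in rad/s^2

# Hypothetical cutoffs -- World Rugby has not published the real values.
LINEAR_THRESHOLD_G = 70.0
ROTATIONAL_THRESHOLD_RAD_S2 = 4000.0

def needs_hia(event: ImpactEvent) -> bool:
    """Flag the player for an off-field head injury assessment (HIA)
    when either measure exceeds its threshold."""
    return (event.linear_g >= LINEAR_THRESHOLD_G
            or event.rotational_rad_s2 >= ROTATIONAL_THRESHOLD_RAD_S2)

# Example: an 82g hit triggers an HIA even with modest rotation.
print(needs_hia(ImpactEvent(linear_g=82.0, rotational_rad_s2=3100.0)))  # True

The harder problem, as noted above, is folding in the duration and direction of the impact so that only hits with clinical meaning trigger a removal.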

[…]

Then there is the ability to use the smart mouthguard to track load over time. “It’s one thing to assist to identify concussions,” he says. “It’s another entirely to say it’s going to allow coaches and players to track exactly how many significant head impacts they have in a career – especially with all the focus on long-term health risks. If they can manage that load, particularly in training, that has performance and welfare benefits.”

[…]

Source: AI and smart mouthguards: the new frontline in fight against brain injuries | Sport | The Guardian

Spacecraft re-entry filling the atmosphere with metal vapor – and there will be more of it coming in

A group of scientists studying the effects of rocket and satellite reentry vaporization in Earth’s atmosphere have found some startling evidence that could point to disastrous environmental effects on the horizon.

The study, published in the Proceedings of the National Academy of Sciences, found that around 10 percent of large (>120 nm) sulfuric acid particles in the stratosphere contain aluminum and other elements consistent with the makeup of alloys used in spacecraft construction, including lithium, copper and lead. The other 90 percent comes from “meteoric smoke,” the particles left over when meteors vaporize during atmospheric entry, and that naturally occurring share is expected to shrink sharply as satellite reentries multiply.

“The space industry has entered an era of rapid growth,” the boffins said in their paper, “with tens of thousands of small satellites planned for low earth orbit.

“It is likely that in the next few decades, the percentage of stratospheric sulfuric acid particles that contain aluminum and other metals from satellite reentry will be comparable to the roughly 50 percent that now contain meteoric metals,” the team concluded.

Atmospheric circulation at those altitudes (beginning somewhere between four and 12 miles above ground level and extending up to 31 miles above Earth) means such particles are unlikely to have an effect on the surface environment or human health, the researchers opined.

Stratospheric changes might be even scarier, though

Earth’s stratosphere has classically been considered pristine, said Dan Cziczo, one of the study’s authors and head of Purdue University’s department of Earth, atmospheric and planetary studies. “If something is changing in the stratosphere – this stable region of the atmosphere – that deserves a closer look.”

One of the major features of the stratosphere is the ozone layer, which protects Earth and its varied inhabitants from harmful UV radiation. It’s been harmed by human activity before action was taken, and an increase in aerosolized spacecraft particles could have several consequences to our planet.

One possibility is effects on the nucleation of ice and nitric acid trihydrate, which form in stratospheric clouds over Earth’s polar regions where currents in the mesosphere (the layer above the stratosphere) tend to deposit both meteoric and spacecraft aerosols.

Ice formed in the stratosphere doesn’t necessarily reach the ground, and is more likely to have effects on polar stratospheric clouds, lead author and National Oceanic and Atmospheric Administration scientist Daniel Murphy told The Register.

“Polar stratospheric clouds are involved in the chemistry of the ozone hole,” Murphy said. However, “it is too early to know if there is any impact on ozone chemistry,” he added.

Along with changes in atmospheric ice formation and the ozone layer, the team said that more aerosols from vaporized spacecraft could change the stratospheric aerosol layer, something that scientists have proposed seeding in order to block more UV rays to fight the effects of global warming.

The amount of material being injected by spacecraft reentry is much smaller than the amounts scientists have considered for intentional injection, Murphy told us. However, “intentional injection of exotic materials into the stratosphere could raise many of the same questions [as the paper] on an even bigger scale,” he noted.

[…]

Source: Spacecraft re-entry filling the atmosphere with metal vapor • The Register