‘Super Melanin’ Speeds Healing, Stops Sunburn, and More

A team of scientists at Northwestern University has developed a synthetic version of melanin that could have a million and one uses. In new research, they showed that their melanin can prevent blistering and accelerate the healing process in tissue samples of freshly injured human skin. The team now plans to further develop their “super melanin” as both a medical treatment for certain skin injuries and as a potential sunscreen and anti-aging skincare product.

[…] Most people might recognize melanin as the main driver of our skin color, or as the reason why some people will tan when exposed to the sun’s harmful UV rays. But it’s a substance with many different functions across the animal kingdom. It’s the primary ingredient in the ink produced by squids; it’s used by certain microbes to evade a host’s immune system; and it helps create the iridescence of some butterflies. A version of melanin produced by our brain cells might even protect us from neurodegenerative conditions like Parkinson’s.

[…]

Their latest work was published Thursday in the Nature Portfolio journal npj Regenerative Medicine. In the study, they tested the melanin on both mice and donated human skin tissue samples that had been exposed to potentially harmful substances (the skin samples were exposed to toxic chemicals, while the mice were exposed to chemicals and UV radiation). In both scenarios, the melanin reduced or even entirely prevented the expected damage to the top and underlying layers of skin. It seemed to do this mainly by vacuuming up the damaging free radicals generated in the skin by these exposures, which in turn reduced inflammation and generally sped up the healing process.

The team’s creation very closely resembles natural melanin, to the extent that it seems to be just as biodegradable and nontoxic to the skin as the latter (in experiments so far, it doesn’t appear to be absorbed into the body when applied topically, further reducing any potential safety risks). But the ability to apply as much of their melanin as needed means that it could help repair skin damage that might otherwise overwhelm our body’s natural supply. And their version has been tweaked to be more effective at its job than usual.

[…]

It could have military applications—one line of research is testing whether the melanin can be used as a protective dye in clothing that would absorb nerve gas and other environmental toxins.

[…]

On the clinical side, they’re planning to develop the synthetic melanin as a treatment for radiation burns and other skin injuries. And on the cosmetic side, they’d like to develop it as an ingredient for sunscreens and anti-aging skincare products.

[…]

all of those important mechanisms we’re seeing [from the clinical research] are the same things that you look for in an ideal profile of an anti-aging cream, if you will, or a cream that tries to repair the skin.”

[…]

Source: ‘Super Melanin’ Speeds Healing, Stops Sunburn, and More

World’s First Commercial Spaceplane Faces Crucial Test at NASA

Dream Chaser, built by Sierra Space, is being prepped for transport to a NASA facility in Ohio, where it will undergo a series of tests to make sure the spaceplane can survive its heated reentry through Earth’s atmosphere. Passing these tests is a crucial step toward demonstrating Dream Chaser’s readiness for flight, and could help transform commercial space travel.

Sierra Space is hoping to see its spaceplane fly to the International Space Station (ISS) in 2024 as part of a contract with NASA. The first commercial spaceplane is currently at the company’s facility in Louisville, Colorado, and will soon make the roughly 60 mile (96 kilometer) journey to the Neil Armstrong Test Facility in Sandusky, Ohio, local media outlet Denver 7 reported.

The Colorado-based company was awarded a NASA Commercial Resupply Services 2 (CRS-2) contract in 2016, under which it will provide at least seven uncrewed missions to deliver cargo to and from the ISS. Sierra Space is targeting 2024 for the inaugural flight of Tenacity, the first spacecraft in the Dream Chaser fleet, from the Kennedy Space Center in Florida.

[…]

Dream Chaser is designed to fly to low Earth orbit, carrying cargo and passengers on a smooth ride to pitstops such as the ISS. The spaceplane will launch from Earth atop a rocket, and is designed to survive atmospheric reentry and perform runway landings on the surface upon its return. Sierra Space’s Dream Chaser is designed with foldable wings that fully unfurl once the spaceplane is in flight, generating power through solar arrays. The spaceplane is also equipped with heat shield tiles to protect it from the high temperatures of atmospheric reentry.

Unlike Virgin Galactic’s suborbital spaceplane, Sierra Space designed Dream Chaser to reach orbit and stay there for six months. The U.S. Space Force has its own spaceplane, the X-37B, which wrapped up a mysterious two-and-a-half-year mission in low Earth orbit in November 2022.

[…]

For its debut flight, Tenacity will ride atop United Launch Alliance’s Vulcan Centaur rocket. The spaceplane is scheduled for the rocket’s second mission, although Vulcan is yet to fly for the first time due to several delays. The spaceplane is tentatively slated for an April launch, but that still depends on the rocket’s first test flight.

In the future, Sierra Space also wants to launch crewed Dream Chaser missions to its own space station, as opposed to the Orbital Reef space station, which it is designing in collaboration with Jeff Bezos’ Blue Origin—a relationship that appears to be in doubt.

Source: World’s First Commercial Spaceplane Faces Crucial Test at NASA

Brave rivals Bing and ChatGPT with new privacy-focused AI chatbot

Brave, the privacy-focused browser that automatically blocks unwanted ads and trackers, is rolling out Leo — a native AI assistant that the company claims provides “unparalleled privacy” compared to some other AI chatbot services. Following several months of testing, Leo is now available to use for free by all Brave desktop users running version 1.60 of the web browser. Leo is rolling out “in phases over the next few days” and will be available on Android and iOS “in the coming months.”

The core features of Leo aren’t too dissimilar from other AI chatbots like Bing Chat and Google Bard: it can translate, answer questions, summarize webpages, and generate new content. Brave says the benefits of Leo over those offerings are that it aligns with the company’s focus on privacy — conversations with the chatbot are not recorded or used to train AI models, and no login information is required to use it. As with other AI chatbots, however, Brave claims Leo’s outputs should be “treated with care for potential inaccuracies or errors.”

[…]

Source: Brave rivals Bing and ChatGPT with new privacy-focused AI chatbot – The Verge

Latest Baldur’s Gate 3 Patch Nerfs Sex Speedruns because… Americans?

For being a role-playing game based on 5e Dungeons & Dragons, Baldur’s Gate 3 is notoriously horny. Regardless of mythical race, gender, or social station, many of the game’s alluring party members are willing to at least spank you, and because of this, BG3 has a thriving and official sex speedrun category. For a time, there was little stopping you from watching a reality-bending interspecies cutscene within minutes of creating your custom character. But after developer Larian Studios issued its massive Patch #4 on November 2, Sex% speedruns are in jeopardy.

Githyanki warrior Lae’zel has so far been the premier choice for Sex%. Up until now, her requirements for getting naked were pretty low—speedrunners, like Mae, who currently holds the world record at one minute and 58 seconds to fuck, just needed to jack up her approval rating and seal the deal. But Patch #4 makes Lae’zel more selective with her partners.

“For Lae’zel to decide to romance you, you no longer only need to gain high enough approval from her,” Larian’s patch notes say. “You must also have proven yourself worthy through your actions.”

“Whereas bullying a tiefling used to be enough to get Lae’zel down horrendously for us,” Mae told me over email, “she now has new criteria that’s seemingly based on quest progression. We’re not entirely sure what all of the different ways we can fulfill that criteria are yet, but we’ve so far confirmed that resolving the druid grove questline in addition to the previous relationship requirements seems to do it.”

[…]

Source: Latest Baldur’s Gate 3 Patch Nerfs Sex Speedruns

YouTube’s Crackdown Spurs Record Uninstalls of Ad Blockers, and Record Reinstalls in New Browsers… Time to Change Video Site?

[…] Previously unreported figures from ad blocking companies indicate that YouTube’s crackdown is working, with hundreds of thousands of people uninstalling ad blockers in October. The available data suggests that last month saw a record number of ad blockers uninstalled—and also a record for new ad blocker installs as people sought alternatives that wouldn’t trigger YouTube’s dreaded pop-up.

[…]

Munich-based Ghostery experienced three to five times the typical daily number of both uninstalls and installs throughout much of October, Modras says, leaving usage about flat. Over 90 percent of users who completed a survey about their reason for uninstalling cited the tool failing on YouTube. So intent were users on finding a workable blocker that many appear to have tried Microsoft’s Edge, a web browser whose market share pales beside Chrome’s. Ghostery installations on Edge surged 30 percent last month compared to September. Microsoft declined to comment.

Screenshot: YouTube uses escalating pop-up messages to demand that users stop using an ad blocker, eventually threatening to cut off access to videos. (Image: Google via WIRED Staff)

AdGuard, which says it has about 75 million users of its ad blocking tools including 4.5 million people who pay for them, normally sees around 6,000 uninstallations per day for its Chrome extension. From October 9 until the end of the month, those topped 11,000 per day, spiking to about 52,000 on October 18, says CTO Andrey Meshkov.

User complaints started flooding in at the 120-person, Cyprus-based company, about four every hour, at least half of them about YouTube. But as at Ghostery, installations also surged as others looked for relief, reaching about 60,000 installations on Chrome on October 18 and 27. Subscribers grew as people realized AdGuard’s paid tools remained unaffected by YouTube’s clampdown.

Another extension, AdLock, recorded about 30 percent more daily installations and uninstallations in October than in previous months, according to its product head.

[…]

Ad blocking executives say that user reports suggest YouTube’s attack on ad blockers has coincided with tests to increase the number of ads it shows. YouTube sold over $22 billion in ads through the first nine months of this year, up about 5 percent from the same period last year, accounting for about 10 percent of Google’s overall sales.

[…]

YouTube’s test has affected users accessing the website through Chrome on laptops and desktops, according to ad block developers. It doesn’t affect people using YouTube’s mobile or TV apps, using YouTube’s mobile site, or watching YouTube videos embedded on other sites. YouTube’s Lawton says warnings appear regardless of whether users are logged in to the service or using Incognito mode.

Further, the warnings seem to be triggered when YouTube detects certain open source filtering rules that many ad blockers use to identify ads, rather than by targeting any specific extensions, Ghostery’s Modras says. The technology deployed by YouTube mirrors code Google developed in 2017 for a program it calls Funding Choices that enables news and other websites to detect ad blockers, he adds.
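For context, the open source filter rules in question (EasyList and similar lists) are simple pattern syntaxes that blockers match against request URLs. Here’s a minimal, hypothetical sketch of how such a rule might be applied — the two example rules and the simplified matcher are illustrative only, covering just a sliver of the real syntax:

```python
import re

def easylist_to_regex(rule: str) -> re.Pattern:
    """Convert a simplified EasyList-style rule to a regex.

    Supports only two of the real syntax's features:
      ||domain  -> match the start of a hostname (any subdomain)
      ^         -> a separator character (anything but letters, digits, etc.)
    Real filter lists have many more options; this is a toy version.
    """
    pattern = re.escape(rule)
    pattern = pattern.replace(re.escape("||"), r"^https?://([^/]*\.)?")
    pattern = pattern.replace(re.escape("^"), r"[^0-9a-zA-Z._%-]")
    return re.compile(pattern)

def is_blocked(url: str, rules: list[str]) -> bool:
    """Return True if any rule matches the request URL."""
    return any(easylist_to_regex(r).search(url) for r in rules)

# Hypothetical rules in the style of EasyList (not taken from the actual list).
rules = ["||doubleclick.net^", "||adservice.example^"]

print(is_blocked("https://ad.doubleclick.net/ddm/adj/x", rules))  # True
print(is_blocked("https://www.youtube.com/watch?v=abc", rules))   # False
```

Because these rule lists are public and widely shared across blockers, a site can probe for the side effects of the rules themselves (blocked requests, hidden elements) rather than sniffing for any particular extension — which matches what Modras describes.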

The ad sleuths who figure out ways to detect ads, and the engineers skilled at blocking them, are working hard in private Slack groups and discussions on GitHub projects to figure out how to evade YouTube’s blocker blockade. But progress has been hampered because YouTube isn’t ensnaring every user in its dragnet. Relatively few of the developers have been able to trigger the warning themselves—perhaps the world’s only ad block users who cheer when YouTube finally catches them.

[…]

Some ad blockers are already adapting. Hankuper, the Slovakian company behind lesser known blocker AdLock, released a new version for Windows this week that it believes goes unnoticed by YouTube. If users find that to be true, it will push the fix to versions for macOS, Android, and iOS, says Kostiantyn Shebanov, Hankuper’s product head and business development manager.

Ghostery’s Modras worries about the consequences of Google escalating the war on blockers. Users losing anti-tracking features as they disable the tools could fall prey to online hazards, and the more complex blocking tactics companies like his are being forced to introduce could lead to unintended security holes. “The more powerful they have to become to deal with challenges, the more risk is involved,” he says.

There could also be legal repercussions. Modras says that in Europe, when a publisher takes steps to thwart an ad blocker, it’s illegal for developers to try to circumvent those measures. But he believes it is permissible to block ads if a blocker does so before triggering a warning.

[…]

Source: YouTube’s Crackdown Spurs Record Uninstalls of Ad Blockers | WIRED

It doesn’t help much that Google is essentially deploying spyware to figure out which browsers to block. And it’s apparently very, very targeted spyware too.

Source: Privacy advocate challenges YouTube’s ad blocking detection (which isn’t spyware)

Note: the uBlock Origin extension works to block ads. It’s a browser extension you should be using anyway. You can also install a browser like Brave or Firefox (whichever one you are not using at the moment) and use that browser only for watching YouTube. Brave will help block a lot of ads.

EU Parliament Fails To Understand That The Right To Read Is The Right To Train. Understands the copyright lobby has money though.

Walled Culture recently wrote about an unrealistic French legislative proposal that would require the listing of all the authors of material used for training generative AI systems. Unfortunately, the European Parliament has inserted a similarly impossible idea in its text for the upcoming Artificial Intelligence (AI) Act. The DisCo blog explains that MEPs added new copyright requirements to the Commission’s original proposal:

These requirements would oblige AI developers to disclose a summary of all copyrighted material used to train their AI systems. Burdensome and impractical are the right words to describe the proposed rules.

In some cases it would basically come down to providing a summary of half the internet.

Leaving aside the impossibly large volume of material that might need to be summarized, another issue is that it is by no means clear when something is under copyright, making compliance even more infeasible. In any case, as the DisCo post rightly points out, the EU Copyright Directive already provides a legal framework that addresses the issue of training AI systems:

The existing European copyright rules are very simple: developers can copy and analyse vast quantities of data from the internet, as long as the data is publicly available and rights holders do not object to this kind of use. So, rights holders already have the power to decide whether AI developers can use their content or not.

This is a classic case of the copyright industry always wanting more, no matter how much it gets. When the EU Copyright Directive was under discussion, many argued that an EU-wide copyright exception for text and data mining (TDM) and AI in the form of machine learning would be hugely beneficial for the economy and society. But as usual, the copyright world insisted on its right to double dip, and to be paid again if copyright materials were used for mining or machine learning, even if a license had already been obtained to access the material.

As I wrote in a column five years ago, that’s ridiculous, because the right to read is the right to mine. Updated for our AI world, that can be rephrased as “the right to read is the right to train”. By failing to recognize that, the European Parliament has sabotaged its own AI Act. Its amendment to the text will make it far harder for AI companies to thrive in the EU, which will inevitably encourage them to set up shop elsewhere.

If the final text of the AI Act still has this requirement to provide a summary of all copyright material that is used for training, I predict that the EU will become a backwater for AI. That would be a huge loss for the region, because generative AI is widely expected to be one of the most dynamic and important new tech sectors. If that happens, backward-looking copyright dogma will once again have throttled a promising digital future, just as it has done so often in the recent past.

Source: EU Parliament Fails To Understand That The Right To Read Is The Right To Train | Techdirt

EU Tries to Implement Client-Side Scanning (Death to Encryption) by Targeting EU Residents With Personalised, Misleading Ads

The EU Commission has been pushing client-side scanning for well over a year. This new intrusion into private communications has been pitched as perhaps the only way to prevent the sharing of child sexual abuse material (CSAM).

Mandates proposed by the EU government would have forced communication services to engage in client-side scanning of content. This would apply to every communication or service provider. But it would only negatively affect providers incapable of snooping on private communications because their services are encrypted.

Encryption — especially end-to-end encryption — protects the privacy and security of users. The EU’s pitch said protecting the children was paramount, even if it meant sacrificing the privacy and security of millions of EU residents.

Encrypted services would have been unable to comply with the mandate without stripping the client-side end from their end-to-end encryption. So, while it may have been referred to with the legislative euphemism “chat control” by EU lawmakers, the reality of the situation was that this bill — if passed intact — basically would have outlawed E2EE.

Fortunately, there was a lot of pushback. Some of it came from service providers who informed the EU they would no longer offer their services in EU member countries if they were required to undermine the security they provided for their users.

The more unexpected resistance came from EU member countries who similarly saw the gaping security hole this law would create and wanted nothing to do with it. On top of that, the EU government’s own lawyers told the Commission passing this law would mean violating other laws passed by this same governing body.

This pushback was greeted by increasingly nonsensical assertions by the bill’s supporters. In op-eds and public statements, backers insisted everyone else was wrong and/or didn’t care enough about the well-being of children to subject every user of any communication service to additional government surveillance.

That’s what happened on the front end of this push to create a client-side scanning mandate. On the back end, however, the EU government was trying to dupe people into supporting their own surveillance with misleading ads that targeted people most likely to believe any sacrifice of their own was worth making when children were on the (proverbial) line.

That’s the unsettling news being delivered to us by Vas Panagiotopoulos for Wired. A security researcher based in Amsterdam took a long look at apparently misleading ads that began appearing on Twitter as the EU government amped up its push to outlaw encryption.

Danny Mekić was digging into the EU’s “chat control” law when he began seeing disturbing ads on Twitter. These ads featured young women being (apparently) menaced by sinister men, backed by a similarly dark background and soundtrack. The ads displayed some supposed “facts” about the sexual abuse of children and ended with the notice that the ads had been paid for by the EU Commission.

The ads also cited survey results that supposedly said most European citizens supported client-side scanning of content and communications, apparently willing to sacrifice their own privacy and security for the common good.

But Mekić dug deeper and discovered the cited survey wasn’t on the level.

Following closer inspection, he discovered that these findings appeared biased and otherwise flawed. The survey results were gathered by misleading the participants, he claims, which in turn may have misled the recipients of the ads; the conclusion that EU citizens were fine with greater surveillance couldn’t be drawn from the survey, and the findings clashed with those of independent polls.

This discovery prompted Mekić to dig even deeper. What Mekić found was that the ads were very tightly targeted — so tightly targeted, in fact, that they could not have been deployed in this manner without violating European laws that aim to prevent exactly this sort of targeting, i.e. by using “sensitive data” like religious beliefs and political affiliations.

The ads were extremely targeted, meant to find people most likely to be swayed towards the EU Commission’s side, either because the targets never appeared to distrust their respective governments or because their governments had yet to tell the EU Commission to drop its proposed anti-encryption proposal.

Mekić found that the ads were meant to be seen by select targets, such as top ministry officials, while they were concealed from people interested in Julian Assange, Brexit, EU corruption, Eurosceptic politicians (Marine Le Pen, Nigel Farage, Viktor Orban, Giorgia Meloni), the German right-wing populist party AfD, and “anti-Christians.”

Mekić then found out that the ads, which have garnered at least 4 million views, were only displayed in seven EU countries: the Netherlands, Sweden, Belgium, Finland, Slovenia, Portugal, and the Czech Republic.

A document leaked earlier this year exposed which EU members were in favor of client-side scanning and its attendant encryption backdoors, as well as those who thought the proposed mandate was completely untenable.

The countries targeted by the EU Commission ad campaign are, for the most part, supportive of or indifferent to broken encryption, client-side scanning, and expanded surveillance powers. Slovenia, along with Spain, Cyprus, Lithuania, Croatia, and Hungary, was firmly in favor of bringing an end to end-to-end encryption.

[…]

While we’re accustomed to politicians airing misleading ads during election runs, this is something different. This is the representative government of several nations deliberately targeting countries and residents it apparently thinks might be receptive to its skewed version of the facts, which comes in the form of the presentation of misleading survey results against a backdrop of heavily-implied menace. And that’s on top of seeming violations of privacy laws regarding targeted ads that this same government body created and ratified.

It’s a tacit admission EU proposal backers think they can’t win this thing on its merits. And they can’t. The EU Commission has finally ditched its anti-encryption mandates after months of backlash. For the moment, E2EE survives in Europe. But it’s definitely still under fire. The next exploitable tragedy will bring with it calls to reinstate this part of the “chat control” proposal. It will never go away because far too many governments believe their citizens are obligated to let these governments shoulder-surf whenever they deem it necessary. And about the only thing standing between citizens and that unceasing government desire is end-to-end encryption.

Source: EU Pitched Client-Side Scanning By Targeting Certain EU Residents With Misleading Ads | Techdirt

As soon as you read that legislation is ‘for the kids’, be very, very wary – it’s usually for something completely beyond that remit. And this kind of legislation is the installation of Big Brother on every single communications line you use.

YouTube is cracking down on ad blockers globally. Time to go to the next video site. Vimeo, are you listening?

YouTube is no longer preventing just a small subset of its userbase from accessing its videos if they have an ad blocker. The platform has gone all out in its fight against the use of add-ons, extensions and programs that prevent it from serving ads to viewers around the world, it confirmed to Engadget. “The use of ad blockers violate YouTube’s Terms of Service,” a spokesperson told us. “We’ve launched a global effort to urge viewers with ad blockers enabled to allow ads on YouTube or try YouTube Premium for an ad free experience. Ads support a diverse ecosystem of creators globally and allow billions to access their favorite content on YouTube.”

YouTube started cracking down on the use of ad blockers earlier this year. It initially showed pop-ups to users telling them that it’s against the website’s TOS, and then it put a timer on those notifications to make sure people read them. By June, it took on a more aggressive approach and warned viewers that they wouldn’t be able to play more than three videos unless they disabled their ad blockers. That was a “small experiment” meant to urge users to enable ads or to try YouTube Premium, which the website has now expanded to its entire userbase. Some people can’t even play videos on Microsoft Edge and Firefox browsers even if they don’t have ad blockers, according to Android Police, but we weren’t able to replicate that behavior. [Note – I was!]

People are unsurprisingly unhappy about the development and have taken to social networks like Reddit to air their grievances. If they don’t want to enable ads, after all, the only way they can watch videos with no interruptions is to pay for a YouTube Premium subscription. Indeed, the notification viewers get heavily promotes the subscription service. “Ads allow YouTube to stay free for billions of users worldwide,” it says. But with YouTube Premium, viewers can go ad-free, and “creators can still get paid from [their] subscription.”

[…]

Source: YouTube is cracking down on ad blockers globally

It doesn’t help YouTube much that the method they have of detecting your ad blocker basically comes down to using spyware. Source: Privacy advocate challenges YouTube’s ad blocking detection (which isn’t spyware)

Mass lawsuit against Apple over throttled and broken iPhone batteries can go ahead, London tribunal rules

Apple Inc (AAPL.O) on Wednesday lost a bid to block a mass London lawsuit worth up to $2 billion which accuses the tech giant of hiding defective batteries in millions of iPhones.

The lawsuit was brought by British consumer champion Justin Gutmann on behalf of around 24 million iPhone users in the United Kingdom.

Gutmann is seeking damages from Apple on their behalf of up to 1.6 billion pounds ($1.9 billion) plus interest, with the claim’s midpoint range being 853 million pounds.

His lawyers argued Apple concealed issues with batteries in certain phone models by “throttling” them with software updates and installed a power management tool which limited performance.

Apple, however, said the lawsuit was “baseless” and strongly denied batteries in iPhones were defective, apart from in a small number of iPhone 6s models for which it offered free battery replacements.

[…]

Source: Mass lawsuit against Apple over iPhone batteries can go ahead, London tribunal rules | Reuters

Black 4.0 Is the New Ultrablack Paint

Vantablack is a special coating material, more so than a paint. It’s well known as one of the blackest coatings around, capable of absorbing almost all visible light in its complex nanotube structure. However, it’s complicated to apply, delicate, and not readily available, especially to those in the art world.

It was these drawbacks that led Stuart Semple to create his own incredibly black paint. Over the years, he’s refined the formula and improved its performance, steadily building a better product available to all. His latest effort is Black 4.0, and it promises to be the black paint that dominates all others.


Back in Black

This journey began in a wonderfully spiteful fashion. Upon hearing that one Anish Kapoor had secured exclusive rights to be the sole artistic user of Vantablack, Semple determined that something had to be done. Seven years ago, he set out to create his own ultra black paint that would far outperform conventional black paints on the market. Since his first release, he’s been delivering black paints that suck in more light and simply look blacker than anything else out there.

Black 4.0 has upped the ante to a new level. Speaking to Hackaday, Semple explained the performance of the new paint, being sold through his Culture Hustle website. “Black 4.0 absorbs an astonishing 99.95% of visible light which is about as close to full light absorption as you’ll ever get in a paint,” said Semple. He notes this outperforms Vantablack’s S-Vis spray-on product, which only achieves 99.8%, as did his previous Black 3.0 paint. Those numbers are impressive, and we’d dearly love to see the new paint put to the test against other options in the ultra black market.
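Those fractions of a percent matter more than they look, because what the eye sees is the light reflected, not the light absorbed. A quick back-of-the-envelope comparison of the figures quoted above:

```python
# What the eye sees is reflected light: 100% minus the absorbed fraction.
black_40_reflectance = 100 - 99.95  # Black 4.0 lets 0.05% escape
svis_reflectance = 100 - 99.8       # Vantablack S-Vis (and Black 3.0): 0.2%

# Black 4.0 reflects a quarter of what the 99.8% coatings do.
ratio = svis_reflectance / black_40_reflectance
print(f"A 99.8% coating reflects {ratio:.1f}x more light than Black 4.0")
```

So going from 99.8% to 99.95% absorption cuts the reflected light by a factor of four — which is why the new paint can make surface texture seem to vanish entirely.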

It might sound like mere fractional percentages, but it makes a difference. In sample tests, the new paint is more capable of fun visual effects since it absorbs yet more light. Under indoor lighting conditions, an item coated in Black 4.0 can appear to have no surface texture at all, looking to be a near-featureless black hole. Place an object covered in Black 4.0 on a surface coated in the same, and it virtually disappears. All the usual reflections and shadows that help us understand 3D geometry simply get sucked into the overwhelming blackness.

Black 4.0 compared to a typical black acrylic art paint. Credit: Stuart Semple

Beyond its greater light absorption, the paint has also seen a usability upgrade over Semple’s past releases. For many use cases, a single coat is all that’s needed. “It feels much nicer to use, it’s much more stable, more durable, and obviously much blacker,” he says, adding “The 3.0 would occasionally separate and on rare occasions collect little salt crystals at the surface, that’s all gone now.”

The added performance comes down to a new formulation of the paint’s “super-base” resin, which carries the pigment and mattifying compounds that give the paint its rich, dreamy darkness. It’s seen a few ingredient substitutions compared to previous versions, but a process change also went a long way to creating an improved product. “The interesting thing is that although all that helped, it was the process we used to make the paint that gave us the breakthrough, the order we add things, the way we mix them, and the temperature,” Semple told Hackaday.

The ultra black paint has a way of making geometry disappear. Credit: Stuart Semple

Black 4.0 is more robust than previous iterations, but it’s still probably not up to a full-time life out in the elements, says Semple. You could certainly coat a car in it, for example, but it probably wouldn’t hold up in the long term. He’s particularly excited about applications in astronomy and photography, where the extremely black paint can help catch light leaks and improve the performance of telescopes and cameras. It’s also perfect for creating an ultra black photographic backdrop.

No special application methods are required; Black 4.0 can be brush painted just like its predecessors. Indeed, it absorbs so much light that you probably don’t need to worry as much about brush marks as you usually would. Other methods, like using rollers or airbrushes, are perfectly fine, too.

Creating such a high-performance black paint didn’t come without challenges. Along the way, Semple contended with canisters of paint exploding, legal threats from others in the market, and one of the main scientists leaving the project. Wrangling supplies of weird and wonderful ingredients was understandably difficult, too. Nonetheless, he persevered, and has now managed to bring the first batches to market.

The first batches ship in November, so if you’re eager to get some of the dark stuff, you’d better move quickly. It doesn’t come cheap, but you’re always going to pay more for something claiming to be the world’s best. If you’ve got big plans, fear not—this time out, Semple will sell the paint in bulk 1 liter and 6 liter containers if you really need a job lot. Have fun out there, and if you do something radical, you know who to tell about it.

Source: Black 4.0 Is The New Ultrablack | Hackaday

Posted in Art

Researchers devise method using mirrors to monitor nuclear stockpiles offsite

Researchers say they have developed a method to remotely track the movement of objects in a room using mirrors and radio waves, in the hope it could one day help monitor nuclear weapons stockpiles.

According to the non-profit org International Campaign to Abolish Nuclear Weapons, nine countries – Russia, the United States, China, France, the United Kingdom, Pakistan, India, Israel and North Korea – collectively hold about 12,700 nuclear warheads.

Meanwhile, over 100 nations have signed the United Nations’ Treaty on the Prohibition of Nuclear Weapons, promising to not “develop, test, produce, acquire, possess, stockpile, use or threaten to use” the tools of mass destruction. Tracking signs of secret nuclear weapons development, or changes in existing warhead caches, can help governments identify entities breaking the rules.

A new technique devised by a team of researchers led by the Max Planck Institute for Security and Privacy (MPI-SP) aims to remotely monitor the removal of warheads stored in military bunkers. The scientists installed 20 adjustable mirrors and two antennae to monitor the movement of a blue barrel stored in a shipping container. One antenna emits radio waves that bounce off each mirror to create a unique reflection pattern detected by the other antenna.

The signals provide information on the location of objects in the room. Moving the objects or mirrors will produce a different reflection pattern. Experiments showed that the system was sensitive enough to detect whether the blue barrel had shifted by just a few millimetres. Now, the team reckons that it could be applied to monitor whether nuclear warheads have been removed from stockpiles.

At this point, readers may wonder why this tech is proposed for the job when CCTV, or Wi-Fi location, or any number of other observation techniques could do the same job.

The paper explains that the antenna-and-mirror technique doesn’t require secure communication channels or tamper-resistant sensor hardware. The paper’s authors argue it is also “robust against major physical and computational attacks.”

“Seventy percent of the world’s nuclear weapons are kept in storage for military reserve or awaiting dismantlement,” explained Sebastien Philippe, co-author of a research paper published in Nature Communications and an associate research scholar at the School of Public and International Affairs at Princeton University.

“The presence and number of such weapons at any given site cannot be verified easily via satellite imagery or other means that are unable to see into the storage vaults. Because of the difficulties to monitor them, these 9,000 nuclear weapons are not accounted for under existing nuclear arms control agreements. This new verification technology addresses this long-standing challenge and contributes to future diplomatic efforts that would seek to limit all nuclear weapon types,” he said in a statement.

In practice, officials from an organisation such as the UN-led International Atomic Energy Agency, which promotes peaceful uses of nuclear energy, could install the system in a nuclear bunker and measure the radio waves reflecting off its mirrors. The unique fingerprint signal can then be stored in a database.

They could later ask the government controlling the nuclear stockpile to measure the radio wave signal recorded by its detector antenna and compare it to the initial result to check whether any warheads have been moved.

If both measurements are the same, the nuclear weapon stockpile has not been tampered with. But if they’re different, it shows something is afoot. The method is only effective if the initial radio fingerprint detailing the original configuration of the warheads is kept secret, however.
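The comparison step is simple in principle. Here is a minimal Python sketch of the idea; the fingerprint values, tolerance, and function name are illustrative assumptions, not details from the paper:

```python
import math

def fingerprint_matches(baseline, measurement, tolerance=0.01):
    """Compare a stored radio fingerprint against a fresh measurement.

    Each fingerprint is modeled as a list of received signal amplitudes,
    one per mirror configuration. Moving any object in the room changes
    the multipath reflections and hence the measured amplitudes.
    """
    if len(baseline) != len(measurement):
        return False
    # Root-mean-square deviation between the two response vectors.
    rms = math.sqrt(
        sum((b - m) ** 2 for b, m in zip(baseline, measurement)) / len(baseline)
    )
    return rms <= tolerance

# Baseline recorded when the stockpile was sealed.
baseline = [0.82, 0.31, 0.57, 0.44]

# An unchanged room reproduces the fingerprint almost exactly.
print(fingerprint_matches(baseline, [0.82, 0.31, 0.57, 0.44]))  # True

# A shifted object perturbs the reflections beyond the tolerance.
print(fingerprint_matches(baseline, [0.79, 0.35, 0.52, 0.48]))  # False
```

In the real system, the fingerprint is a far richer radio channel response measured across many mirror configurations, and the tolerance has to be tight enough that a shift of a few millimetres falls outside it.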

Unfortunately, it’s not quite foolproof: adversaries could, in principle, use machine learning algorithms to predict how the positions of the mirrors generate the corresponding radio wave signal detected by the antenna.

“With 20 mirrors, it would take eight weeks for an attacker to decode the underlying mathematical function,” said Johannes Tobisch, co-author of the study and a researcher at the MPI-SP. “Because of the scalability of the system, it’s possible to increase the security factor even more.”

To prevent this, the researchers said that the verifier and prover should agree to send back a radio wave measurement within a short time frame, such as within a minute or so. “Beyond nuclear arms control verification, our inspection system could find application in the financial, information technology, energy, and art sectors,” they concluded in their paper.

“The ability to remotely and securely monitor activities and assets is likely to become more important in a world that is increasingly networked and where physical travel and on-site access may be unnecessary or even discouraged.”

Source: Researchers devise new method to monitor nuclear stockpiles • The Register

Judge dismisses most of artists’ AI copyright lawsuits against Midjourney, Stability AI

A judge in California federal court on Monday trimmed a lawsuit by visual artists who accuse Stability AI, Midjourney and DeviantArt of misusing their copyrighted work in connection with the companies’ generative artificial intelligence systems.

U.S. District Judge William Orrick dismissed some claims from the proposed class action brought by Sarah Andersen, Kelly McKernan and Karla Ortiz, including all of the allegations against Midjourney and DeviantArt. The judge said the artists could file an amended complaint against the two companies, whose systems utilize Stability’s Stable Diffusion text-to-image technology.

Orrick also dismissed McKernan and Ortiz’s copyright infringement claims entirely. The judge allowed Andersen to continue pursuing her key claim that Stability’s alleged use of her work to train Stable Diffusion infringed her copyrights.

The same allegation is at the heart of other lawsuits brought by artists, authors and other copyright owners against generative AI companies.

“Even Stability recognizes that determination of the truth of these allegations – whether copying in violation of the Copyright Act occurred in the context of training Stable Diffusion or occurs when Stable Diffusion is run – cannot be resolved at this juncture,” Orrick said.

The artists’ attorneys Joseph Saveri and Matthew Butterick said in a statement that their “core claim” survived, and that they were confident that they could address the court’s concerns about their other claims in an amended complaint to be filed next month.

A spokesperson for Stability declined to comment on the decision. Representatives for Midjourney and DeviantArt did not immediately respond to requests for comment.

The artists said in their January complaint that Stability used billions of images “scraped” from the internet, including theirs, without permission to teach Stable Diffusion to create its own images.

Orrick agreed with all three companies that the images the systems actually created likely did not infringe the artists’ copyrights. He allowed the claims to be amended but said he was “not convinced” that allegations based on the systems’ output could survive without showing that the images were substantially similar to the artists’ work.

The judge also dismissed other claims from the artists, including that the companies violated their publicity rights and competed with them unfairly, with permission to refile.

Orrick dismissed McKernan and Ortiz’s copyright claims because they had not registered their images with the U.S. Copyright Office, a requirement for bringing a copyright lawsuit.

The case is Andersen v. Stability AI Ltd, U.S. District Court for the Northern District of California, No. 3:23-cv-00201.

For the artists: Joseph Saveri of Joseph Saveri Law Firm; and Matthew Butterick

For Stability: Paul Schoenhard of Fried Frank Harris Shriver & Jacobson

For Midjourney: Angela Dunning of Cleary Gottlieb Steen & Hamilton

For DeviantArt: Andy Gass of Latham & Watkins

Read more:

Lawsuits accuse AI content creators of misusing copyrighted work

AI companies ask U.S. court to dismiss artists’ copyright lawsuit

US judge finds flaws in artists’ lawsuit against AI companies

Source: Judge pares down artists’ AI copyright lawsuit against Midjourney, Stability AI | Reuters

These suits are absolute nonsense. It’s like suing a person for having seen some art and made something a bit like it. It’s not very surprising that this has been wiped off the table.

Drugmakers Are Set To Pay 23andMe Millions To Access Your DNA – which is also your family’s DNA

GSK will pay 23andMe $20 million for access to the genetic-testing company’s vast trove of consumer DNA data, extending a five-year collaboration that’s allowed the drugmaker to mine genetic data as it researches new medications.

Under the new agreement, 23andMe will provide GSK with one year of access to anonymized DNA data from the approximately 80% of gene-testing customers who have agreed to share their information for research, 23andMe said in a statement Monday. The genetic-testing company will also provide data-analysis services to GSK.

23andMe is best known for its DNA-testing kits that give customers ancestry and health information. But the DNA it collects is also valuable, including for scientific research. With information from more than 14 million customers, the only data sets that rival the size of the 23andMe library belong to Ancestry.com and the Chinese government. The idea for drugmakers is to comb the data for hints about genetic pathways that might be at the root of disease, which could significantly speed up the long, slow process of drug development. GSK and 23andMe have already taken one potential medication to clinical trials: a cancer drug that works to block CD96, a protein that helps modulate the body’s immune responses. It entered that testing phase in four years, compared to an industry average of about seven years. Overall, the partnership between GSK and 23andMe has produced more than 50 new drug targets, according to the statement.

The new agreement changes some components of the collaboration. Any discoveries GSK makes with the 23andMe data will now be solely owned by the British pharmaceutical giant, while the genetic-testing company will be eligible for royalties on some projects. In the past, the two companies pursued new drug targets jointly. GSK’s new deal with 23andMe is also non-exclusive, leaving the genetic-testing company free to license its database to other drugmakers.

Source: Drugmakers Are Set To Pay 23andMe Millions To Access Consumer DNA – Slashdot

So – you paid for a DNA test and it turns out you didn’t think of the privacy aspect at all. Nor did you consider that you also gave up your family’s DNA, or that you can’t ever change your DNA. Well done. It’s being spread all over the place. And no, the data is not anonymous – DNA is the most personal information you can ever give up.

Particle Accelerator can now be built on a Chip

Particle accelerators range in size from a room to a city, but scientists are now looking closely at chip-sized electron accelerators, a new study finds. Potential near-term applications for the technology include radiation therapy for zapping skin cancer and, longer term, new kinds of laser and light sources.

Particle accelerators generally propel particles within metal tubes or rings. The rate at which they can accelerate particles is limited by the peak fields the metallic surfaces can withstand. Conventional accelerators range in size from a few meters for medical applications to kilometers for fundamental research. The fields they use are often on the scale of millions of volts per meter.

In contrast, electrically insulating dielectric materials (stuff that doesn’t conduct electricity well but does support electrostatic fields well) can withstand light fields thousands of times stronger. This has led scientists to investigate creating dielectric accelerators that rely on lasers to hurl particles.

[…]

The physicists fabricated a tiny channel 225 nanometers wide and up to 0.5 millimeters long. An electron beam entered one end of the channel and exited the other.

The researchers shone infrared laser pulses 250 femtoseconds long on top of the channel to help accelerate electrons down it. Inside the channel, two rows of up to 733 silicon pillars, each 2 micrometers high, interacted with these laser pulses to generate accelerating forces.

The electrons entered the accelerators with an energy of 28,400 electron-volts, traveling at roughly one-third the speed of light. They exited it with an energy of 40,700 electron-volts, a 43 percent boost in energy.
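Those figures check out, and they imply an impressive acceleration gradient for such a tiny device. A back-of-envelope sketch (the gradient estimate assumes the electrons gain all their energy over the full 0.5 mm channel):

```python
# Energy gain of the electrons through the on-chip channel.
e_in = 28_400   # entry energy, electron-volts
e_out = 40_700  # exit energy, electron-volts

gain = e_out - e_in
boost = gain / e_in * 100
print(f"Energy gain: {gain} eV ({boost:.0f}% boost)")
# → Energy gain: 12300 eV (43% boost)

# Average gradient if the gain accrues over the 0.5 mm channel length,
# for comparison with the "millions of volts per meter" of metal machines.
channel_m = 0.5e-3
print(f"Average gradient: {gain / channel_m / 1e6:.1f} MeV/m")
# → Average gradient: 24.6 MeV/m
```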

This new type of particle accelerator can be built using standard cleanroom techniques, such as electron beam lithography. “This is why we think that our results represent a big step forward,” Hommelhoff says. “Everyone can go ahead and start engineering useful machines from this.”

[…]

Applications for these nanophotonic electron accelerators depend on the energies they can reach. Electrons of up to about 300,000 electron-volts are typical for electron microscopy, Hommelhoff says. For treatment of skin cancer, 10 million electron-volt electrons are needed. Whereas such medical applications currently require an accelerator 1 meter wide, as well as additional large, heavy and expensive parts to help drive the accelerator, “we could in principle get rid of both and have just a roughly 1-centimeter chip with a few extra centimeters for the electron source,” adds study lead author Tomáš Chlouba, a physicist at the University of Erlangen-Nuremberg in Germany.

Applications such as synchrotron light sources, free electron lasers, and searches for lightweight dark matter open up with billion electron-volt electrons. With trillion electron-volt electrons, high-energy colliders become possible, Hommelhoff says.

The scientists note there are many ways to improve their device beyond their initial proof-of-concept structures. They now aim to experiment with greater acceleration and higher electron currents to help enable applications, as well as boosting output by fabricating many accelerator channels next to each other that can all be driven by the same laser pulses.

In addition, although the new study experimented with structures made from silicon due to the relative ease of working with it, “silicon is not really a high-damage threshold material,” Hommelhoff says. Structures made of glass or other materials may allow much stronger laser pulses and thus more powerful acceleration, he says.

The researchers are interested in building a small-scale accelerator, “maybe with skin cancer treatment applications in mind first,” Hommelhoff says. “This is certainly something that we should soon transfer to a startup company.”

The scientists detailed their findings in the 19 October issue of the journal Nature.

Source: Particle Accelerator on a Chip Hits Penny-Size – IEEE Spectrum

Google CEO Defends Paying $26b in 2021 to Remain Top Search Engine

Google CEO Sundar Pichai upheld the company’s decision to pay out billions of dollars to remain the top global search engine at the U.S. anti-trust trial on Monday, according to a report from The Wall Street Journal. Pichai claimed he tried to give users a “seamless and easy” experience, even if it meant paying Apple and other tech companies an exorbitant fee.

The U.S. Department of Justice is arguing that Google created the building blocks to hold a monopoly over the market, but Pichai disagrees, saying the company is the dominant search engine because it is better than its competitors.

“We realized early on that browsers are critical to how people are able to navigate and use the web,” Pichai said during questioning, as reported by The Journal. “It became very clear early on that if you make the user’s experience better, they would use the web more, they would enjoy using the web more, and they would search more in Google as well.”

Pichai testified that Google’s payments to phone companies and manufacturers were meant to push them toward more security upgrades and not just enabling Google to be the primary search engine.

Internal emails between Pichai and his colleagues in 2007 were shared during the cross-examination, revealing Google’s insistence on being Apple’s default search engine. Pichai says he was worried about being the only search engine and requested a Yahoo backup version.

Google paid Apple a reported $18 billion to remain the default search engine on its Macs, iPhones, and iPads in 2021, and paid tech companies a grand total of $26 billion in 2021 alone, according to court documents.

[…]

Source: Google CEO Defends Paying Billions to Remain Top Search Engine

Apple says BMW wireless chargers really are messing with iPhone 15s

Users have been reporting that their iPhone 15’s NFC chips were failing after using BMW’s in-car wireless charging, but until now, Apple hasn’t addressed the complaints. That seems to have changed as MacRumors reported this week that an Apple internal memo to third-party repair providers says a software update later this year should prevent a “small number” of in-car wireless chargers from “temporarily” disabling iPhone 15 NFC chips.

Apple reportedly says that until the fix comes out, anyone who experiences this should not use the wireless charger in their car. Users have been complaining about BMW wireless chargers breaking Apple Pay and the BMW digital key feature in posts on Reddit, Apple’s Support community, and MacRumors’ own forums.

BMW seemed to acknowledge the issue earlier this month, when the BMW UK X account replied to a complaint saying the company is working with Apple to investigate. There’s no easy way to know which models are affected, so for now, if you have a BMW or a Toyota Supra with a wireless charger, it’s probably best to avoid using it until the problem is fixed.

Source: Apple says BMW wireless chargers really are messing with iPhone 15s – The Verge

IoT standard Matter 1.2 released

[…] Matter, version 1.2, is now available for device makers and platforms to build into their products. It is packed with nine new device types, revisions, and additions to existing categories, core improvements to the specification and SDK, and certification and testing tools. The Matter 1.2 certification program is now open and members expect to bring these enhancements and new device types to market later this year and into 2024 and beyond.

[…]

The new device types supported in Matter 1.2 include:

  1. Refrigerators – Beyond basic temperature control and monitoring, this device type is also applicable to other related devices like deep freezers and even wine and kimchi fridges.
  2. Room Air Conditioners – While HVAC and thermostats were already part of Matter 1.0, standalone room air conditioners with temperature and fan mode control are now supported.
  3. Dishwashers – Basic functionality is included, like remote start and progress notifications. Dishwasher alarms are also supported, covering operational errors such as water supply and drain, temperature, and door lock errors.
  4. Laundry Washers – Progress notifications, such as cycle completion, can be sent via Matter. Dryers will be supported in a future Matter release.
  5. Robotic Vacuums – Beyond the basic features like remote start and progress notifications, there is support for key features like cleaning modes (dry vacuum vs wet mopping) and additional status details (brush status, error reporting, charging status).
  6. Smoke & Carbon Monoxide Alarms – These alarms will support notifications and audio and visual alarm signaling. Additionally, there is support for alerts about battery status and end-of-life notifications. These alarms also support self-testing. Carbon monoxide alarms support concentration sensing, as an additional data point.
  7. Air Quality Sensors – Supported sensors can capture and report on: PM1, PM2.5, PM10, CO2, NO2, VOC, CO, ozone, radon, and formaldehyde. Furthermore, the addition of the Air Quality Cluster enables Matter devices to provide AQI information based on the device’s location.
  8. Air Purifiers – Purifiers utilize the Air Quality Sensor device type to provide sensing information and also include functionality from other device types like Fans (required) and Thermostats (optional). Air purifiers also include consumable resource monitoring, enabling notifications on filter status (both HEPA and activated carbon filters are supported in 1.2).
  9. Fans – Matter 1.2 includes support for fans as a separate, certifiable device type. Fans now support movements like rock/oscillation and new modes like natural wind and sleep wind. Additional enhancements include the ability to change the airflow direction (forward and reverse) and step commands to change the speed of airflow. […]

Core improvements to the Matter 1.2 specification include:

  • Latch & Bolt Door Locks – Enhancements for European markets that capture the common configuration of a combined latch and bolt lock unit.
  • Device Appearance – Added description of device appearance, so that devices can describe their color and finish. This will enable helpful representations of devices across clients.
  • Device & Endpoint Composition – Devices can now be hierarchically composed from complex endpoints allowing for accurate modeling of appliances, multi-unit switches, and multi-light fixtures.
  • Semantic Tags – Provide an interoperable way to describe the location and semantic functions of generic Matter clusters and endpoints to enable consistent rendering and application across the different clients. For example, semantic tags can be used to represent the location and function of each button on a multi-button remote control.
  • Generic Descriptions of Device Operational States – Expressing the different operational modes of a device in a generic way will make it easier to generate new device types in future revisions of Matter and ensure their basic support across various clients.

Under-the-Hood Enhancements: Matter SDK & Test Harness

Matter 1.2 brings important enhancements in the testing and certification program which helps companies bring products – hardware, software, chipsets and apps – to market faster. These improvements will benefit the wider developer community and ecosystem around Matter.

  • New Platform Support in SDK – Matter 1.2 SDK is now available for new platforms providing more ways for developers to build new products for Matter.
  • Enhancements to the Matter Test Harness – The Test Harness is a critical piece for ensuring the specification and its features are being implemented correctly. The Test Harness is now available via open source, making it easier for Matter developers to contribute to the tools (to make them better) and to ensure they are working with the latest version (with all features and bug fixes).

[…]

Developers interested in learning more about these enhancements can access the following resources:

[…]

Source: Matter 1.2 Arrives with Nine New Device Types & – CSA-IOT

iLeakage hack can force iOS and macOS browsers to divulge passwords and much more

Researchers have devised an attack that forces Apple’s Safari browser to divulge passwords, Gmail message content, and other secrets by exploiting a side channel vulnerability in the A- and M-series CPUs running modern iOS and macOS devices.
iLeakage, as the academic researchers have named the attack, is practical and requires minimal resources to carry out. It does, however, require extensive reverse-engineering of Apple hardware and significant expertise in exploiting a class of vulnerability known as a side channel, which leaks secrets based on clues left in electromagnetic emanations, data caches, or other manifestations of a targeted system. The side channel in this case is speculative execution, a performance enhancement feature found in modern CPUs that has formed the basis of a wide corpus of attacks in recent years. The nearly endless stream of exploit variants has left chip makers—primarily Intel and, to a lesser extent, AMD—scrambling to devise mitigations.

Exploiting WebKit on Apple silicon

The researchers implement iLeakage as a website. When visited by a vulnerable macOS or iOS device, the website uses JavaScript to surreptitiously open a separate website of the attacker’s choice and recover site content rendered in a pop-up window. The researchers have successfully leveraged iLeakage to recover YouTube viewing history, the content of a Gmail inbox—when a target is logged in—and a password as it’s being autofilled by a credential manager. Once visited, the iLeakage site requires about five minutes to profile the target machine and, on average, roughly another 30 seconds to extract a 512-bit secret, such as a 64-character string.

Top: An email displayed in Gmail’s web view. Bottom: Recovered sender address, subject, and content. Credit: Kim, et al.

“We show how an attacker can induce Safari to render an arbitrary webpage, subsequently recovering sensitive information present within it using speculative execution,” the researchers wrote on an informational website. “In particular, we demonstrate how Safari allows a malicious webpage to recover secrets from popular high-value targets, such as Gmail inbox content. Finally, we demonstrate the recovery of passwords, in case these are autofilled by credential managers.”

[…]

For the attack to work, a vulnerable computer must first visit the iLeakage website. For attacks involving YouTube, Gmail, or any other specific Web property, a user should be logged into their account at the same time the attack site is open. And as noted earlier, the attacker website needs to spend about five minutes probing the visiting device. Then, using the window.open JavaScript method, iLeakage can cause the browser to open any other site and begin siphoning certain data at anywhere from 24 to 34 bits per second.
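Those extraction rates line up with the earlier timing claims. A back-of-envelope sketch (not from the paper) of how long a 512-bit secret takes at each end of the reported range:

```python
# Back-of-envelope: how long iLeakage needs to siphon a secret
# at the reported 24-34 bits per second.
secret_bits = 512        # e.g. a 64-character string
profiling_s = 5 * 60     # one-off profiling of the target machine

for rate in (24, 34):
    extract_s = secret_bits / rate
    total_s = profiling_s + extract_s
    print(f"{rate} b/s: {extract_s:.0f} s extraction, {total_s:.0f} s total")
# → 24 b/s: 21 s extraction, 321 s total
# → 34 b/s: 15 s extraction, 315 s total
```

The raw channel alone would finish in 15-21 seconds; the roughly 30 seconds quoted earlier presumably includes protocol overhead on top of the bit rate.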

[…]

iLeakage is a practical attack that requires only minimal physical resources to carry out. The biggest challenge—and it’s considerable—is the high caliber of technical expertise required. An attacker needs to not only have years of experience exploiting speculative execution vulnerabilities in general but also have fully reverse-engineered A- and M-series chips to gain insights into the side channel they contain. There’s no indication that this vulnerability has ever been discovered before, let alone actively exploited in the wild.

That means the chances of this vulnerability being used in real-world attacks anytime soon are slim, if not next to zero. It’s likely that Apple’s scheduled fix will be in place long before an iLeakage-style attack site does become viable.

Source: Hackers can force iOS and macOS browsers to divulge passwords and much more | Ars Technica

Hackers Target European Government With Roundcube Webmail Bug

Winter Vivern, believed to be a Belarus-aligned hacker, attacked European government entities and a think tank starting on Oct. 11, according to an Ars Technica report Wednesday. ESET Research discovered the hack that exploited a zero-day vulnerability in Roundcube, a webmail server with millions of users, and allowed the pro-Russian group to exfiltrate sensitive emails.

Roundcube patched the XSS vulnerability on Oct. 14, two days after ESET Research reported it. Winter Vivern sent users malicious code disguised in an innocent-looking email from team.management@outlook.com; merely viewing the message in a web browser gave the hackers access to all of a victim’s emails. Winter Vivern is a cyberespionage group that has been active since at least 2020, targeting governments in Europe and Central Asia.

“Despite the low sophistication of the group’s toolset, it is a threat to governments in Europe because of its persistence, very regular running of phishing campaigns,” said Matthieu Faou, a malware researcher at ESET, in a post.

Roundcube released an update for multiple versions of its software on Oct. 16 fixing the cross-site scripting vulnerabilities. Despite the patch and known vulnerabilities in older versions, many applications don’t get updated by users, says Faou.

[…]

Source: Hackers Target European Government With Roundcube Webmail Bug

Privacy advocate challenges YouTube’s ad blocking detection (which isn’t spyware)

Last week, privacy advocate (and very occasional Reg columnist) Alexander Hanff filed a complaint with the Irish Data Protection Commission (DPC) decrying YouTube’s deployment of JavaScript code to detect the use of ad blocking extensions by website visitors.

On October 16, according to the Internet Archive’s Wayback Machine, Google published a support page declaring that “When you block YouTube ads, you violate YouTube’s Terms of Service.”

“If you use ad blockers,” it continues, “we’ll ask you to allow ads on YouTube or sign up for YouTube Premium. If you continue to use ad blockers, we may block your video playback.”

YouTube’s Terms of Service do not explicitly disallow ad blocking extensions, which remain legal in the US [PDF], in Germany, and elsewhere. But the language says users may not “circumvent, disable, fraudulently engage with, or otherwise interfere with any part of the Service” – which probably includes the ads.

Image of ‘Ad blockers are not allowed’ popup

YouTube’s open hostility to ad blockers coincides with the recent trial deployment of a popup notice presented to web users who visit the site with an ad-blocking extension in their browser – messaging tested on a limited audience at least as far back as May.

In order to present that popup YouTube needs to run a script, changed at least twice a day, to detect blocking efforts. And that script, Hanff believes, violates the EU’s ePrivacy Directive – because YouTube did not first ask for explicit consent to conduct such browser interrogation.

[…]

Asked how he hopes the Irish DPC will respond, Hanff replied via email, “I would expect the DPC to investigate and issue an enforcement notice to YouTube requiring them to cease and desist these activities without first obtaining consent (as per [Europe’s General Data Protection Regulation (GDPR)] standard) for the deployment of their spyware detection scripts; and further to order YouTube to unban any accounts which have been banned as a result of these detections and to delete any personal data processed unlawfully (see Article 5(1) of GDPR) since they first started to deploy their spyware detection scripts.”

Hanff’s use of strikethrough formatting acknowledges the legal difficulty of using the term “spyware” to refer to YouTube’s ad block detection code. The security industry’s standard defamation defense terminology for such stuff is PUPs, or potentially unwanted programs.

[…]

Hanff’s contention that ad-blocker detection without consent is unlawful in the EU was challenged back in 2016 by the maker of a detection tool called BlockAdblock. The software maker’s argument is that JavaScript code is not stored in the way considered in Article 5(3), which the firm suggests was intended for cookies.

Hanff disagrees, and maintains that “The Commission and the legislators have been very clear that any access to a user’s terminal equipment which is not strictly necessary for the provision of a requested service, requires consent.

“This is also bound by CJEU Case C-673/17 (Planet49) from October 2019 which *all* Member States are legally obligated to comply with, under the [Treaty on the Functioning of the European Union] – there is no room for deviation on this issue,” he elaborated.

“If a script or other digital technology is strictly necessary (technically required to deliver the requested service) then it is exempt from the consent requirements and as such would pose no issue to publishers engaging in legitimate activities which respect fundamental rights under the Charter.

“It is long past time that companies meet their legal obligations for their online services,” insisted Hanff. “This has been law since 2002 and was further clarified in 2009, 2012, and again in 2019 – enough is enough.”

Google did not respond to a request for comment.

Source: Privacy advocate challenges YouTube’s ad blocking detection • The Register

Airbus commissions three wind-powered ships

The plane-maker on Thursday revealed it has “commissioned shipowner Louis Dreyfus Armateurs to build, own and operate these new, highly efficient vessels that will enter into service from 2026.”

The ships will have conventional engines that run on maritime diesel oil and e-methanol, the latter fuel made with a process that produces less CO2 than conventional methanol production. Many ships run on heavy fuel oil, the gloopiest, dirtiest, and cheapest of the fuel oils, so Airbus's choice of diesel and e-methanol is a deliberate step away from industry practice.

The ships will also feature half a dozen Flettner rotors, rotating cylinders that produce the Magnus effect – a phenomenon that produces lift thanks to pressure differences on either side of a rotating object. The rotors were invented over a century ago and are generating renewed interest as they reduce ships’ fuel requirements.
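The side force a Flettner rotor can generate is often ballparked with the Kutta–Joukowski theorem, L' = ρVΓ, using the ideal circulation Γ = 2πr²ω for a spinning cylinder that drags the surrounding flow with it. The dimensions and spin rate below are assumptions typical of a large modern rotor, not Airbus's specifications, and the inviscid result is an upper bound that real rotors only partially achieve:

```python
import math

rho = 1.225   # air density, kg/m^3
V   = 10.0    # apparent wind speed, m/s (assumed)
r   = 2.5     # rotor radius, m (assumed for a large Flettner rotor)
h   = 30.0    # rotor height, m (assumed)
rpm = 150     # spin rate, rev/min (assumed)

omega = rpm * 2 * math.pi / 60        # angular velocity, rad/s
gamma = 2 * math.pi * r**2 * omega    # ideal circulation around the cylinder
lift_per_m = rho * V * gamma          # Kutta-Joukowski: L' = rho * V * Gamma
total_kN = lift_per_m * h / 1000
print(f"ideal side force: ~{total_kN:.0f} kN")
```

Even a generous real-world discount on this idealized figure leaves a force large enough to meaningfully offload the engines, which is why the rotors cut fuel burn.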

Here’s what they’ll look like on Airbus’s boats.

Airbus's future ocean transports

Airbus expects its three vessels to enter service from 2026 and has calculated they will reduce its average annual transatlantic CO2 emissions from 68,000 to 33,000 tonnes by 2030.[…]

The craft will have capacity to move around seventy 40-foot containers and six single-aisle aircraft subassembly sets – wings, fuselage, engine pylons, horizontal and vertical tail planes. Airbus's current ships can only move three or four of those sets.

The ships will most often travel from Saint-Nazaire, France, to an A320 assembly line in Mobile, Alabama. […]

Source: Airbus commissions three wind-powered ships • The Register

Apple’s MAC Address Privacy Feature Has Never Worked

Ever since Apple re-branded as the “Privacy” company several years back, it’s been rolling out features designed to show its commitment to protecting users. Yet while customers might feel safer using an iPhone, there’s already plenty of evidence that Apple’s branding efforts don’t always match the reality of its products. In fact, a lot of its privacy features don’t actually seem to work.

Case in point: new research shows that one of Apple's proffered privacy tools—a feature that was supposed to anonymize mobile users' connections to Wi-Fi—is effectively "useless." In 2020, Apple debuted a feature that, when switched on, was supposed to hide an iPhone user's media access control—or MAC—address. When a device connects to a Wi-Fi network, it must first send out its MAC address so the network can identify it; when the same MAC address pops up in network after network, it can be used by network observers to identify and track a specific mobile user's movements.

Apple’s feature was supposed to provide randomized MAC addresses for users as a way of stop this kind of tracking from happening. But, apparently, a bug in the feature persisted for years that made the feature effectively useless.

According to a new report from Ars Technica, researchers recently tested the feature to see if it actually concealed their MAC addresses, only to find that it didn’t do that at all. Ars writes:

Despite promises that this never-changing address would be hidden and replaced with a private one that was unique to each SSID, Apple devices have continued to display the real one, which in turn got broadcast to every other connected device on the network.

One of the researchers behind the discovery of the vulnerability, Tommy Mysk, told Ars that, from the jump, “this feature was useless because of this bug,” and that, try as they might, he “couldn’t stop the devices from sending these discovery requests, even with a VPN. Even in the Lockdown Mode.”

What Apple’s justification for advertising a feature that just plainly does not work is, I’m not sure. Gizmodo reached out to the company for comment and will update this story if they respond. A recent update, iOS 17.1, apparently patches the problem and ensures that the feature actually works.

Source: Apple’s MAC Address Privacy Feature Has Never Worked

Android 14 Storage Bug: Users With Multiple Profiles Locked Out of Devices

Android 14, the latest operating system from Google, is facing a major storage bug that is causing users to be locked out of their devices. This issue is particularly affecting users who utilize the “multiple profiles” feature. Reports suggest that the bug is comparable to being hit with “ransomware,” as users are unable to access their device storage.

Initially, it was believed that this bug was limited to the Pixel 6, but it has since been discovered that it impacts a wider range of devices upgrading to Android 14. This includes the Pixel 6, 6a, 7, 7a, Pixel Fold, and Pixel Tablet. The Google issue tracker for this bug has garnered over 350 replies, but there has been no response from Google so far. The bug has been assigned the medium priority level of “P2” and remains unassigned, indicating that no one is actively investigating it.

Users who have encountered this storage bug have shared log files containing concerning messages such as “Failed to open directory /data/media/0: Structure needs cleaning.” This issue leads to various problematic situations, with some users experiencing boot loops, others stuck on a “Pixel is starting…” message, and some unable to take screenshots or access their camera app due to the lack of storage. Users are also unable to view files on their devices from a PC over USB, and the System UI and Settings repeatedly crash. Essentially, without storage, the device becomes practically unusable.

Android’s user-profile system, designed to accommodate multiple users and separate work and personal profiles, appears to be the cause of this rarely encountered bug. Users have reported that the primary profile, which is typically the most important one, becomes locked out.

Source: Android 14 Storage Bug: Users Locked Out of Devices

Google turned ANC earbuds into heart rate sensor

Google today detailed its research into audioplethysmography (APG), which adds heart rate sensing capabilities to active noise canceling (ANC) headphones and earbuds "with a simple software upgrade."

Google says the “ear canal [is] an ideal location for health sensing” given that the deep ear artery “forms an intricate network of smaller vessels that extensively permeate the auditory canal.”

This audioplethysmography approach works by “sending a low intensity ultrasound probing signal through an ANC headphone’s speakers.”

This signal triggers echoes, which are received via on-board feedback microphones. We observe that the tiny ear canal skin displacement and heartbeat vibrations modulate these ultrasound echoes.

A model that Google created works to process that feedback into a heart rate reading, as well as heart rate variability (HRV) measurement. This technique works even with music playing and “bad earbuds seals.” However, it was impacted by body motion, and Google countered with a multi-tone approach that serves as a calibration tool to “find the best frequency that measures heart rate, and use only the best frequency to get high-quality pulse waveform.”
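The signal chain Google describes – emit an inaudible tone, capture its echo on the feedback microphones, and read heart rate off the echo's modulation – can be illustrated with simulated data. This is a toy amplitude-demodulation model, not Google's actual pipeline; the sample rate, carrier frequency, and modulation depth are all assumed values:

```python
import numpy as np

fs = 48_000   # sample rate, Hz (assumed)
f_c = 20_000  # inaudible ultrasonic carrier, Hz (assumed)
hr_hz = 1.2   # simulated heart rate: 1.2 Hz = 72 bpm
t = np.arange(0, 10, 1 / fs)

# Echo: the carrier, amplitude-modulated by tiny canal-wall displacement
echo = (1 + 0.01 * np.sin(2 * np.pi * hr_hz * t)) * np.sin(2 * np.pi * f_c * t)

# Coherent demodulation: mix down to baseband, low-pass via block averaging
baseband = echo * np.sin(2 * np.pi * f_c * t)
block = fs // 100  # 10 ms blocks -> 100 Hz envelope rate
env = baseband[: len(baseband) // block * block].reshape(-1, block).mean(axis=1)
env -= env.mean()

# Heart rate = dominant frequency of the recovered envelope
spec = np.abs(np.fft.rfft(env))
freqs = np.fft.rfftfreq(len(env), d=block / fs)
bpm = 60 * freqs[spec.argmax()]
print(round(bpm))  # -> 72
```

The real problem is of course far harder – motion artifacts, leaky seals, and music playback all corrupt the echo, which is what Google's multi-tone calibration and learned model are there to handle.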

Google performed two sets of studies with 153 people that found APG “achieves consistently accurate heart rate (3.21% median error across participants in all activity scenarios) and heart rate variability (2.70% median error in inter-beat interval) measurements.”

Compared to existing HR sensors, it's not impacted by skin tones. Ear canal size and "sub-optimal seal conditions" also do not impact accuracy. Google believes this is a better approach than putting traditional photoplethysmogram (PPG) and electrocardiogram (ECG) sensors, as well as a microcontroller, in headphones/earbuds:

…this sensor mounting paradigm inevitably adds cost, weight, power consumption, acoustic design complexity, and form factor challenges to hearables, constituting a strong barrier to its wide adoption.

Google closes on:

APG transforms any TWS ANC headphones into smart sensing headphones with a simple software upgrade, and works robustly across various user activities. The sensing carrier signal is completely inaudible and not impacted by music playing. More importantly, APG represents new knowledge in biomedical and mobile research and unlocks new possibilities for low-cost health sensing.

“APG is the result of collaboration across Google Health, product, UX and legal teams,” so this coming to Pixel Buds is far from guaranteed at this point.

Source: Google turned ANC earbuds into heart rate sensor

AI Risks – doomsayers, warriors, reformers

There is no shortage of researchers and industry titans willing to warn us about the potential destructive power of artificial intelligence. Reading the headlines, one would hope that the rapid gains in AI technology have also brought forth a unifying realization of the risks—and the steps we need to take to mitigate them.

The reality, unfortunately, is quite different. Beneath almost all of the testimony, the manifestoes, the blog posts, and the public declarations issued about AI are battles among deeply divided factions. Some are concerned about far-future risks that sound like science fiction. Some are genuinely alarmed by the practical problems that chatbots and deepfake video generators are creating right now. Some are motivated by potential business revenue, others by national security concerns.

The result is a cacophony of coded language, contradictory views, and provocative policy demands that are undermining our ability to grapple with a technology destined to drive the future of politics, our economy, and even our daily lives.

These factions are in dialogue not only with the public but also with one another. Sometimes, they trade letters, opinion essays, or social threads outlining their positions and attacking others' in public view. More often, they tout their viewpoints without acknowledging alternatives, leaving the impression that their enlightened perspective is the inevitable lens through which to view AI. But if lawmakers and the public fail to recognize the subtext of their arguments, they risk missing the real consequences of our possible regulatory and cultural paths forward.

To understand the fight and the impact it may have on our shared future, look past the immediate claims and actions of the players to the greater implications of their points of view. When you do, you’ll realize this isn’t really a debate only about AI. It’s also a contest about control and power, about how resources should be distributed and who should be held accountable.

Beneath this roiling discord is a true fight over the future of society. Should we focus on avoiding the dystopia of mass unemployment, a world where China is the dominant superpower or a society where the worst prejudices of humanity are embodied in opaque algorithms that control our lives? Should we listen to wealthy futurists who discount the importance of climate change because they’re already thinking ahead to colonies on Mars? It is critical that we begin to recognize the ideologies driving what we are being told. Resolving the fracas requires us to see through the specter of AI to stay true to the humanity of our values.

One way to decode the motives behind the various declarations is through their language. Because language itself is part of their battleground, the different AI camps tend not to use the same words to describe their positions. One faction describes the dangers posed by AI through the framework of safety, another through ethics or integrity, yet another through security, and others through economics. By decoding who is speaking and how AI is being described, we can explore where these groups differ and what drives their views.

The Doomsayers

The loudest perspective is a frightening, dystopian vision in which AI poses an existential risk to humankind, capable of wiping out all life on Earth. AI, in this vision, emerges as a godlike, superintelligent, ungovernable entity capable of controlling everything. AI could destroy humanity or pose a risk on par with nukes. If we’re not careful, it could kill everyone or enslave humanity. It’s likened to monsters like the Lovecraftian shoggoths, artificial servants that rebelled against their creators, or paper clip maximizers that consume all of Earth’s resources in a single-minded pursuit of their programmed goal. It sounds like science fiction, but these people are serious, and they mean the words they use.

These are the AI safety people, and their ranks include the “Godfathers of AI,” Geoff Hinton and Yoshua Bengio. For many years, these leading lights battled critics who doubted that a computer could ever mimic capabilities of the human mind. Having steamrollered the public conversation by creating large language models like ChatGPT and other AI tools capable of increasingly impressive feats, they appear deeply invested in the idea that there is no limit to what their creations will be able to accomplish.

This doomsaying is boosted by a class of tech elite that has enormous power to shape the conversation. And some in this group are animated by the radical effective altruism movement and the associated cause of longtermism, which tend to focus on the most extreme catastrophic risks and emphasize the far-future consequences of our actions. These philosophies are hot among the cryptocurrency crowd, like the disgraced former billionaire Sam Bankman-Fried, who at one time possessed sudden wealth in search of a cause.

Reasonable-sounding on their face, these ideas can become dangerous if stretched to their logical extremes. A dogmatic longtermist would willingly sacrifice the well-being of people today to stave off a prophesied extinction event like AI enslavement.

Many doomsayers say they are acting rationally, but their hype about hypothetical existential risks amounts to making a misguided bet with our future. In the name of longtermism, Elon Musk reportedly believes that our society needs to encourage reproduction among those with the greatest culture and intelligence (namely, his ultrarich buddies). And he wants to go further, such as limiting the right to vote to parents and even populating Mars. It's widely believed that Jaan Tallinn, the wealthy longtermist who co-founded the most prominent centers for the study of AI safety, has made dismissive noises about climate change because he thinks that it pales in comparison with far-future unknown unknowns like risks from AI. The technology historian David C. Brock calls these fears "wishful worries"—that is, "problems that it would be nice to have, in contrast to the actual agonies of the present."

More practically, many of the researchers in this group are proceeding full steam ahead in developing AI, demonstrating how unrealistic it is to simply hit pause on technological development. But the roboticist Rodney Brooks has pointed out that we will see the existential risks coming—the dangers will not be sudden and we will have time to change course. While we shouldn’t dismiss the Hollywood nightmare scenarios out of hand, we must balance them with the potential benefits of AI and, most important, not allow them to strategically distract from more immediate concerns. Let’s not let apocalyptic prognostications overwhelm us and smother the momentum we need to develop critical guardrails.

The Reformers

While the doomsayer faction focuses on the far-off future, its most prominent opponents are focused on the here and now. We agree with this group that there’s plenty already happening to cause concern: Racist policing and legal systems that disproportionately arrest and punish people of color. Sexist labor systems that rate feminine-coded résumés lower. Superpower nations automating military interventions as tools of imperialism and, someday, killer robots.

The alternative to the end-of-the-world, existential risk narrative is a distressingly familiar vision of dystopia: a society in which humanity’s worst instincts are encoded into and enforced by machines. The doomsayers think AI enslavement looks like the Matrix; the reformers point to modern-day contractors doing traumatic work at low pay for OpenAI in Kenya.

Propagators of these AI ethics concerns—like Meredith Broussard, Safiya Umoja Noble, Rumman Chowdhury, and Cathy O’Neil—have been raising the alarm on inequities coded into AI for years. Although we don’t have a census, it’s noticeable that many leaders in this cohort are people of color, women, and people who identify as LGBTQ. They are often motivated by insight into what it feels like to be on the wrong end of algorithmic oppression and by a connection to the communities most vulnerable to the misuse of new technology. Many in this group take an explicitly social perspective: When Joy Buolamwini founded an organization to fight for equitable AI, she called it the Algorithmic Justice League. Ruha Benjamin called her organization the Ida B. Wells Just Data Lab.

Others frame efforts to reform AI in terms of integrity, calling for Big Tech to adhere to an oath to consider the benefit of the broader public alongside—or even above—their self-interest. They point to social media companies’ failure to control hate speech or how online misinformation can undermine democratic elections. Adding urgency for this group is that the very companies driving the AI revolution have, at times, been eliminating safeguards. A signal moment came when Timnit Gebru, a co-leader of Google’s AI ethics team, was dismissed for pointing out the risks of developing ever-larger AI language models.

While doomsayers and reformers share the concern that AI must align with human interests, reformers tend to push back hard against the doomsayers’ focus on the distant future. They want to wrestle the attention of regulators and advocates back toward present-day harms that are exacerbated by AI misinformation, surveillance, and inequity. Integrity experts call for the development of responsible AI, for civic education to ensure AI literacy and for keeping humans front and center in AI systems.

This group’s concerns are well documented and urgent—and far older than modern AI technologies. Surely, we are a civilization big enough to tackle more than one problem at a time; even those worried that AI might kill us in the future should still demand that it not profile and exploit us in the present.

The Warriors

Other groups of prognosticators cast the rise of AI through the language of competitiveness and national security. One version has a post-9/11 ring to it—a world where terrorists, criminals, and psychopaths have unfettered access to technologies of mass destruction. Another version is a Cold War narrative of the United States losing an AI arms race with China and its surveillance-rich society.

Some arguing from this perspective are acting on genuine national security concerns, and others have a simple motivation: money. These perspectives serve the interests of American tech tycoons as well as the government agencies and defense contractors they are intertwined with.

OpenAI’s Sam Altman and Meta’s Mark Zuckerberg, both of whom lead dominant AI companies, are pushing for AI regulations that they say will protect us from criminals and terrorists. Such regulations would be expensive to comply with and are likely to preserve the market position of leading AI companies while restricting competition from start-ups. In the lobbying battles over Europe’s trailblazing AI regulatory framework, US megacompanies pleaded to exempt their general-purpose AI from the tightest regulations, and whether and how to apply high-risk compliance expectations on noncorporate open-source models emerged as a key point of debate. All the while, some of the moguls investing in upstart companies are fighting the regulatory tide. The Inflection AI co-founder Reid Hoffman argued, “The answer to our challenges is not to slow down technology but to accelerate it.”

Any technology critical to national defense usually has an easier time avoiding oversight, regulation, and limitations on profit. Any readiness gap in our military demands urgent budget increases and funds distributed to the military branches and their contractors, because we may soon be called upon to fight. Tech moguls like Google’s former chief executive Eric Schmidt, who has the ear of many lawmakers, signal to American policymakers about the Chinese threat even as they invest in US national security concerns.

The warriors' narrative seems to overlook how much science and engineering have changed since the mid-twentieth century. AI research is fundamentally international; no one country will win a monopoly. And while national security is important to consider, we must also be mindful of the self-interest of those positioned to benefit financially.


As the science-fiction author Ted Chiang has said, fears about the existential risks of AI are really fears about the threat of uncontrolled capitalism, and dystopias like the paper clip maximizer are just caricatures of every start-up’s business plan. Cosma Shalizi and Henry Farrell further argue that “we’ve lived among shoggoths for centuries, tending to them as though they were our masters” as monopolistic platforms devour and exploit the totality of humanity’s labor and ingenuity for their own interests. This dread applies as much to our future with AI as it does to our past and present with corporations.

Regulatory solutions do not need to reinvent the wheel. Instead, we need to double down on the rules that we know limit corporate power. We need to get more serious about establishing good and effective governance on all the issues we lost track of while we were becoming obsessed with AI, China, and the fights picked among robber barons.

By analogy to the healthcare sector, we need an AI public option to truly keep AI companies in check. A publicly directed AI development project would serve to counterbalance for-profit corporate AI and help ensure an even playing field for access to the twenty-first century’s key technology while offering a platform for the ethical development and use of AI.

Also, we should embrace the humanity behind AI. We can hold founders and corporations accountable by mandating greater AI transparency in the development stage, in addition to applying legal standards for actions associated with AI. Remarkably, this is something that both the left and the right can agree on.

Ultimately, we need to make sure the network of laws and regulations that govern our collective behavior is knit more strongly, with fewer gaps and greater ability to hold the powerful accountable, particularly in those areas most sensitive to our democracy and environment. As those with power and privilege seem poised to harness AI to accumulate much more or pursue extreme ideologies, let’s think about how we can constrain their influence in the public square rather than cede our attention to their most bombastic nightmare visions for the future.

This essay was written with Nathan Sanders, and previously appeared in the New York Times.

Source: AI Risks – Schneier on Security