About Robin Edgar

Organisational Structures | Technology and Science | Military, IT and Lifestyle consultancy | Social, Broadcast & Cross Media | Flying aircraft

European Commission, in the pocket of US Big Tech, moves to massively roll back protections in the digital domain

The European Commission has been accused of “a massive rollback” of the EU’s digital rules after announcing proposals to delay central parts of the Artificial Intelligence Act and water down its landmark data protection regulation.

If agreed, the changes would make it easier for tech firms to use personal data to train AI models without asking for consent, and would try to end “cookie banner fatigue” by reducing the number of times internet users have to consent to being tracked online.

The commission also confirmed the intention to delay the introduction of central parts of the AI Act, which came into force in August 2024 and does not yet fully apply to companies.

Companies making high-risk AI systems, namely those posing risks to health, safety or fundamental rights, such as those used in exam scoring or surgery, would get up to 18 months longer to comply with the rules.

The plans were part of the commission’s “digital omnibus”, which tries to streamline tech rules including GDPR, the AI Act, the ePrivacy directive and the Data Act.

After a long period of rule-making, the EU agenda has shifted since the former Italian prime minister Mario Draghi warned in a report last autumn that Europe had fallen behind the US and China in innovation and was weak in the emerging technologies that would drive future growth, such as AI. The EU has also come under heavy pressure from the Trump administration to rein in digital laws.

[…]

They are part of the bloc’s wider drive for “simplification”, with plans under way to scale back regulation on the environment, company reporting on supply chains and agriculture. Like these other proposals, the digital omnibus will need to be approved by EU ministers and the European parliament.

European Digital Rights (EDRi), a pan-European network of NGOs, described the plans as “a major rollback of EU digital protections” that risked dismantling “the very foundations of human rights and tech policy in the EU”.

In particular, it said that changes to GDPR would allow “the unchecked use of people’s most intimate data for training AI systems” and that a wide range of exemptions proposed to online privacy rules would mean businesses would be able to read data on phones and browsers without asking.

European business groups welcomed the proposals but said they did not go far enough. A representative from the Computer and Communications Industry Association, whose members include Amazon, Apple, Google and Meta, said: “Efforts to simplify digital and tech rules cannot stop here.” The CCIA urged “a more ambitious, all-encompassing review of the EU’s entire digital rulebook”.

Critics of the shake-up included the EU’s former commissioner for enterprise, Thierry Breton, who wrote in the Guardian that Europe should resist attempts to unravel its digital rulebook “under the pretext of simplification or remedying an alleged ‘anti-innovation’ bias. No one is fooled over the transatlantic origin of these attempts.”

[…]

Source: European Commission accused of ‘massive rollback’ of digital protections | European Commission | The Guardian

Yes, the simplification change allowing cookie consent to be stored in the browser is a good one. But allowing AI systems to run amok without proper oversight, especially in high-risk domains, and allowing large companies to do so without rules only benefits the players that can afford to operate in these domains: the far right, which gains more mass-surveillance tools, and big (US) tech.

Manipulating the meeting notetaker: The rise of AI summarization optimization

These days, the most important meeting attendee isn’t a person: It’s the AI notetaker.

This system assigns action items and determines the importance of what is said. If it becomes necessary to revisit the facts of the meeting, its summary is treated as impartial evidence.

But clever meeting attendees can manipulate this system’s record by speaking more to what the underlying AI weights for summarization and importance than to their colleagues. As a result, you can expect some meeting attendees to use language more likely to be captured in summaries, time their interventions strategically, repeat key points, and employ formulaic phrasing that AI models are more likely to pick up on. Welcome to the world of AI summarization optimization (AISO).

Optimizing for algorithmic manipulation

AI summarization optimization has a well-known precursor: SEO.

Search-engine optimization is as old as the World Wide Web. The idea is straightforward: Search engines scour the internet digesting every possible page, with the goal of serving the best results to every possible query. The objective for a content creator, company, or cause is to optimize for the algorithm search engines have developed to determine their webpage rankings for those queries. That requires writing for two audiences at once: human readers and the search-engine crawlers indexing content. Techniques to do this effectively are passed around like trade secrets, and a $75 billion industry offers SEO services to organizations of all sizes.

More recently, researchers have documented techniques for influencing AI responses, including large-language model optimization (LLMO) and generative engine optimization (GEO). Tricks include content optimization — adding citations and statistics — and adversarial approaches: using specially crafted text sequences. These techniques often target sources that LLMs heavily reference, such as Reddit, which is claimed to be cited in 40% of AI-generated responses. The effectiveness and real-world applicability of these methods remain limited and largely experimental, although there is substantial evidence that countries such as Russia are actively pursuing this.

AI summarization optimization follows the same logic on a smaller scale. Human participants in a meeting may want a certain fact highlighted in the record, or their perspective to be reflected as the authoritative one. Rather than persuading colleagues directly, they adapt their speech for the notetaker that will later define the “official” summary. For example:

  • “The main factor in last quarter’s delay was supply chain disruption.”
  • “The key outcome was overwhelmingly positive client feedback.”
  • “Our takeaway here is in alignment moving forward.”
  • “What matters here is the efficiency gains, not the temporary cost overrun.”

The techniques are subtle. They employ high-signal phrases such as “key takeaway” and “action item,” keep statements short and clear, and repeat them when possible. They also use contrastive framing (“this, not that”), and speak early in the meeting or at transition points.

Once spoken words are transcribed, they enter the model’s input. Cue phrases — and even transcription errors — can steer what makes it into the summary. In many tools, the output format itself is also a signal: Summarizers often offer sections such as “Key Takeaways” or “Action Items,” so language that mirrors those headings is more likely to be included. In effect, well-chosen phrases function as implicit markers that guide the AI toward inclusion.

Research confirms this. Early AI summarization research showed that models trained to reconstruct summary-style sentences systematically overweight such content. Models over-rely on early-position content in news. And models often overweight statements at the start or end of a transcript, underweighting the middle. Recent work further confirms vulnerability to phrasing-based manipulation: models cannot reliably distinguish embedded instructions from ordinary content, especially when phrasing mimics salient cues.

How to combat AISO

If AISO becomes common, three forms of defense will emerge. First, meeting participants will exert social pressure on one another. When researchers secretly deployed AI bots in Reddit’s r/changemyview community, users and moderators responded with strong backlash calling it “psychological manipulation.” Anyone using obvious AI-gaming phrases may face similar disapproval.

Second, organizations will start governing meeting behavior using AI: risk assessments and access restrictions before the meetings even start, detection of AISO techniques in meetings, and validation and auditing after the meetings.

Third, AI summarizers will have their own technical countermeasures. For example, the AI security company CloudSEK recommends content sanitization to strip suspicious inputs, prompt filtering to detect meta-instructions and excessive repetition, context window balancing to weight repeated content less heavily, and user warnings showing content provenance.

Broader defenses could draw from security and AI safety research: preprocessing content to detect dangerous patterns, consensus approaches requiring consistency thresholds, self-reflection techniques to detect manipulative content, and human oversight protocols for critical decisions. Meeting-specific systems could implement additional defenses: tagging inputs by provenance, weighting content by speaker role or centrality with sentence-level importance scoring, and discounting high-signal phrases while favoring consensus over fervor.
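Two of the simpler defenses mentioned above — detecting meta-instruction-style cue phrases and down-weighting repetition — can be sketched in a few lines. This is a minimal sketch, not CloudSEK’s actual tooling; the phrase list and the linear down-weighting are assumptions made for illustration.

```python
# Sketch of two AISO defenses from the text: flagging summary-steering
# cue phrases, and discounting repetition so saying something three
# times no longer triples its weight. Thresholds/phrases are invented.
from collections import Counter

STEERING_PHRASES = ["key takeaway", "action item", "the main factor", "for the record"]

def flag_steering(utterances: list[str]) -> list[str]:
    """Return utterances containing summary-steering cue phrases."""
    return [u for u in utterances
            if any(p in u.lower() for p in STEERING_PHRASES)]

def repetition_weights(utterances: list[str]) -> dict[str, float]:
    """Weight each utterance by 1/occurrences, so repeats stop paying off."""
    counts = Counter(u.lower().strip() for u in utterances)
    return {u: 1.0 / counts[u.lower().strip()] for u in utterances}
```

A real summarizer would combine signals like these with provenance tags and speaker metadata rather than apply them in isolation, but the principle — make the manipulative strategies less profitable — is the same.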

Reshaping human behavior

AI summarization optimization is a small, subtle shift, but it illustrates how the adoption of AI is reshaping human behavior in unexpected ways. The potential implications are quietly profound.

Meetings — humanity’s most fundamental collaborative ritual — are being silently reengineered by those who understand the algorithm’s preferences. The articulate are gaining an invisible advantage over the wise. Adversarial thinking is becoming routine, embedded in the most ordinary workplace rituals, and, as AI becomes embedded in organizational life, strategic interactions with AI notetakers and summarizers may soon be a necessary executive skill for navigating corporate culture.

AI summarization optimization illustrates how quickly humans adapt communication strategies to new technologies. As AI becomes more embedded in workplace communication, recognizing these emerging patterns may prove increasingly important.

Source: Manipulating the meeting notetaker: The rise of AI summarization optimization | CSO Online

Boston Dynamics Robot Dog Is Becoming Standard in Policing

Spot, the four-legged robot from Boston Dynamics Inc., is perhaps best known for its viral dance routines to songs like “Uptown Funk.” But beyond its playful antics, Spot’s ability to climb stairs and open doors signals a potentially controversial role as a policing tool.

Five years after its commercial debut, the 75-pound, German Shepherd-sized robot is increasingly being deployed by local law enforcement to handle armed standoffs, hostage rescues and hazardous materials incidents — situations where sending in a human or a real dog could be life-threatening.

More than 60 bomb squads and SWAT teams in the US and Canada are now using Spot, according to previously unreported data shared by Boston Dynamics with Bloomberg News.

[…]

Spot’s role on law enforcement teams varies. In 2022, it approached a man who had crashed a car trying to kidnap his son in St. Petersburg, Florida, to keep an eye on the situation and see if he was armed. In Massachusetts last year, in two different incidents, it helped assess a chemical waste accident at a middle school in North Andover, and it intervened when a suspect in Hyannis took his mother hostage at knifepoint and fired at officers. Spot was deployed to corner him and police eventually followed with tear gas to apprehend him.

“It did its job,” said trooper John Ragosa, a Massachusetts State Police bomb squad member and the Spot operator assigned to the hostage-rescue mission. “The suspect was stunned, thinking ‘What is this dog?’”

The robot, which starts at around $100,000, can operate autonomously in many cases — performing maintenance checks, detecting gas leaks and inspecting faulty equipment — but still relies on human operators like Ragosa for decision making. Using a tablet that resembles a video game controller, an operator guides the machine while monitoring a live video feed from its onboard camera system. Additional built-in sensors handle navigation and mapping. During high-stakes situations, officers can also view the live feed on larger nearby screens.

Spot’s technology continues to evolve. The company recently added a mode to help Spot navigate slippery spots. And it’s working to help Spot better manipulate objects in the real world.

[…]

Roughly 2,000 Spot units are now in operation globally, Boston Dynamics said. The deployments include organizations such as the Dutch Ministry of Defense and Italy’s national police. While most of the company’s customers are still industrial clients, including manufacturers and utility providers, interest from law enforcement has surged over the past two years, said Brendan Schulman, Boston Dynamics’ vice president of policy and government relations.

[…]

“One of the things about the so-called robot dogs that we are a little wary of is this normalization and this sort of affectionate framing of calling it a dog,” she said. “It’s normalizing that for the public when it’s not actually a dog. It’s another piece of police technology.”

Ryan Calo, a professor at the University of Washington School of Law focusing on robotics law, said that the technology could deepen public skepticism toward law enforcement, and said clear guidelines are critical for safe deployment.

“The unease people feel around robotics is not just a psychological quirk,” he said. “They are disconcerting for a reason. The overuse of robotics in policing will further dehumanize police to the public and break down those community ties that have been so important to policing over so many years.”

[…]

“I don’t think every police officer needs a robot partner,” he said. “But the use of robots in certain situations that have been specified in writing in advance is good. No one wants police to risk their lives or fail to gain situational awareness during an emergency — nor do we want to live in a robotic police state.”

Source: A $100,000 Robot Dog Is Becoming Standard in Policing — and Raising Ethical Alarms

EU proposes doing away with constant cookie requests by letting you set the “No” in your browser settings

People will no longer be bombarded by constant requests to accept or reject “cookies” when browsing the internet, under proposed changes to the European Union’s strict data privacy laws.

The pop-up prompts asking internet users to consent to cookies when they visit a website are widely seen as a nuisance, undermining the original privacy intentions of the digital rules.

[I don’t think this undermines anything – cookie consent got rid of a LOT of spying, and everyone now just automatically clicks NO or uses addons to do it (well, if you are using Firefox as a browser). The original purpose – stopping companies from spying – has been achieved]

Brussels officials have now tabled changes that would allow people to accept or reject cookies for a six-month period, and potentially set their internet browser to automatically opt in or out, to avoid being repeatedly asked whether they consent to websites remembering information about their past visits.

Cookies allow websites to keep track of a user’s previous activity, allowing sites to pull up items added to an online shopping cart that were not purchased, or remember whether someone had logged in to an account on the site before, as well as target advertisements.
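A browser-level “No” already exists in embryonic form: the Global Privacy Control signal, sent as a `Sec-GPC: 1` request header. The EU proposal’s exact mechanism is not yet specified, so the following is only a sketch of how a server might honor such a browser-wide preference instead of showing a banner.

```python
# Sketch: honoring a browser-wide opt-out signal server-side.
# Uses the existing Global Privacy Control header (Sec-GPC) as a
# stand-in for whatever mechanism the EU proposal ends up mandating.

def tracking_allowed(headers: dict[str, str]) -> bool:
    """Skip tracking cookies entirely when the browser says 'No'."""
    return headers.get("Sec-GPC", "").strip() != "1"

def response_cookies(headers: dict[str, str]) -> list[str]:
    # Functional cookies (login, shopping cart) are always fine;
    # tracking cookies are only set absent a browser-level refusal.
    cookies = ["session_id=abc123; HttpOnly; Secure"]
    if tracking_allowed(headers):
        cookies.append("ad_tracker=xyz789; SameSite=None; Secure")
    return cookies
```

With a signal like this, the consent question is answered once in the browser and applies everywhere — which is the whole point of the proposed change.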

[…]

Source: EU proposes doing away with constant internet ‘cookies’ requests – The Irish Times

Fortinet confirms second 0-day exploited in the wild in just four days

Fortinet has confirmed that another flaw in its FortiWeb web application firewall has been exploited as a zero-day and issued a patch, just days after disclosing a critical bug in the same product that attackers had found and abused a month earlier.

The new bug, tracked as CVE-2025-58034, is an OS command injection vulnerability that allows authenticated attackers to execute unauthorized code on the underlying system using crafted HTTP requests or CLI commands. Updating FortiWeb devices to the most recent software version fixes the problem.
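Fortinet has not published exploit details, so the following is a generic illustration of the OS command injection class, not FortiWeb-specific code: a parameter spliced into a shell string lets an attacker append extra commands, while passing an argument vector with no shell makes metacharacters inert.

```python
# Generic illustration of OS command injection (the vulnerability class
# behind CVE-2025-58034) using a harmless echo-based diagnostic.
import subprocess

def run_diag_unsafe(target: str) -> str:
    # VULNERABLE: target is spliced into a shell string, so input like
    # "host; id" executes a second command.
    return subprocess.run(f"echo pinging {target}", shell=True,
                          capture_output=True, text=True).stdout

def run_diag_safe(target: str) -> str:
    # FIXED: argument vector, no shell; metacharacters are literal text.
    return subprocess.run(["echo", "pinging", target],
                          capture_output=True, text=True).stdout
```

Feeding `"host; echo INJECTED"` into the unsafe variant runs two commands; the safe variant echoes the whole string back as inert text. Input validation helps, but never invoking a shell on attacker-influenced data is the structural fix.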

“Fortinet has observed this to be exploited in the wild,” the vendor said in a Tuesday security advisory that credited Trend Micro researcher Jason McFadyen with finding and reporting the vulnerability.

“Trend Micro has observed attacks in the wild using this flaw with around 2,000 detections so far,” Trend Micro senior threat researcher Stephen Hilt told The Register.

Meanwhile, the US Cybersecurity and Infrastructure Security Agency issued its own alert about the FortiWeb bug on Tuesday, adding it to its Known Exploited Vulnerabilities catalog and giving federal agencies just seven days to apply the patch. CISA usually sets a 15-day deadline for patching critical vulnerabilities and a 30-day limit for high-severity bugs.

“This type of vulnerability is a frequent attack vector for malicious cyber actors and poses significant risks to the federal enterprise,” America’s cyber defense agency warned.

[…]

Source: Fortinet confirms second 0-day in just four days • The Register

Tokyo Court Finds Cloudflare Liable for All Content It Allows Access To, Requires Verification of All Users of the Service, and Says It Should Follow Lawyers’ Requests Without Court Verdicts in Manga Piracy Lawsuit

Japanese manga publishers have declared victory over Cloudflare in a long-running copyright infringement liability dispute. Kadokawa, Kodansha, Shueisha and Shogakukan say that Cloudflare’s refusal to stop manga piracy sites meant they were left with no other choice but to take legal action. The Tokyo District Court rendered its decision this morning, finding Cloudflare liable for damages after it failed to sufficiently prevent piracy.

[…]

After a wait of more than three and a half years, the Tokyo District Court rendered its decision this morning. In a statement provided to TorrentFreak by the publishers, they declare “Victory Against Cloudflare” after the Court determined that Cloudflare is indeed liable for the pirate sites’ activities.

In a statement provided to TorrentFreak, the publishers explain that they alerted Cloudflare to the massive scale of the infringement, involving over 4,000 works and 300 million monthly visits, but their requests to stop distribution were ignored.

“We requested that the company take measures such as stopping the distribution of pirated content from servers under its management. However, Cloudflare continued to provide services to the manga piracy sites even after receiving notices from the plaintiffs,” the group says.

The publishers add that Cloudflare continued to provide services even after receiving information disclosure orders from U.S. courts, leaving them with “no choice but to file this lawsuit.”

Factors Considered in Determining Liability

Decisions in favor of Cloudflare in the United States have proven valuable over the past several years. Yet while the Tokyo District Court considered many of the same key issues, various factors led to a finding of liability instead, the publishers note.

“The judgment recognized that Cloudflare’s failure to take timely and appropriate action despite receiving infringement notices from the plaintiffs, and its negligent continuation of pirated content distribution, constituted aiding and abetting copyright infringement, and that Cloudflare bears liability for damages to the plaintiffs,” they write.

“The judgment, in that regard, attached importance to the fact that Cloudflare, without conducting any identity verification procedures, had enabled a massive manga piracy site to operate ‘under circumstances where strong anonymity was secured,’ as a basis for recognizing the company’s liability.”

[…]

According to Japanese media, Cloudflare plans to appeal the verdict, which was expected. In comments to the USTR last month, Cloudflare referred to a long-running dispute in Japan with the potential to negatively affect future business.

“One particular dispute reflects years of effort by Japan’s government and its publishing industry to impose additional obligations on intermediaries like CDNs,” the company’s submission reads (pdf).

“A fully adjudicated ruling that finds CDNs liable for monetary damages for infringing material would set a dangerous global precedent and necessitate U.S. CDN providers to limit the provision of global services to avoid liability, severely restricting market growth and expansion into Asian Pacific markets.”

Whether that heralds Cloudflare’s exit from the region is unclear.

[…]

Source: Tokyo Court Finds Cloudflare Liable For Manga Piracy in Long-Running Lawsuit * TorrentFreak

KC-135 Refueling Pods Have Been Converted Into Flying Communication Nodes

The Utah Air National Guard demonstrated new capabilities that expand the KC-135 aerial refueling tanker’s ability to also act as an airborne communications and data-sharing node during major exercises in the Pacific earlier this year. Additional datalinks and other systems were packed into heavily modified underwing Multipoint Refueling System (MPRS) pods normally used to send gas to receivers via the probe-and-drogue method. More network connectivity for the U.S. Air Force’s KC-135s, as well as its KC-46s, opens the door to a host of new operational possibilities for those aircraft, including when it comes to controlling drones in flight.

At least one KC-135 from the Utah Air National Guard’s 151st Wing flew with the podded networking suites during this year’s Resolute Force Pacific 25 (REFORPAC 25) exercise.

[…]

the 151st Wing, in cooperation with the AATC, has been at the very forefront of Air Force efforts to advance new communications and data-sharing capabilities for the KC-135, specifically, for some time now. The development of podded systems similar, if not identical to the ones demonstrated at REFORPAC 25, traces back at least to 2021, and builds on years of work before then on roll-on/roll-off packages designed to be installed in the aircraft’s cargo deck.

The Roll-On Beyond Line-of-Sight Enhancement (ROBE) package seen here is among the add-on communications and data-sharing capabilities that have been available for use on the KC-135, as well as other aircraft, for years now. USAF

A self-contained podded system offers a different degree of flexibility when it comes to loading and unloading from aircraft, as required. A KC-135 can only carry one pod under each wing at a time, so being able to readily swap out ones filled with communications gear for standard MPRS types between missions would be very valuable. Leveraging the established MPRS pod design, which the KC-135 is already cleared to carry, also helps significantly reduce costs and the overall time required for integration and flight testing.

[…]

Tanker crews being able to control various tiers of drones, including ones launched in mid-air from their aircraft, is one particularly notable element of this future vision. Those drones could help provide further situational awareness, or even a more active defense against incoming threats, as well as perform other missions, as you can read more about here. A Utah Air National Guard KC-135 demonstrated just this kind of capability back in 2021, in a previous test involving a Kratos Unmanned Tactical Aerial Platform-22 (UTAP-22), also known as the Mako, a low-cost loyal wingman-type drone.

[…]

The pod’s line-of-sight links could even be used to control future stealthy collaborative combat aircraft (CCA) type drones and/or send and receive data from stealthy crewed aircraft, like F-22 and F-35 fighters and the future B-21 Raider bombers. Beyond the immediate value of that information exchange for tankers, including when it comes to survivability, this could open up additional possibilities for data fusion and rebroadcasting. If the pods can communicate with the low probability of interception/low probability of detection (LPI/LPD) datalinks that stealthy aircraft use, such as the Multifunction Advanced Data Link (MADL) and Intra-Fighter Data Link (IFDL), and more general-purpose ones, they could turn tankers into invaluable ‘translator’ nodes between various waveforms. Basically, they could allow aircraft with disparate datalink architectures to share data with each other, with the KC-135 acting as a forward fusion and rebroadcasting ‘gateway.’ The tankers could also use their beyond-line-of-sight links to share critical information globally in near real time. The fact that they would already be operating forward in their tanker role means they can provide these added services alongside their primary refueling mission.

[…]

Source: KC-135 Refueling Pods Have Been Converted Into Flying Communication Nodes

Why “public AI”, built on open source software, is the way forward for the EU and how the EU enables it

A quarter of a century ago, I wrote a book called “Rebel Code”. It was the first – and is still the only – detailed history of the origins and rise of free software and open source, based on interviews with the gifted and generous hackers who took part. Back then, it was clear that open source represented a powerful alternative to the traditional proprietary approach to software development and distribution. But few could have predicted how completely open source would come to dominate computing. Alongside its role in running every aspect of the Internet, and powering most mobile phones in the form of Android, it has been embraced by startups for its unbeatable combination of power, reliability and low cost. It’s also a natural fit for cloud computing because of its ability to scale. It is no coincidence that for the last ten years, pretty much 100% of the world’s top 500 supercomputers have run an operating system based on the open source Linux.

More recently, many leading AI systems have been released as open source. That raises the important question of what exactly “open source” means in the context of generative AI software, which involves much more than just code. The Open Source Initiative, which drew up the original definition of open source, has extended this work with its Open Source AI Definition. It is noteworthy that the EU has explicitly recognised the special role of open source in the field of AI. In the EU’s recent Artificial Intelligence Act, open source AI systems are exempt from the potentially onerous obligation to draw up a range of documentation that is generally required.

That could provide a major incentive for AI developers in the EU to take the open source route. European academic researchers working in this area are probably already doing that, not least for reasons of cost. Paul Keller points out in a blog post that another piece of EU legislation, the 2019 Copyright in the Digital Single Market Directive (CDSM), offers a further reason for research institutions to release their work as open source:

Article 3 of the CDSM Directive enables these institutions to text and data-mine all “works or other subject matter to which they have lawful access” for scientific research purposes. Text and data mining is understood to cover “any automated analytical technique aimed at analysing text and data in digital form in order to generate information, which includes but is not limited to patterns, trends and correlations,” which clearly covers the development of AI models (see here or, more recently, here).

Keller’s post goes through the details of how that feeds into AI research, but the end-result is the following:

as long as the model is made available in line with the public-interest research missions of the organisations undertaking the training (for example, by releasing the model, including its weights, under an open-source licence) and is not commercialised by these organisations, this also does not affect the status of the reproductions and extractions made during the training process.

This means that Article 3 does cover the full model-development pathway (from data acquisition to model publication under an open source license) that most non-commercial Public AI model developers pursue.

As that indicates, the use of open source licensing is critical to this application of Article 3 of EU copyright legislation for the purpose of AI research.

What’s noteworthy here is how two different pieces of EU legislation, passed some years apart, work together to create a special category of open source AI systems that avoid most of the legal problems of training AI systems on copyright materials, as well as the bureaucratic overhead imposed by the EU AI Act on commercial systems. Keller calls these “public AI”, which he defines as:

AI systems that are built by organizations acting in the public interest and that focus on creating public value rather than extracting as much value from the information commons as possible.

Public AI systems are important for at least two reasons. First, their mission is to serve the public interest, rather than focussing on profit maximisation. That’s obviously crucial at a time when today’s AI giants are intent on making as much money as possible, presumably in the hope that they can do so before the AI bubble bursts.

Secondly, public AI systems provide a way for the EU to compete with both US and Chinese AI companies – by not competing with them. It is naive to think that Europe can ever match levels of venture capital investment that big name US AI startups currently enjoy, or that the EU is prepared and able to support local industries for as long and as deeply as the Chinese government evidently plans to do for its home-grown AI firms. But public AI systems, which are fully open source, and which take advantage of the EU right of research institutions to carry out text and data mining, offer a uniquely European take on generative AI that might even make such systems acceptable to those who worry about how they are built, and how they are used.

Source: Why “public AI”, built on open source software, is the way forward for the EU – Walled Culture

How Trademark Ruined Colorado-Style Pizza

You’ve heard of New York style, Chicago deep dish, Detroit square pans. But Colorado-style pizza? Probably not. And there’s a perfectly ridiculous reason why this regional style never spread beyond a handful of restaurants in the Rocky Mountains: one guy trademarked it and scared everyone else away from making it.

This story comes via a fascinating Sporkful podcast episode where reporter Paul Karolyi spent years investigating why Colorado-style pizza remains trapped in obscurity while other regional styles became national phenomena.

The whole episode is worth listening to for the detective work alone, but the trademark angle reveals something important about how intellectual property thinking can strangle cultural movements in their cradle.

Here’s the thing about pizza “styles”: they become styles precisely because they spread. New York, Chicago, Detroit, New Haven—these aren’t just individual restaurant concepts, they’re cultural phenomena adopted and adapted by hundreds of restaurants. That widespread adoption creates the network effects that make a “style” valuable: customers seek it out, restaurants compete to perfect it, food writers chronicle its evolution.

Colorado-style pizza never got that chance. When Karolyi dug into why, he discovered that Beau Jo’s—the restaurant credited with inventing the style—had locked it up legally. When he asked the owner’s daughter if other restaurants were making Colorado-style pizza, her response was telling:

We’re um a trademark, so they cannot.

Really?

Yes.

Beau owns a trademark for Colorado style pizza.

Yep.

When Karolyi finally tracked down the actual owner, Chip (after years of trying, which is its own fascinating subplot), he expected to hear about some grand strategic vision behind the trademark. Instead, he got a masterclass in reflexive IP hoarding:

Cuz it’s different and nobody else is doing that. So, why not do it Colorado style? I mean, there’s Chicago style and there’s Pittsburgh style and Detroit and everything else. Um, and we were doing something that was what was definitely different and um um licensing attorney said, “Yeah, we can do it” and we were able to.

That’s it. No business plan. No licensing strategy. Just “some lawyer said we can do it” so they did. This is the IP-industrial complex in microcosm: lawyers selling trademark applications because they can, not because they should.

I pressed my case to Chip that abandoning the trademark so others could also use it could actually be good for his business.

“If more places made Colorado style pizza, the style itself would become more famous, which would make more people come to Beau Jo’s to try the original. If imitation is the highest form of flattery, like everyone would know that Beau Jo was the originator. Like, do you ever worry or maybe do you think that the trademark has possibly hindered the spread of this style of pizza that you created that you should be getting credit for?”

“Never thought about it.”

“Well, what do you think about it now?”

“I don’t know. I have to think about that. It’s an interesting thought. I’ve never thought about it. I’m going to look into it. I’m going to look into it. I’m going to talk to some people and um I’m not totally opposed to it. I don’t know that it would be a good idea for us, but I’m willing to look at it.”

A few weeks later, Karolyi followed up with Chip. Predictably, the business advisors had circled the wagons. They “unanimously” told him not to give up the trademark—because of course they did. These are the same people who profit from maintaining artificial scarcity, even when it demonstrably hurts the very thing they’re supposedly protecting.

And so Colorado-style pizza remains trapped in its legal cage, known only to a handful of tourists who stumble across Beau Jo’s locations. A culinary innovation that could have sparked a movement instead became a cautionary tale about how IP maximalism kills the things it claims to protect.

This case perfectly illustrates the perverse incentives of modern IP thinking. We’ve created an entire industry of lawyers and consultants whose job is to convince business owners to “protect everything” on the off chance they might license it later. Never mind that this protection often destroys the very value they’re trying to capture.

The trademark didn’t just fail to help Beau Jo’s—it actively harmed them. As Karolyi documents in the podcast, the legal lockup has demonstrably scared off other restaurateurs from experimenting with Colorado-style pizza, ensuring the “style” remains a curiosity rather than a movement. Fewer competitors means less innovation, less media attention, and fewer customers seeking out “the original.” It’s a masterclass in how to turn potential network effects into network defects.

Compare this to the sriracha success story. David Tran of Huy Fong Foods deliberately avoided trademarking “sriracha” early on, allowing dozens of competitors to enter the market. The result? Sriracha became a cultural phenomenon, and Huy Fong’s distinctive rooster bottle became the most recognizable brand in a category they helped create. Even as IP lawyers kept circling, Tran understood what Chip apparently doesn’t:

“Everyone wants to jump in now,” said Tran, 70. “We have lawyers come and say ‘I can represent you and sue’ and I say ‘No. Let them do it.’” Tran is so proud of the condiment’s popularity that he maintains a daily ritual of searching the Internet for the latest Sriracha spinoff.

Sometimes the best way to protect your creation is to let it go. But decades of IP maximalist indoctrination have made this counterintuitive wisdom almost impossible to hear. Even when presented with a clear roadmap for how abandoning the trademark could grow his business, Chip couldn’t break free from the sunk-cost fallacy and his advisors’ self-interested counsel.

The real tragedy isn’t just that Colorado-style pizza remains obscure. It’s that this story plays out thousands of times across industries, with creators choosing artificial scarcity over organic growth, protection over proliferation. Every time someone trademarks a taco style or patents an obvious business method, they’re making the same mistake Chip made: confusing ownership with value creation.

Source: How Trademark Ruined Colorado-Style Pizza | Techdirt

Drones delivering life-saving defibrillators to 911 calls

[…] collaborative team of health experts, community organizations, and universities is in the middle of a pilot program using drones and automated external defibrillators (AEDs). Led by Duke Health and the Duke Clinical Research Institute, EMS responders are now deploying drone-delivered AEDs to certain 911 calls in Forsyth County, North Carolina.

Why is cardiac arrest so serious?

Over 350,000 people experience cardiac arrest every year in the United States. When that happens, time is crucial – and AEDs are key to saving lives. Each device includes external sensor pads that adhere to a patient’s chest to monitor their heart. At the appropriate time, the pads deliver a moderately high-voltage shock (usually between 200 and 1,000 volts) to readjust and regulate the heartbeat. Modern AEDs are designed to be used with minimal experience, and often include a speaker in the central component that gives verbal instructions.

Although 90 percent of patients survive if an AED is administered within the first minute, such a rapid response is often out of the question unless a patient is already in a healthcare facility. The American Red Cross estimates over 70 percent of all cardiac arrests occur at home, with survival odds decreasing by around 10 percent for every additional minute of delayed AED application. The national average for EMS response times is around seven minutes, but in rural areas the timeframe can often extend to as long as 13 minutes.
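Treating the Red Cross figure as a drop of roughly 10 percentage points per minute of delay (a deliberate simplification – real survival curves are not strictly linear), a toy calculation shows how quickly the odds collapse at the response times quoted above:

```python
# Toy model of the figures quoted above: ~90% survival with AED use in the
# first minute, dropping roughly 10 percentage points for each additional
# minute of delay. Linear decay is a simplification, not a clinical model.
def survival_estimate(minutes_to_aed: float) -> float:
    """Estimated survival probability, clamped to [0, 1]."""
    return max(0.0, min(1.0, 0.90 - 0.10 * (minutes_to_aed - 1)))

for label, minutes in [("first minute", 1),
                       ("national EMS average", 7),
                       ("rural response", 13)]:
    print(f"{label} ({minutes} min): {survival_estimate(minutes):.0%}")
```

At the seven-minute national average, this back-of-the-envelope model already puts survival around 30 percent; at a 13-minute rural response it hits zero, which is why shaving minutes off AED delivery matters so much.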

Unlike an ambulance or firetruck, a low-flying drone isn’t beholden to traffic slowdowns or winding streets. Researchers like Monique Starks at the Duke University School of Medicine suspect that deploying drones in conjunction with EMS workers may offer opportunities to provide faster AED deliveries.

[…]

Importantly, the trial does not alter any existing 911 response protocols. When EMS is dispatched to the location, a pilot remotely deploys and guides a drone flying 200 feet above the ground to the same address. If it arrives before first responders, the drone descends to 100 feet and lowers the AED down via a winch strap. At that point, a 911 dispatcher can take a bystander step-by-step through using the device on the person in need.
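The dispatch flow described above can be sketched in a few lines of logic (purely illustrative – the actual Duke/county EMS dispatch software is not public, and this function and its parameters are hypothetical):

```python
# Purely illustrative sketch of the parallel-dispatch flow described above;
# the function name and parameters are hypothetical.
def respond_to_911(ems_eta_min: float, drone_eta_min: float) -> list:
    steps = ["dispatch EMS", "deploy drone, cruise at 200 ft"]  # always both
    if drone_eta_min < ems_eta_min:
        # Drone beats first responders: descend and winch the AED down,
        # then the 911 dispatcher coaches a bystander through using it.
        steps += ["descend to 100 ft",
                  "lower AED via winch strap",
                  "dispatcher guides bystander through AED use"]
    return steps

print(respond_to_911(ems_eta_min=13, drone_eta_min=5))
```

The key point the trial makes is that the drone is additive: EMS is always dispatched, and the drone only changes what happens if it wins the race.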

[…]

Source: Drones are delivering life-saving defibrillators to 911 calls | Popular Science

NASA’s X-59 Quiet Supersonic Jet With No Forward Window Completes First Flight, Prepares for More Flight Testing

After years of design, development, and testing, NASA’s X-59 quiet supersonic research aircraft took to the skies for the first time Oct. 28, marking a historic moment for the field of aeronautics research and the agency’s Quesst mission.

The X-59, designed to fly at supersonic speeds and reduce the sound of loud sonic booms to quieter sonic thumps, took off at 11:14 a.m. EDT and flew for 67 minutes. The flight represents a major step toward quiet supersonic flight over land.

[…]

The X-59’s first flight went as planned, with the aircraft operating slower than the speed of sound at 230 mph and a maximum altitude of about 12,000 feet, conditions that allowed the team to conduct in-flight system and performance checks. As is typical for an experimental aircraft’s first flight, landing gear was kept down the entire time while the team focused on ensuring the aircraft’s airworthiness and safety.

The aircraft traveled north to Edwards Air Force Base, circled before landing, and taxied to its new home at NASA’s Armstrong Flight Research Center in Edwards, California, officially marking the transition from ground testing to flight operations.

[…]

The X-59 is the centerpiece of NASA’s Quesst mission and its first flight connects with the agency’s roots of flying bold, experimental aircraft.

“The X-59 is the first major, piloted X-plane NASA has built and flown in over 20 years – a unique, purpose-built aircraft,”

[…]

Getting off the ground was only the beginning for the X-59. The team is now preparing the aircraft for full flight testing, evaluating how it will handle and, eventually, how its design will shape shock waves, which typically result in a sonic boom, in supersonic flight. The X-59 will eventually reach its target cruising speed of about 925 mph (Mach 1.4) at 55,000 feet.
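The quoted cruise target is easy to sanity-check: above the tropopause the standard-atmosphere temperature is roughly constant at 216.65 K, which fixes the local speed of sound. Using textbook ISA constants (an independent check, not NASA's own figures):

```python
import math

# Sanity check: Mach 1.4 at 55,000 ft, where the ISA temperature is a
# constant ~216.65 K (above the tropopause).
GAMMA = 1.4          # ratio of specific heats for air
R_AIR = 287.05       # specific gas constant of air, J/(kg*K)
T_STRATO = 216.65    # ISA temperature at 55,000 ft, K
MS_TO_MPH = 2.23694  # metres per second -> miles per hour

speed_of_sound = math.sqrt(GAMMA * R_AIR * T_STRATO)  # ~295 m/s
mph = 1.4 * speed_of_sound * MS_TO_MPH                # Mach 1.4 in mph
print(f"Mach 1.4 at 55,000 ft is about {mph:.0f} mph")
```

Mach 1.4 at that altitude works out to roughly 924 mph, matching the "about 925 mph" figure.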

The aircraft’s design sits at the center of that testing, shaping and distributing shock-wave formation. Its engine is mounted on top of the fuselage – the main body of the aircraft – to redirect air flow upward and away from the ground.

The cockpit sits mid-fuselage, with no forward-facing window. Instead, NASA developed an eXternal Vision System – cameras and advanced high-definition displays that allow the pilot to see ahead and below the aircraft, which is particularly critical during landing.

These design choices reflect years of research and modeling – all focused on changing how the quieter sonic thump from a supersonic aircraft will be perceived by people on the ground.

[…]

Source: NASA’s X-59 Completes First Flight, Prepares for More Flight Testing – NASA

Summarising a Book is now Potentially Copyright Infringing

A federal judge just ruled that computer-generated summaries of novels are “very likely infringing,” which would effectively outlaw many book reports. That seems like a problem.

The Authors Guild has one of the many lawsuits against OpenAI, and law professor Matthew Sag has the details on a ruling in that case that, if left in place, could mean that any attempt to merely summarize any copyright covered work is now possibly infringing. You can read the ruling itself here.

This isn’t just about AI—it’s about fundamentally redefining what copyright protects. And once again, something that should be perfectly fine is being treated as an evil that must be punished, all because some new machine did it.

But, I guess elementary school kids can rejoice that they now have an excuse not to do a book report.

[…]

Sag highlights how it could have a much more dangerous impact beyond getting kids out of their homework: making much of Wikipedia infringing.

A new ruling in Authors Guild v. OpenAI has major implications for copyright law, well beyond artificial intelligence. On October 27, 2025, Judge Sidney Stein of the Southern District of New York denied OpenAI’s motion to dismiss claims that ChatGPT outputs infringed the rights of authors such as George R.R. Martin and David Baldacci. The opinion suggests that short summaries of popular works of fiction are very likely infringing (unless fair use comes to the rescue).

This is a fundamental assault on the idea/expression distinction as applied to works of fiction. It places thousands of Wikipedia entries in the copyright crosshairs and suggests that any kind of summary or analysis of a work of fiction is presumptively infringing.

Short summaries of copyright-covered works should not impact copyright in any way. Yes, as Sag points out, “fair use” can come to the rescue in some cases, but the old saw remains that “fair use is just the right to hire a lawyer.” And when the process is the punishment, saying that fair use will save you in these cases is of little comfort. Getting a ruling on fair use will run you hundreds of thousands of dollars at least.

Copyright is supposed to stop the outright copying of the copyright-protected expression. A summary is not that. It should not implicate the copyright in any form, and it shouldn’t require fair use to come to the rescue.

Sag lays out the details of what happened in this case:

Judge Stein then went on to evaluate one of the more detailed ChatGPT-generated summaries relating to A Game of Thrones, the 694-page novel by George R. R. Martin which eventually became the famous HBO series of the same name. Even though this was only a motion to dismiss, where the cards are stacked against the defendant, I was surprised by how easily the judge could conclude that:

“A more discerning observer could easily conclude that this detailed summary is substantially similar to Martin’s original work, including because the summary conveys the overall tone and feel of the original work by parroting the plot, characters, and themes of the original.”

The judge described the ChatGPT summaries as:

“most certainly attempts at abridgment or condensation of some of the central copyrightable elements of the original works such as setting, plot, and characters”

He saw them as:

“conceptually similar to—although admittedly less detailed than—the plot summaries in Twin Peaks and in Penguin Random House LLC v. Colting, where the district court found that works that summarized in detail the plot, characters, and themes of original works were substantially similar to the original works.” (emphasis added).

To say that the less than 580-word GPT summary of A Game of Thrones is “less detailed” than the 128-page Welcome to Twin Peaks Guide in the Twin Peaks case, or the various children’s books based on famous works of literature in the Colting case, is a bit of an understatement.

[…]

As Sag makes clear, few people would seriously argue that the Wikipedia summary should be deemed infringing, which is why this ruling is notable. It again highlights how lots of people, including the media, lawmakers, and now (apparently) judges, get so distracted by “but this new machine is bad!” when looking at LLM technology that they completely lose the plot.

And that’s dangerous for the future of speech in general. We shouldn’t be tossing out fundamental key concepts in speech (“you can summarize a work of art without fear”) just because some new kind of summarization tool exists.

Source: Book Reports Potentially Copyright Infringing, Thanks To Court Attacks On LLMs | Techdirt

Switzerland plans surveillance worse than US

In Switzerland, a country known for its love for secrecy, particularly when it comes to banking, the tides have turned: An update to the VÜPF surveillance law directly targets privacy and anonymity services such as VPNs as well as encrypted chat apps and email providers. Right now the law is still under discussion in the Swiss Bundesrat.

[…]

While Swiss privacy has been overhyped, legislative rules in Switzerland are currently decent and comparable to German data protection laws. This update to the VÜPF, which could come into force by 2026, would change data protection legislation in Switzerland dramatically.

Why the update is dangerous

If the law passes in its current form,

  • Swiss email and VPN providers with just 5,000 users would be forced to log IP addresses and retain the data for six months – while in Germany data retention is illegal for email providers.
  • An ID or driver’s license, and possibly a phone number, would be required to register for various services – rendering anonymous usage impossible.
  • Data would have to be delivered upon request in plain text, meaning providers must be able to decrypt user data on their end (except for end-to-end encrypted messages exchanged between users).

What is more, the law is not being introduced by or via Parliament. Instead, the Swiss government – the Federal Council and the Federal Department of Justice and Police (FDJP) – wants to massively expand internet surveillance by updating the VÜPF, without Parliament having a say. This comes as a shock in a country proud of its direct democracy, with regular people’s votes on all kinds of laws. However, in 2016 the Swiss actually voted for more surveillance, so direct democracy might not help here.

History of surveillance in Switzerland

In 2016, the Swiss Parliament updated its data retention law, the BÜPF, to enforce data retention for all communication data (post, email, phone, text messages, IP addresses). In 2018, the revision of the VÜPF translated this into administrative obligations for ISPs, email providers, and others, with exceptions depending on the size of the provider and whether it was classified as a telecommunications service provider or a communications service.

As a result, services such as Threema and ProtonMail were exempt from some of the obligations that providers such as Swisscom, Salt, and Sunrise had to comply with – even though the Swiss government would have liked to classify them as quasi network operators and telecommunications providers as well. The currently discussed update of the VÜPF seems to directly target smaller providers as well as providers of anonymous services and VPNs.

The Swiss surveillance state has always sought a lot of power, and had to be called back by the Federal Supreme Court in the past to put surveillance on a sound legal basis.

But now, article 50a of the VÜPF reform mandates that providers must be able to remove “the encryption provided by them or on their behalf” – in effect demanding backdoor access to encryption. End-to-end encrypted messages exchanged between users do not fall under this decryption obligation. Even so, Swiss email provider Proton Mail told Der Bund that “Swiss surveillance would be much stricter than in the USA and the EU, and Switzerland would lose its competitiveness as a business location.”

Because of this upcoming legal change in Switzerland, Proton has started to move its servers from Switzerland to the EU.

Source: Switzerland plans surveillance worse than US | Tuta

Free Tool Adds Eye-Tracked Foveated Rendering To Many SteamVR Games

A free tool for Windows PCs with modern Nvidia GPUs adds eye-tracked foveated rendering to a huge number of SteamVR games.

Called PimaxMagic4All, the tool re-implements a feature Pimax ships in its Pimax Play software used to set up and adjust its headsets. As such, if you already own a Pimax headset, you don’t need it.

PimaxMagic4All should work with any SteamVR-compatible headset that exposes a low-level public API to retrieve eye tracking data, or which has third-party software that does so.

[…]

The developer, by the way, is Matthieu Bucchianeri, a name you may recognize if you’re a regular UploadVR reader.

Bucchianeri is a very experienced developer, having worked on the PS4 and original PlayStation VR at Sony, Falcon 9 and Dragon at SpaceX, and HoloLens and Windows MR at Microsoft, where he currently works on Xbox. At Microsoft he contributed to OpenXR, and in his spare time he developed OpenXR Toolkit, VDXR (Virtual Desktop’s OpenXR runtime), and most recently Oasis, the native SteamVR driver that revived Windows MR headsets.

PimaxMagic4All has a simple graphical interface with three levels of foveated rendering: Maximum, Balanced, and Minimum. You can choose between prioritizing performance, achieving a result where you shouldn’t notice the difference, or striking a balance between the two.

The tool can inject foveated rendering into any title that uses the DirectX 11 graphics API and OpenVR, Valve’s deprecated API for SteamVR. The game also must not have an anti-cheat system, since anti-cheat will block code injection. And remember, you need an Nvidia graphics card, specifically a GTX 16 series or RTX card.
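Those requirements boil down to a short checklist (a hypothetical helper for illustration only – PimaxMagic4All does its own detection):

```python
# The stated requirements condensed into a checklist. Hypothetical helper
# for illustration; PimaxMagic4All performs its own detection.
def is_supported(graphics_api: str, vr_api: str,
                 has_anti_cheat: bool, nvidia_series: str) -> bool:
    return (graphics_api == "DirectX 11"   # DX12/Vulkan titles won't work
            and vr_api == "OpenVR"         # OpenXR titles won't work
            and not has_anti_cheat         # anti-cheat blocks code injection
            and nvidia_series in ("GTX 16", "RTX"))

print(is_supported("DirectX 11", "OpenVR", False, "RTX"))  # True
print(is_supported("DirectX 11", "OpenVR", True, "RTX"))   # False: anti-cheat
```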

[…]

PimaxMagic4All is available on GitHub, where you’ll find both the source for the code added around Pimax’s core as well as compiled releases.

Source: Free Tool Adds Eye-Tracked Foveated Rendering To Many SteamVR Games

Planned Obsolescence: this is something the EU should care about

Manufacturers are designing products to break just outside the warranty period, while also making components that should be easy to repair hard or impossible to fix. The video below shows this clearly with washing machines.

As an appliance expert with over 40 years in the industry, I am exposing the undeniable evidence of planned obsolescence in modern domestic appliances from major brands like Bosch, Siemens, Hotpoint, AEG, Beko, Hoover, Indesit, and Zanussi. This isn’t just speculation as I use hard numbers and component costs to prove that manufacturers are designing machines to break just outside the warranty period, making them uneconomic to repair. That’s why we are fighting against Planned Obsolescence, and the main culprit is the Sealed Washing Machine Drum. Manufacturers are welding the two halves of the drum together, making it impossible to replace simple, affordable parts like the drum bearings or the spider. This isn’t poor design; it’s a calculated strategy to force you to buy a new machine, creating mountains of e-waste and putting honest repair businesses out of work.

Google ordered to pay $665 million for anticompetitive practices in Germany

Google may have to fork over 572 million euros, or nearly $665 million, to two German companies for “market abuse,” according to a recent ruling from a Berlin court. First reported by Reuters, the tech giant was ordered to pay approximately 465 million euros, or approximately $540 million, to Idealo and another 107 million euros, or roughly $124 million, to Producto, both of which are price comparison platforms based in Germany. According to the ruling, Google abused its dominant market position by favoring Google Shopping in its own search results.

Idealo pursued legal action against Google, claiming that the Alphabet subsidiary was “self-preferencing” its own platforms, which led to unfair market advantages that hindered competitors. The company first demanded at least 3.3 billion euros, or more than $3.8 billion, in damages in February 2025. To counter, Google said it made changes in 2017 that allowed competing shopping platforms the same opportunity as Google Shopping to display ads through Google Search.

Idealo said in a press release that it will continue the legal pressure on Google, claiming that “the amount awarded reflects only a fraction of the actual damage.” Albrecht von Sonntag, co-founder and member of Idealo’s advisory board, added in a press release that “abuse of dominance must have consequences and must not be a profitable business model that pays off despite fines and damages.”

It’s not the first time Google has found itself in legal trouble in Europe. Beyond Google Shopping, Google was accused of favoring its own Google Flights and Google Hotels in search results, leading the European Union to threaten massive fines for violating its Digital Markets Act. A month prior, the European Commission fined Google nearly 3 billion euros, or more than $3.4 billion, for its anticompetitive practices in the advertising tech industry.

Source: Google ordered to pay $665 million for anticompetitive practices in Germany

A federal jury ruled that Apple has to pay $634 million for infringing smartwatch patents

In a longstanding and complicated legal battle between Apple and Masimo, a recent ruling from a California jury may be the first step towards a resolution. As reported by Reuters, a federal jury sided with Masimo, a medical tech company known for its patient monitoring devices, when it said that Apple infringed on the company’s patent for technology that tracks blood-oxygen levels.

The case revolves around whether Apple violated Masimo’s patent related to blood-oxygen sensors, which the jury claimed can be seen with the Apple Watch’s Workout and Heart Rate apps. According to Reuters, Apple disagreed with the verdict, adding that “the single patent in this case expired in 2022, and is specific to historic patient monitoring technology from decades ago.” The tech giant is reportedly planning to appeal the decision.

While there may be some closure with this California lawsuit, Apple and Masimo are entangled in a web of related but separate lawsuits. Masimo first accused Apple of infringing on its pulse oximeter patents, leading Apple to temporarily halt sales of its Series 9 and Ultra 2 smartwatches. In August, Apple redesigned its blood-oxygen monitoring feature and rolled it out to the Series 9, Series 10 and Ultra 2. The redesign was approved by US Customs and Border Protection, but Masimo then sued the agency, accusing it of overstepping its authority by allowing the sale of these updated Apple Watches without input from Masimo.

Source: A federal jury ruled that Apple has to pay $634 million for infringing smartwatch patents

Roblox begins asking tens of millions of children to send it a selfie, for “age verification”.

Roblox is starting to roll out the mandatory age checks that will require all of its users to submit an ID or scan their face in order to access the platform’s chat features. The updated policy, which the company announced earlier this year, will be enforced first in Australia, New Zealand and the Netherlands and will expand to all other markets by early next year.

The company also detailed a new “age-based chat” system, which will limit users’ ability to interact with people outside of their age group. After verifying or estimating a user’s age, Roblox will assign them to an age group ranging from 9 years and younger to 21 years and older (there are six total age groups). Teens and children will then be prevented from connecting in in-game chats with people who aren’t in or close to their estimated age group.

Unlike most social media apps, which have a minimum age of 13, Roblox permits much younger children to use its platform. Since most children and many teens don’t have IDs, the company uses “age estimation” tech provided by identity company Persona. The checks, which use video selfies, are conducted within Roblox’s app, and the company says that images of users’ faces are immediately deleted after completing the process.

[…]

Source: Roblox begins asking tens of millions of children to verify their age with a selfie

Deleted by Roblox itself, but also by Persona? Pretty scary: 1) having a database of all these kids’ faces tied to their online personas, ways of talking and typing, and 2) even if the data is deleted, it could be intercepted as it is sent to Roblox and on to the verifier.

Google is collecting troves of data from downgraded Nest thermostats

Google officially turned off remote control functionality for early Nest Learning Thermostats last month, but it hasn’t stopped collecting a stream of data from these downgraded devices. After digging into the backend, security researcher Cody Kociemba found that the first- and second-generation Nest Learning Thermostats are still sending Google information about manual temperature changes, whether a person is present in the room, if sunlight is hitting the device, and more.

[…]

After cloning Google’s API to create this custom software, he started receiving a trove of logs from customer devices, which he turned off. “On these devices, while they [Google] turned off access to remotely control them, they did leave in the ability for the devices to upload logs. And the logs are pretty extensive,” Kociemba tells The Verge.

[…]

Google is still getting all the information collected by Nest Learning Thermostats, including data measured by their sensors, such as temperature, humidity, ambient light, and motion. “I was under the impression that the Google connection would be severed along with the remote functionality, however that connection is not severed, and instead is a one-way street,” Kociemba says.

[…]

Source: Google is collecting troves of data from downgraded Nest thermostats | The Verge

A Simple WhatsApp Security Flaw Exposed 3.5 Billion Phone Numbers

Add someone’s phone number, and WhatsApp instantly shows whether they’re on the service, and often their profile picture and name, too.

Repeat that same trick a few billion times with every possible phone number, it turns out, and the same feature can also serve as a convenient way to obtain the cell number of virtually every WhatsApp user on earth—along with, in many cases, profile photos and text that identifies each of those users.

[…]

A group of Austrian researchers has now shown that they were able to use that simple method of checking every possible number in WhatsApp’s contact discovery to extract 3.5 billion users’ phone numbers from the messaging service. For about 57 percent of those users, they could also access profile photos, and for another 29 percent, the text on their profiles. Despite a previous warning about WhatsApp’s exposure of this data from a different researcher in 2017, they say, the service’s parent company, Meta, still failed to limit the speed or number of contact discovery requests the researchers could make by interacting with WhatsApp’s browser-based app, allowing them to check roughly a hundred million numbers an hour.
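For scale: roughly a hundred million lookups an hour is exactly the kind of sustained volume a per-client rate limit is designed to make impractical. As a rough illustration (a hypothetical sketch, not a description of Meta's infrastructure), a token-bucket limiter caps both burst and sustained lookup rates:

```python
import time

# Minimal token-bucket rate limiter of the kind that could throttle
# contact-discovery lookups per client. Hypothetical sketch; the limits
# below are illustrative only.
class TokenBucket:
    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec       # tokens replenished per second
        self.capacity = burst          # maximum burst size
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# Allow at most 1 lookup/sec with a burst of 20 per client: a bulk scan
# of 1,000 numbers gets almost entirely rejected.
bucket = TokenBucket(rate_per_sec=1.0, burst=20)
allowed = sum(bucket.allow() for _ in range(1000))
print(allowed)  # roughly 20: the burst, plus any tokens replenished mid-loop
```

At one lookup per second per client, enumerating even a single national numbering plan would take years rather than hours – which is why the absence of such limits is the core of the researchers' complaint.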

The result would be “the largest data leak in history, had it not been collated as part of a responsibly conducted research study,” as the researchers describe it in a paper documenting their findings.

[…]

Source: A Simple WhatsApp Security Flaw Exposed 3.5 Billion Phone Numbers | WIRED

Cloudflare down, half the internet goes with it. Just like Azure, Epic, AWS, etc. Cloud dependency isn’t nice, is it?

The company acknowledged problems at 1148 UTC on November 18, stating: “Some services may be intermittently impacted.” After a long half-hour, it reckoned systems were returning to normal, but “customers may continue to observe higher-than-normal error rates” as engineers continue to investigate and fix the underlying issue.

Cloudflare provides security and infrastructure for a substantial chunk of websites. As such, X (formerly Twitter) and even El Reg were either knocked offline or malfunctioned as the outage continued. Even that stalwart of system uptime, Downdetector, reported “Please unblock challenges.cloudflare.com to proceed” at one point.

Cloudflare has yet to confirm the cause of the outage – we will issue an update when it does – but it follows hot on the heels of problems at AWS and Azure, and is a reminder for enterprises that a service is only as good as the weakest link in the chain… and that weakest link might not reveal itself until it breaks.

The problem appears to be global, and the company was forced to do the equivalent of turning off and on its WARP access in London as engineers worked to deal with the glitch. WARP is similar to a VPN, except it routes traffic through Cloudflare’s network. If the network is having a bad day, turning off WARP seems a sensible option.

[…]

Source: Cloudflare coughs, half the internet catches a cold • The Register

F-22 Pilot Controls MQ-20 Drone From The Cockpit In Mock Combat Mission

An MQ-20 Avenger drone flew a mock mission at the direction of a pilot in an F-22 Raptor during a demonstration earlier this year, General Atomics has disclosed. The company says this is part of a larger effort to lay the groundwork for crewed-uncrewed teaming between F-22s and Collaborative Combat Aircraft (CCA) drones. General Atomics and Anduril are currently developing CCA designs for the U.S. Air Force, and that service expects the Raptor to be the first airborne controller for whichever types it decides to buy in the future.

[…]

“The [crewed-uncrewed teaming demonstration] effort integrated L3Harris’ BANSHEE Advanced Tactical Datalinks with its Pantera software-defined radios (SDRs) via Lockheed Martin’s open radio architectures, all integrated and shared from an F-22 Raptor,” according to a General Atomics press release. “Two L3Harris Software‑Defined Radios (SDRs) supported the demonstration. The first SDR was installed into the General Atomics MQ‑20 Avenger, and the second was integrated in the Lockheed Martin F‑22 Raptor.”

A composite image highlighting the integration of the BANSHEE datalink, at far lower left, and a Pantera-series radio, onto the Avenger drone. L3Harris

“Through the Pilot Vehicle Interface (PVI) tablet and the F‑22’s GRACE module, the system provided end‑to‑end communications, enabling the F‑22 command and control of the MQ‑20 in flight,” the release adds. “The collaborative demonstration showcased non-proprietary, U.S. government-owned communications capabilities and the ability to fly, transition, and re-fly flight hardware that is core to the Open Mission Systems and skills based unmanned autonomy ecosystem.”

The explicit mention of a tablet-based in-cockpit control interface is also worth highlighting. General Atomics and Lockheed Martin have both been working for years now on control systems to allow crewed aircraft to direct drones in flight, with tablet-like devices being the typical user interface. However, both companies have themselves raised questions to varying degrees about the long-term viability of that arrangement, especially for pilots in single-seat fighters, who already have substantial workloads during real-world missions.

“We started with [the Air Force’s] Air Combat Command with tablets … There was this idea that they wanted to have this discreet control,” Michael Atwood, vice president of Advanced Programs for General Atomics, said during an appearance on The Merge podcast last year. “I got to fly in one of these jets with a tablet. And it was really hard to fly the airplane, let alone the weapon system of my primary airplane, and spatially and temporally think about this other thing.”

[…]


Source: F-22 Pilot Controls MQ-20 Drone From The Cockpit In Mock Combat Mission

Unremovable Spyware on Samsung Devices Comes Pre-installed on Galaxy Series Devices

The software in question, AppCloud, developed by the mobile analytics firm ironSource, has been embedded in devices sold primarily in the Middle East and North Africa (MENA) region.

Security researchers and privacy advocates warn that it quietly collects sensitive user data, fueling fears of surveillance in politically volatile areas.

AppCloud tracks users’ locations, app usage patterns, and device information without seeking ongoing consent after initial setup. Even more concerning, attempts to uninstall it often fail due to its deep integration into Samsung’s One UI operating system.

Reports indicate the app reactivates automatically following software updates or factory resets, making it virtually unremovable for average users. This has sparked outrage among consumers in countries such as Egypt, Saudi Arabia, and the UAE, where affordable Galaxy models are popular entry points into Android.

The issue came to light through investigations by SMEX, a Lebanon-based digital rights group focused on MENA privacy. In a recent report, SMEX highlighted how AppCloud’s persistence could enable unauthorized third-party data harvesting, posing significant risks in regions with histories of government overreach.

“This isn’t just bloatware, it’s a surveillance enabler baked into the hardware,” said a SMEX spokesperson. The group called on Samsung to issue a global patch and disclose the full scope of data shared with ironSource.

[…]

Source: Unremovable Spyware on Samsung Devices Comes Pre-installed on Galaxy Series Devices

Copy-paste now exceeds file transfer as top corporate data exfiltration vector; untrustworthy extensions and lack of SSO/MFA also flagged

It is now more common for data to leave companies through copying and paste than through file transfers and uploads, LayerX revealed in its Browser Security Report 2025.

This shift is largely due to generative AI (genAI), with 77% of employees pasting data into AI prompts, and 32% of all copy-pastes from corporate accounts to non-corporate accounts occurring within genAI tools.

Note: below it also highlights copy-pastes into instant messaging services. What it doesn’t highlight is that everything you paste into Chrome is fair game for Google as far as its terms of service are concerned.

“Traditional governance built for email, file-sharing, and sanctioned SaaS didn’t anticipate that copy/paste into a browser prompt would become the dominant leak vector,” LayerX CEO Or Eshed wrote in a blog post summarizing the report.

The report highlights data loss blind spots in the browser, from shadow SaaS to browser extension supply chain risks, and provides a checklist for CISOs and other security leaders to gain more control over browser activity.

GenAI now accounts for 11% of enterprise application usage, with adoption rising faster than many data loss prevention (DLP) controls can keep up with. Overall, 45% of employees actively use AI tools, with 67% of these tools being accessed via personal accounts and ChatGPT making up 92% of all use.

Corporate data makes its way to genAI tools through both copying and pasting — with 82% of these copy-pastes occurring via personal accounts — and through file uploads, with 40% of files uploaded to genAI tools containing either personally identifiable information (PII) or payment card information (PCI).

With the rise of AI-driven browsers such as OpenAI’s Atlas and Perplexity’s Comet, governance of AI tools’ access to corporate data becomes even more urgent, the LayerX report notes.

Tackling the growing use of AI tools in the workplace includes establishing allow- and block lists for AI tools and extensions, monitoring for shadow AI activity and restricting the sharing of sensitive data with AI models, LayerX said.

Monitoring clipboards and AI prompts for PII, and blocking risky copy-paste and prompt actions, can also address this growing data loss vector beyond just focusing on file uploads and traditional vectors like email.
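As a rough illustration of what clipboard-level DLP means in practice, a paste handler can pattern-match snippets for obvious PII or PCI before letting them through. The patterns and category labels below are invented for this sketch; real products use far richer detection than two regexes and a checksum:

```python
import re

# Hypothetical patterns -- a real DLP engine uses many more detectors.
EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_ok(number: str) -> bool:
    """Luhn checksum -- weeds out digit runs that merely look like card numbers."""
    digits = [int(d) for d in re.sub(r"\D", "", number)][::-1]
    total = sum(d if i % 2 == 0 else (d * 2 - 9 if d * 2 > 9 else d * 2)
                for i, d in enumerate(digits))
    return total % 10 == 0

def classify_paste(text: str) -> list[str]:
    """Return the sensitive-data categories found in a pasted snippet."""
    hits = []
    if EMAIL_RE.search(text):
        hits.append("PII:email")
    if any(luhn_ok(m.group()) for m in CARD_RE.finditer(text)):
        hits.append("PCI:card")
    return hits
```

A browser-based agent would run a check like this on paste events and block or warn when the destination is a non-corporate account.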

AI tools are not the only vector through which copied-and-pasted data escapes organizations. LayerX found that copy-pastes containing PII or PCI were most likely to be pasted into chat services, i.e. instant messaging (IM) or SMS apps, where 62% of pastes contained sensitive information. Of this data, 87% went to non-corporate accounts.

In addition to copy-paste and file upload risks, the report also delved into the browser extension supply chain, revealing that 53% of employees install extensions with “high” or “critical” permissions. Additionally, 26% of installed extensions are side-loaded rather than being installed through official stores.

Browser extensions are often difficult to vet and poorly maintained, with 54% of extension developers identified only through a free webmail account such as Gmail and 51% of extensions not receiving any updates in over a year. Yet extensions can have access to key data and resources including cookies and user account details, making it critical for organizations to audit and monitor their use.

“Permission audit alone is insufficient. Continuously score developer reputation, update cadence, sideload sources, and AI/agent capabilities. Track changes like you track third-party libraries,” Eshed wrote.
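The signals Eshed lists can be folded into a simple composite score per extension. A toy sketch, with invented weights and permission names; nothing here reflects LayerX’s actual methodology:

```python
from datetime import date

# Free webmail domains used as a weak-accountability signal (per the report's
# finding that 54% of extension developers use such accounts).
FREE_MAIL = {"gmail.com", "yahoo.com", "outlook.com"}

def extension_risk(perms: set[str], dev_email: str,
                   last_update: date, sideloaded: bool,
                   today: date = date(2025, 11, 20)) -> int:
    """Toy risk score combining the report's signals; weights are invented."""
    score = 0
    # High-impact permissions (cookie access, all-site access) dominate.
    score += 3 * len(perms & {"cookies", "<all_urls>", "webRequest"})
    # Developer reachable only via free webmail -> weak accountability.
    if dev_email.split("@")[-1] in FREE_MAIL:
        score += 2
    # No update in over a year suggests abandonment.
    if (today - last_update).days > 365:
        score += 2
    # Sideloaded extensions skip store review entirely.
    if sideloaded:
        score += 3
    return score
```

Re-scoring on every extension update, rather than once at install time, is what makes this “track changes like third-party libraries” rather than a one-off permission audit.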

Identity security within browsers was also noted to be a major blind spot for organizations, with 68% of logins to corporate accounts completed without single sign-on (SSO), making it difficult for organizations to properly track identities across apps. Additionally, 26% of enterprise users re-used passwords across accounts and 54% of corporate account passwords were noted to be of medium strength or below.

Source: Copy-paste now exceeds file transfer as top corporate data exfiltration vector | SC Media

Fortinet finally fixes critical straight-to-admin bug under active exploit for a month

Fortinet finally published a security advisory on Friday for a critical FortiWeb path traversal vulnerability under active exploitation – but it appears digital intruders got a month’s head start.

The bug, now tracked as CVE-2025-64446, allows unauthenticated attackers to execute administrative commands on Fortinet’s web application firewall product and fully take over vulnerable devices. It’s fully patched in FortiWeb version 8.0.2, but it didn’t even have a CVE assigned to it until Friday, when the vendor admitted to having “observed this to be exploited in the wild.”
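Fortinet has not published technical details of the flaw, so purely as a generic illustration of the path-traversal bug class, here is a minimal check a web front end might apply to reject request paths that try to climb out of the web root via `../` sequences, including URL-encoded variants:

```python
import posixpath
from urllib.parse import unquote

def is_traversal_attempt(raw_path: str) -> bool:
    """Flag request paths that try to escape the web root.

    Generic sketch of the path-traversal bug class only; this is NOT the
    actual FortiWeb vulnerability, whose details remain undisclosed."""
    decoded = unquote(unquote(raw_path))      # catch double URL encoding
    normalized = posixpath.normpath(decoded)
    # After normalisation, a path beginning with '..' climbs above the web
    # root; a surviving '/../' sequence in the decoded path signals the same.
    return normalized.startswith("..") or "/../" in decoded
```

The point of the double `unquote` is that attackers routinely encode `.` as `%2e` (or double-encode it as `%252e`) to slip past naive string filters that only look for a literal `../`.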

[…]

It appears a proof-of-concept (PoC) exploit has been making the rounds since early October, and third-party security sleuths have told The Register that exploitation is widespread.

“The watchTowr team is seeing active, indiscriminate in-the-wild exploitation of what appears to be a silently patched vulnerability in Fortinet’s FortiWeb product,” watchTowr CEO and founder Benjamin Harris told us prior to Fortinet’s security advisory.

“The vulnerability allows attackers to perform actions as a privileged user – with in-the-wild exploitation focusing on adding a new administrator account as a basic persistence mechanism for the attackers,” he added.

WatchTowr successfully reproduced the vulnerability and created a working PoC, along with a Detection Artefact Generator to help defenders identify vulnerable hosts in their IT environments.

Despite the fix in version 8.0.2, the attacks remain ongoing, and at least 80,000 FortiWeb web app firewalls are connected to the internet, according to Harris.

“Apply patches if you haven’t already,” he advised. “That said, given the indiscriminate exploitation observed by the watchTowr team and our Attacker Eye sensor network, appliances that remain unpatched are likely already compromised.”

The battering attempts against Fortinet’s web application firewalls date back to October 6, when cyber deception firm Defused published a PoC on social media that one of its FortiWeb Manager honeypots caught. At the time, the bug hadn’t been disclosed, nor did it have a CVE.

[…]


Source: Fortinet finally cops to critical bug under active exploit • The Register