The Linkielist

Linking ideas with the world

Microsoft Comes Under Blistering Criticism For ‘Grossly Irresponsible’ Azure Security

An anonymous reader quotes a report from Ars Technica: Microsoft has once again come under blistering criticism for the security practices of Azure and its other cloud offerings, with the CEO of security firm Tenable saying Microsoft is “grossly irresponsible” and mired in a “culture of toxic obfuscation.” The comments from Amit Yoran, chairman and CEO of Tenable, come six days after Sen. Ron Wyden (D-Ore.) blasted Microsoft for what he said were “negligent cybersecurity practices” that enabled hackers backed by the Chinese government to steal hundreds of thousands of emails from cloud customers, including officials in the US Departments of State and Commerce. Microsoft has yet to provide key details about the mysterious breach, which involved the hackers obtaining an extraordinarily powerful encryption key granting access to a variety of its other cloud services. The company has taken pains ever since to obscure its infrastructure’s role in the mass breach.

On Wednesday, Yoran took to LinkedIn to castigate Microsoft for failing to fix what Tenable said on Monday was a “critical” issue that gives hackers unauthorized access to data and apps managed by Azure AD, a Microsoft cloud offering for managing user authentication inside large organizations. Monday’s disclosure said that Tenable notified Microsoft of the problem in March and that Microsoft reported 16 weeks later that it had been fixed. Tenable researchers told Microsoft that the fix was incomplete; Microsoft set September 28 as the date for a complete fix.

“To give you an idea of how bad this is, our team very quickly discovered authentication secrets to a bank,” Yoran wrote. “They were so concerned about the seriousness and the ethics of the issue that we immediately notified Microsoft.” He continued: “Did Microsoft quickly fix the issue that could effectively lead to the breach of multiple customers’ networks and services? Of course not. They took more than 90 days to implement a partial fix — and only for new applications loaded in the service.” In response, Microsoft officials wrote: “We appreciate the collaboration with the security community to responsibly disclose product issues. We follow an extensive process involving a thorough investigation, update development for all versions of affected products, and compatibility testing among other operating systems and applications. Ultimately, developing a security update is a delicate balance between timeliness and quality, while ensuring maximized customer protection with minimized customer disruption.” Microsoft went on to say that the initial fix in June “mitigated the issue for the majority of customers” and “no customer action is required.”

In a separate email, Yoran responded: “It now appears that it’s either fixed, or we are blocked from testing. We don’t know the fix, or mitigation, so hard to say if it’s truly fixed, or Microsoft put a control in place like a firewall rule or ACL to block us. When we find vulns in other products, vendors usually inform us of the fix so we can validate it effectively. With Microsoft Azure that doesn’t happen, so it’s a black box, which is also part of the problem. The ‘just trust us’ lacks credibility when you have the current track record.”

Source: Microsoft Comes Under Blistering Criticism For ‘Grossly Irresponsible’ Security – Slashdot

A great example of why a) closed-source software is a really bad idea, b) responsible disclosure is a good idea and c) the cloud is often a bad idea

Cult of Dead Cow hacktivists design distributed encryption system for mobile apps

Once known for distributing hacking tools and shaming software companies into improving their security, a famed group of technology activists is now working to develop a system that will allow the creation of messaging and social networking apps that won’t keep hold of users’ personal data.

The group, Cult of the Dead Cow, has developed a coding framework that can be used by app developers who are willing to embrace strong encryption and forsake revenue from advertising that is targeted to individuals based on detailed profiles gleaned from the data most apps now routinely collect.

The team is building on the work of such free products as Signal, which offers strong encryption for text messages and voice calls, and Tor, which offers anonymous web surfing by routing traffic through a series of servers to disguise the location of the person conducting the search.

The latest effort, to be detailed at the massive annual Def Con hacking conference in Las Vegas next week, seeks to provide a foundation for messaging, file sharing and even social networking apps without harvesting any data, all secured by the kind of end-to-end encryption that makes interception hard even for governments.

Called Veilid, and pronounced vay-lid, the code can be used by developers to build applications for mobile devices or the web. Those apps will pass fully encrypted content to one another using the Veilid protocol, its developers say. As with the file-sharing software BitTorrent, which distributes different pieces of the same content simultaneously, the network will get faster as more devices join and share the load, the developers say. In such decentralized “peer-to-peer” networks, users download data from each other instead of from a central machine.

As with some other open-source endeavors, the challenge will come in persuading programmers and engineers to devote time to designing apps that are compatible with Veilid. Though developers could charge money for those apps or sell ads, the potential revenue streams are limited by the inability to collect detailed information that has become a primary method for distributing targeted ads or pitching a product to a specific set of users.

The team behind Veilid has not yet released documentation explaining its design choices, and collaborative work on an initial messaging app, intended to function without requiring a phone number, has yet to produce a test version.

But the nascent project has other things going for it.

It arrives amid disarray, competition and a willingness to experiment among social network and chat users resentful of Twitter and Facebook. And it buttresses opposition to increasing moves by governments, lately including the United Kingdom, to undercut strong encryption with laws requiring disclosure on demand of content or user identities. Apple, Facebook parent Meta and Signal recently threatened to pull some UK services if that country’s Online Safety Bill is adopted unchanged.

Civil rights activists and abortion rights supporters have also been alarmed by police use of messages sent by text and Facebook Messenger to investigate abortions in states that have banned the procedure after the first six weeks of pregnancy.

“It’s great that people are developing an end-to-end encryption framework for everything,” said Cindy Cohn, executive director of the nonprofit Electronic Frontier Foundation. “We can move past the surveillance business model.”

Source: Cult of Dead Cow hacktivists design encryption system for mobile apps – The Washington Post

Android phones can now tell you if there’s an AirTag following you

When Google announced that trackers would be able to tie in to its 3 billion-device Bluetooth tracking network at its Google I/O 2023 conference, it also said that it would make it easier for people to avoid being tracked by trackers they don’t know about, like Apple AirTags.

Now Android users will soon get these “Unknown Tracker Alerts.” Based on the joint specification developed by Google and Apple, and incorporating feedback from tracker-makers like Tile and Chipolo, the alerts currently work only with AirTags, but Google says it will work with tag manufacturers to expand its coverage.

For now, if an AirTag you don’t own “is separated from its owner and determined to be traveling with you,” a notification will tell you this and that “the owner of the tracker can see its location.” Tapping the notification brings up a map tracing back to where it was first seen traveling with you. Google notes that this location data “is always encrypted and never shared with Google.”

Finally, Google offers a manual scan feature if you’re suspicious that your Android phone isn’t catching a tracker or want to see what’s nearby. The alerts are rolling out through a Google Play services update to devices on Android 6.0 and above over the coming weeks.

[…]

Source: Android phones can now tell you if there’s an AirTag following you

Firmware vulnerabilities in millions of servers could give hackers superuser status

[…] The vulnerabilities reside inside firmware that Duluth, Georgia-based AMI makes for BMCs (baseboard management controllers). These tiny computers soldered into the motherboard of servers allow cloud centers, and sometimes their customers, to streamline the remote management of vast fleets of computers. They enable administrators to remotely reinstall OSes, install and uninstall apps, and control just about every other aspect of the system—even when it’s turned off. BMCs provide what’s known in the industry as “lights-out” system management.

[…]

These vulnerabilities range in severity from High to Critical, including unauthenticated remote code execution and unauthorized device access with superuser permissions. They can be exploited by remote attackers with access to Redfish remote management interfaces, or from a compromised host operating system. Redfish is the successor to traditional IPMI and provides an API standard for managing a server’s infrastructure and other infrastructure supporting modern data centers. Redfish is supported by virtually all major server and infrastructure vendors, as well as by the OpenBMC firmware project often used in modern hyperscale environments.
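
To make this concrete: the Redfish side of a BMC is just an HTTPS REST API rooted at /redfish/v1/ (that path is fixed by the DMTF standard). A minimal sketch of talking to one, with a hypothetical BMC address and placeholder credentials:

    import requests  # third-party; pip install requests

    BMC = "https://bmc.example.internal"   # hypothetical BMC address
    AUTH = ("admin", "changeme")           # placeholder credentials

    # The service root is reachable without authentication per the spec.
    # verify=False only because lab BMCs ship self-signed certs; don't
    # do this against anything you care about.
    root = requests.get(f"{BMC}/redfish/v1/", verify=False, timeout=10).json()
    print("Redfish version:", root.get("RedfishVersion"))

    # The Systems collection normally requires authentication -- the AMI
    # header-spoofing flaw described below lets attackers skip this step.
    systems = requests.get(f"{BMC}/redfish/v1/Systems", auth=AUTH,
                           verify=False, timeout=10).json()
    for member in systems.get("Members", []):
        print("System:", member.get("@odata.id"))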

[…]

The researchers went on to note that if they could locate the vulnerabilities and write exploits after analyzing the publicly available source code, there’s nothing stopping malicious actors from doing the same. And even without access to the source code, the vulnerabilities could still be identified by decompiling BMC firmware images. There’s no indication malicious parties have done so, but there’s also no way to know they haven’t.

The researchers privately notified AMI of the vulnerabilities, and the company created firmware patches, which are available to customers through a restricted support page. AMI has also published an advisory here.

The vulnerabilities are:

  • CVE-2023-34329, an authentication bypass via HTTP headers, which has a severity rating of 9.9 out of 10, and
  • CVE-2023-34330, a code injection via the Dynamic Redfish Extension, which has a severity rating of 8.2.

[…]

“By spoofing certain HTTP headers, an attacker can trick BMC into believing that external communication is coming in from the USB0 internal interface,” the researchers wrote. “When this is combined on a system shipped with the No Auth option configured, the attacker can bypass authentication, and perform Redfish API actions.”

One example would be to create an account that poses as a legitimate administrator and holds all the system rights afforded to one.

CVE-2023-34330, meanwhile, can be exploited on systems with the No Auth setting enabled, allowing attackers to effectively execute code of their choice. If the No Auth option isn’t enabled, attackers must first have BMC credentials. That’s a higher bar, but by no means out of reach for sophisticated actors.

[…]

Source: Firmware vulnerabilities in millions of computers could give hackers superuser status | Ars Technica

Google Urges Gmail Users to Enable ‘Enhanced Safe Browsing’ for Faster, More Proactive Protection – but also takes screenshots of your browsing habits

The Washington Post’s “Tech Friend” newsletter has the latest on Google’s “Enhanced Safe Browsing” for Chrome and Gmail, which “monitors the web addresses of sites that you visit and compares them to constantly updated Google databases of suspected scam sites.” You’ll see a red warning screen if Google believes you’re on a website that is, for example, impersonating your bank. You can also check when you’re downloading a file to see if Google believes it might be a scam document. In the normal mode without Enhanced Safe Browsing, Google still does many of those same security checks. But the company might miss some of the rapid-fire activity of crooks who can create a fresh bogus website minutes after another one is blocked as a scam.

This enhanced security feature has been around for three years, but Google recently started putting a message in Gmail inboxes suggesting that people turn on Enhanced Safe Browsing.

Security experts told me that it’s a good idea to turn on this safety feature but that it comes with trade-offs. The company already knows plenty about you, particularly when you’re logged into Gmail, YouTube, Chrome or other Google services. If you turn on Enhanced Safe Browsing, Google may know even more about what sites you’re visiting even if you’re not signed into a Google account. It also collects bits of visual images from sites you’re visiting to scan for hallmarks of scam sites.

Google said it will only use this information to stop bad guys and train its computers to improve security for you and everyone else. You should make the call whether you are willing to give up some of your privacy for extra security protections from common crimes.

Gmail users can toggle the feature on or off at this URL. Google tells users that enabling the feature will provide “faster and more proactive protection against dangerous websites, downloads, and extensions.”

The Post’s reporter also asked Google why it doesn’t just enable the extra security automatically, and “The company told me that because Google is collecting more data in Enhanced Safe Browsing mode, it wants to ask your permission.”

The Post adds as an aside that “It’s also not your fault that phishing scams are everywhere. Our whole online security system is unsafe and stupid… Our goal should be to slowly replace the broken online security system with newer technologies that ditch our crime-prone password system for different methods of verifying we are who we say we are.”

Source: Google Urges Gmail Users to Enable ‘Enhanced Safe Browsing’ for Faster, More Proactive Protection – Slashdot

TETRA Military and Police Radio Code Encryption Has a Flaw: A built in Backdoor

For more than 25 years, a technology used for critical data and voice radio communications around the world has been shrouded in secrecy to prevent anyone from closely scrutinizing its security properties for vulnerabilities.

[…]

The backdoor, known for years by vendors that sold the technology but not necessarily by customers, exists in an encryption algorithm baked into radios sold for commercial use in critical infrastructure. It’s used to transmit encrypted data and commands in pipelines, railways, the electric grid, mass transit, and freight trains. It would allow someone to snoop on communications to learn how a system works, then potentially send commands to the radios that could trigger blackouts, halt gas pipeline flows, or reroute trains.

Researchers found a second vulnerability in a different part of the same radio technology that is used in more specialized systems sold exclusively to police forces, prison personnel, military, intelligence agencies, and emergency services, such as the C2000 communication system used by Dutch police, fire brigades, ambulance services, and Ministry of Defense for mission-critical voice and data communications. The flaw would let someone decrypt encrypted voice and data communications and send fraudulent messages to spread misinformation or redirect personnel and forces during critical times.

[…]

The Dutch National Cyber Security Centre assumed the responsibility of notifying radio vendors and computer emergency response teams around the world about the problems, and of coordinating a timeframe for when the researchers should publicly disclose the issues.

In a brief email, NCSC spokesperson Miral Scheffer called TETRA “a crucial foundation for mission-critical communication in the Netherlands and around the world” and emphasized the need for such communications to always be reliable and secure, “especially during crisis situations.” She confirmed the vulnerabilities would let an attacker in the vicinity of impacted radios “intercept, manipulate or disturb” communications and said the NCSC had informed various organizations and governments, including Germany, Denmark, Belgium, and England, advising them how to proceed.

[…]

The researchers plan to present their findings next month at the Black Hat security conference in Las Vegas, where they will release detailed technical analysis as well as the secret TETRA encryption algorithms that have been unavailable to the public until now. They hope others with more expertise will dig into the algorithms to see if they can find other issues.

[…]

Although the standard itself is publicly available for review, the encryption algorithms are only available with a signed NDA to trusted parties, such as radio manufacturers. The vendors have to include protections in their products to make it difficult for anyone to extract the algorithms and analyze them.

[…]

Source: TETRA Radio Code Encryption Has a Flaw: A Backdoor | WIRED

AMD ‘Zenbleed’ bug allows Meltdown-like data leakage

AMD has started issuing some patches for its processors affected by a serious silicon-level bug dubbed Zenbleed that can be exploited by rogue users and malware to steal passwords, cryptographic keys, and other secrets from software running on a vulnerable system.

Zenbleed affects Ryzen and Epyc Zen 2 chips, and can be abused to swipe information at a rate of at least 30kB per core per second. That’s practical enough for someone on a shared server, such as a cloud-hosted box, to spy on other tenants. Exploiting Zenbleed involves abusing speculative execution, though unlike the related Spectre family of design flaws, the bug is pretty easy to exploit. It is more on a par with Meltdown.

Malware already running on a system, or a rogue logged-in user, can exploit Zenbleed without any special privileges and inspect data as it is being processed by applications and the operating system, which can include sensitive secrets, such as passwords. It’s understood a malicious webpage, running some carefully crafted JavaScript, could quietly exploit Zenbleed on a personal computer to snoop on this information.

The vulnerability was highlighted today by Google infosec guru Tavis Ormandy, who discovered the data-leaking vulnerability while fuzzing hardware for flaws, and reported it to AMD in May. Ormandy, who acknowledged some of his colleagues for their help in investigating the security hole, said AMD intends to address the flaw with microcode upgrades, and urged users to “please update” their vulnerable machines as soon as they are able to.

Proof-of-concept exploit code, produced by Ormandy, is available here, and we’ve confirmed it works on a Zen 2 Epyc server system when running on the bare metal. While the exploit runs, it shows off the sensitive data being processed by the box, which can appear in fragments or in whole depending on the code running at the time.

If you stick any emulation layer in between, such as Qemu, then the exploit understandably fails.

What’s hit?

The bug affects all AMD Zen 2 processors including the following series: Ryzen 3000; Ryzen Pro 3000; Ryzen Threadripper 3000; Ryzen 4000 Pro; Ryzen 4000, 5000, and 7020 with Radeon Graphics; and Epyc Rome datacenter processors.

AMD today issued a security advisory here, using the identifiers AMD-SB-7008 and CVE-2023-20593 to track the vulnerability. The chip giant scored the flaw as a medium severity one, describing it as a “cross-process information leak.”
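
For a quick first pass at whether a Linux box might be in scope, the CPU family and model are in /proc/cpuinfo. A rough sketch; note that family 23 (0x17) spans Zen, Zen+ and Zen 2, so a hit here still needs cross-checking against the advisory’s model list:

    # Rough first-pass check (Linux only). Family 23 (0x17) covers Zen,
    # Zen+ and Zen 2; only Zen 2 models are hit by CVE-2023-20593, so a
    # positive here still needs cross-checking against AMD-SB-7008.
    def first_cpu_block() -> dict:
        fields = {}
        with open("/proc/cpuinfo") as f:
            for line in f:
                line = line.strip()
                if not line:
                    break  # blank line ends the first logical CPU's block
                key, _, value = line.partition(":")
                fields[key.strip()] = value.strip()
        return fields

    cpu = first_cpu_block()
    if (cpu.get("vendor_id") == "AuthenticAMD"
            and int(cpu.get("cpu family", "0")) == 23):
        print(f"AMD family 23, model {cpu.get('model')}: possibly Zen 2 -- "
              "check this model against AMD's advisory.")
    else:
        print("Not an AMD family-23 part; Zenbleed does not apply.")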

A microcode patch for Epyc 7002 processors is available now. As for the rest of its affected silicon: AMD is targeting December 2023 for updates for desktop systems (eg, Ryzen 3000 and Ryzen 4000 with Radeon); October for high-end desktops (eg, Threadripper 3000); November and December for workstations (eg, Threadripper Pro 3000); and November to December for mobile (laptop-grade) Ryzens. Shared systems are the priority, it would seem, which makes sense given the nature of the design blunder.

[…]

Source: AMD ‘Zenbleed’ bug allows Meltdown-like data leakage

VanMoof ebikes could be bricked if servers go down – fortunately security is so bad a rival has an app to allow you to unlock them

[…] an app is required to use many of the smart features of its bikes – and that app relies on communication with VanMoof servers. If the company goes under, and the servers go offline, that could leave ebike owners unable to even unlock their bikes

[…]

While unlocking is activated by Bluetooth when your phone comes into range of the bike, it relies on a rolling key code – and that function in turn relies on access to a VanMoof server. If the company goes bust, then no server, no key code generation, no unlock.
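
The article doesn’t describe VanMoof’s actual algorithm, but a server-derived rolling code is conceptually just a keyed hash over a time window. A toy sketch of the idea (purely illustrative, not VanMoof’s scheme):

    # Conceptual sketch only -- not VanMoof's actual scheme. A rolling
    # code is a keyed hash over the current time window; if the per-bike
    # secret lives only on the server, no server means no fresh codes,
    # which is exactly the failure mode described above.
    import hmac, hashlib, struct, time

    BIKE_SECRET = b"per-bike secret held server-side"  # hypothetical

    def rolling_code(secret: bytes, window_seconds: int = 30) -> str:
        counter = int(time.time()) // window_seconds
        mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha256)
        return mac.digest()[:4].hex()  # short code the bike can verify

    print("current unlock code:", rolling_code(BIKE_SECRET))

Seen this way, Cowboy’s fix described below amounts to fetching the key material once while the server is still alive and keeping it on the phone.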

Rival ebike company Cowboy has a solution

Belgian rival ebike company Cowboy has stepped in to offer a solution. TNW reports that it has created an app that allows VanMoof owners to generate and save their own digital key, which can be used in place of one created by a VanMoof server.

If you have a VanMoof bike, grab the app now, as it requires an initial connection to the VanMoof server to fetch your current keycode. If the server goes offline, existing Bikey App users can continue to unlock their bikes, but it will no longer be possible for new users to activate it.

[…]

In some cases, a companion app may work perfectly well in standalone mode, but it’s surprising how often a server connection is required to access the full feature set.

[…]

Perhaps we need standards here. For example, requiring all functionality (bar firmware updates) to work without access to an external server.

Where this isn’t technically possible, perhaps there should be a legal requirement for essential software to be automatically open-sourced in the event of bankruptcy, so that there would be the option of techier owners banding together to host and maintain the server-side code?

[…]

Source: VanMoof ebike mess highlights a risk with pricey smart hardware

Yup, there are too many examples of good hardware being turned into junk because the OEM goes bankrupt or just decides to stop supporting it. Something needs to be done about this.

Brave to stop websites from port scanning visitors – wait that hasn’t been done by everyone yet?!

The Brave browser will take action against websites that snoop on visitors by scanning their open Internet ports or accessing other network resources that can expose personal information.

Starting in version 1.54, Brave will automatically block website port scanning, a practice that a surprisingly large number of sites were found engaging in a few years ago. According to this list compiled in 2021 by a researcher who goes by the handle G666g1e, 744 websites scanned visitors’ ports, most or all without providing notice or seeking permission in advance. eBay, Chick-fil-A, Best Buy, Kroger, and Macy’s were among the offending websites.

Some sites use similar tactics in an attempt to fingerprint visitors so they can be re-identified each time they return, even if they delete browser cookies. By running scripts that access local resources on the visiting devices, the sites can detect unique patterns in a visiting browser. Sometimes there are benign reasons a site will access local resources, such as detecting insecurities or allowing developers to test their websites. Often, however, there are more abusive or malicious motives involved.

The new version of Brave will curb the practice. By default, no website will be able to access local resources. More advanced users who want a particular site to have such access can add it to an allow list.

[…]

Brave will continue to use filter list rules to block scripts and sites known to abuse localhost resources. Additionally, the browser will include an allow list that gives the green light to sites known to access localhost resources for user-benefiting reasons.

“Brave has chosen to implement the localhost permission in this multistep way for several reasons,” developers of the browser wrote. “Most importantly, we expect that abuse of localhost resources is far more common than user-benefiting cases, and we want to avoid presenting users with permission dialogs for requests we expect will only cause harm.”

The scanning of ports and other activities that access local resources is typically done using JavaScript that’s hosted on the website and runs inside a visitor’s browser. A core web security principle known as the same origin policy bars JavaScript hosted by one Internet domain from accessing the data or resources of a different domain. This prevents malicious Site A from being able to obtain credentials or other personal data associated with Site B.

The same origin policy, however, doesn’t prevent websites from interacting in some ways with a visitor’s localhost IP address of 127.0.0.1.
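
What such a page-embedded script learns boils down to which localhost ports answer. In-browser versions infer this from request timing and errors; this standalone sketch probes the same signal directly, against your own machine:

    # The signal a snooping page is after: which localhost ports answer.
    # In-browser scripts infer this from request timing and errors; this
    # standalone sketch probes your own machine directly with sockets.
    import socket

    COMMON_LOCAL_PORTS = [3389, 5900, 5432, 6379, 8080]  # RDP, VNC, etc.

    def is_open(port: int, timeout: float = 0.2) -> bool:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            return s.connect_ex(("127.0.0.1", port)) == 0

    print("locally listening ports:",
          [p for p in COMMON_LOCAL_PORTS if is_open(p)])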

[…]

“As far as we can tell, Brave is the only browser that will block requests to localhost resources from both secure and insecure public sites, while still maintaining a compatibility path for sites that users trust (in the form of the discussed localhost permission),” the Brave post said.

[…]

Source: Brave aims to curb practice of websites that port scan visitors | Ars Technica

This should not be a possibility!

JP Morgan “accidentally” deletes 47 million comms records related to Chase bank

JP Morgan has been fined $4 million by America’s securities watchdog, the SEC, for deleting millions of email records dating from 2018 relating to its Chase Bank subsidiary.

The financial services giant apparently deleted somewhere in the region of 47 million electronic communications records from about 8,700 electronic mailboxes covering the period January 1 through to April 23, 2018.

Many of these, it turns out, were business records that were required to be retained under the Securities Exchange Act of 1934, the SEC said in a filing [PDF] detailing its findings.

Worse still, the screwup meant that it couldn’t produce records that the SEC and others subpoenaed in their investigations. “In at least 12 civil securities-related regulatory investigations, eight of which were conducted by the Commission staff, JPMorgan received subpoenas and document requests for communications which could not be retrieved or produced because they had been deleted permanently,” the SEC says.

What went wrong?

The trouble for JP Morgan can be traced to a project where the company aimed to delete from its systems any older communications and documents that were no longer required to be retained.

According to the SEC’s summary, the project experienced “glitches,” with those documents identified for deletion failing to be deleted under the processes implemented by JPMorgan.

[…]

Source: JP Morgan accidentally deletes 47 million comms records • The Register

Millions of Gigabyte Motherboards Were Sold With a Firmware Backdoor for updates

[…] Researchers at firmware-focused cybersecurity company Eclypsium revealed today that they’ve discovered a hidden mechanism in the firmware of motherboards sold by the Taiwanese manufacturer Gigabyte,

[…]

While the hidden code is meant to be an innocuous tool to keep the motherboard’s firmware updated, researchers found that it’s implemented insecurely, potentially allowing the mechanism to be hijacked and used to install malware instead of Gigabyte’s intended program. And because the updater program is triggered from the computer’s firmware, outside its operating system, it’s tough for users to remove or even discover.

[…]

In its blog post about the research, Eclypsium lists 271 models of Gigabyte motherboards that researchers say are affected.

[…]

Gigabyte’s updater alone might have raised concerns for users who don’t trust Gigabyte to silently install code on their machine with a nearly invisible tool—or who worry that Gigabyte’s mechanism could be exploited by hackers who compromise the motherboard manufacturer to exploit its hidden access in a software supply chain attack. But Eclypsium also found that the update mechanism was implemented with glaring vulnerabilities that could allow it to be hijacked: It downloads code to the user’s machine without properly authenticating it, sometimes even over an unprotected HTTP connection, rather than HTTPS. This would allow the installation source to be spoofed by a man-in-the-middle attack carried out by anyone who can intercept the user’s internet connection, such as a rogue Wi-Fi network.

In other cases, the updater installed by the mechanism in Gigabyte’s firmware is configured to be downloaded from a local network-attached storage device (NAS), a feature that appears to be designed for business networks to administer updates without all of their machines reaching out to the internet. But Eclypsium warns that in those cases, a malicious actor on the same network could spoof the location of the NAS to invisibly install their own malware instead.

[…]

Source: Millions of Gigabyte Motherboards Were Sold With a Firmware Backdoor | WIRED

Fake scientific papers are alarmingly common and becoming more so

When neuropsychologist Bernhard Sabel put his new fake-paper detector to work, he was “shocked” by what it found. After screening some 5000 papers, he estimates up to 34% of neuroscience papers published in 2020 were likely made up or plagiarized; in medicine, the figure was 24%. Both numbers, which he and colleagues report in a medRxiv preprint posted on 8 May, are well above levels they calculated for 2010—and far larger than the 2% baseline estimated in a 2022 publishers’ group report.

[…]

Journals are awash in a rising tide of scientific manuscripts from paper mills—secretive businesses that allow researchers to pad their publication records by paying for fake papers or undeserved authorship. “Paper mills have made a fortune by basically attacking a system that has had no idea how to cope with this stuff,” says Dorothy Bishop, a University of Oxford psychologist who studies fraudulent publishing practices. A 2 May announcement from the publisher Hindawi underlined the threat: It shut down four of its journals it found were “heavily compromised” by articles from paper mills.

Sabel’s tool relies on just two indicators—authors who use private, noninstitutional email addresses, and those who list an affiliation with a hospital. It isn’t a perfect solution, because of a high false-positive rate. Other developers of fake-paper detectors, who often reveal little about how their tools work, contend with similar issues.
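
Spelled out as code, the two indicators make the false-positive problem obvious: plenty of honest clinicians match them too. A toy version (the domain list is illustrative, and whether Sabel’s tool requires one indicator or both isn’t stated; this sketch requires both):

    # The two published indicators as a literal rule. Coarse signals
    # like these are exactly why the false-positive rate is high.
    FREEMAIL = {"gmail.com", "hotmail.com", "163.com", "qq.com"}

    def flag_paper(author_email: str, affiliation: str) -> bool:
        domain = author_email.rsplit("@", 1)[-1].lower()
        private_email = domain in FREEMAIL
        hospital = "hospital" in affiliation.lower()
        return private_email and hospital

    print(flag_paper("someone@163.com", "Affiliated Hospital of X"))  # True
    print(flag_paper("j.doe@oxford.ac.uk", "University of Oxford"))   # False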

[…]

To fight back, the International Association of Scientific, Technical, and Medical Publishers (STM), representing 120 publishers, is leading an effort called the Integrity Hub to develop new tools. STM is not revealing much about the detection methods, to avoid tipping off paper mills. “There is a bit of an arms race,” says Joris van Rossum, the Integrity Hub’s product director. He did say one reliable sign of a fake is referencing many retracted papers; another involves manuscripts and reviews emailed from internet addresses crafted to look like those of legitimate institutions.

Twenty publishers—including the largest, such as Elsevier, Springer Nature, and Wiley—are helping develop the Integrity Hub tools, and 10 of the publishers are expected to use a paper mill detector the group unveiled in April. STM also expects to pilot a separate tool this year that detects manuscripts simultaneously sent to more than one journal, a practice considered unethical and a sign they may have come from paper mills.

[…]

STM hasn’t yet generated figures on accuracy or false-positive rates because the project is too new. But catching as many fakes as possible typically produces more false positives. Sabel’s tool correctly flagged nearly 90% of fraudulent or retracted papers in a test sample. However, it marked up to 44% of genuine papers as fake, so results still need to be confirmed by skilled reviewers.

[…]

Publishers embracing gold open access—under which journals collect a fee from authors to make their papers immediately free to read when published—have a financial incentive to publish more, not fewer, papers. They have “a huge conflict of interest” regarding paper mills, says Jennifer Byrne of the University of Sydney, who has studied how paper mills have doctored cancer genetics data.

The “publish or perish” pressure that institutions put on scientists is also an obstacle. “We want to think about engaging with institutions on how to take away perhaps some of the [professional] incentives which can have these detrimental effects,” van Rossum says. Such pressures can push clinicians without research experience to turn to paper mills, Sabel adds, which is why hospital affiliations can be a red flag.

[…]

Source: Fake scientific papers are alarmingly common | Science | AAAS

A closed approach to building a detection tool is an incredibly bad idea – no one can really know what it is doing, and certain types of research will be flagged every time, for example. This type of tool especially needs to be accountable to, and changeable by, the peers who have to review the papers it spits out as suspect. Only by having this type of tool open can it be improved by third parties who also have a vested interest in improving fake-detection rates (eg universities, which you would think have quite some smart people). Having it closed also lends a false sense of security – especially if the detection methods have already been leaked and paper mills from certain sources are circumventing them already. Security by obscurity is never ever a good idea.

WhatsApp, Signal Threaten to Leave UK Over ‘Online Safety Bill’ – which wants big brother reading all your messages. So online snooping bill, really.

Meta’s WhatsApp is threatening to leave the UK if the government passes the Online Safety Bill, saying it will essentially eliminate its encryption methods. Alongside its rival company Signal and five other apps, the company said that, by passing the bill, users will no longer be protected by end-to-end encryption, which ensures no one but the recipient has access to sent messages.

The “Online Safety Bill” was originally proposed to criminalize content encouraging self-harm posted to social media platforms like Facebook, Instagram, TikTok, and YouTube, but was amended to more broadly focus on illegal content related to adult and child safety. Although government officials said the bill would not ban end-to-end encryption, the messaging apps said in an open letter, “The bill provides no explicit protection for encryption.”

It continues, “If implemented as written, [the bill] could empower OFCOM [the Office of Communications] to try to force the proactive scanning of private messages on end-to-end encrypted communication services, nullifying the purpose of end-to-end encryption as a result and compromising the privacy of all users.”

[…]

“In short, the bill poses an unprecedented threat to the privacy, safety, and security of every UK citizen and the people with whom they communicate around the world while emboldening hostile governments who may seek to draft copycat laws.”

Signal said in a Twitter post that it will “not back down on providing private, safe communications,” as the open letter urges the UK government to reconsider the way the bill is currently laid out. Both companies have stood by their arguments, stating they will discontinue the apps in the UK rather than risk weakening their current encryption standards.

[…]

Source: WhatsApp, Signal Threaten to Leave UK Over ‘Online Safety Bill’

International Partners Publish Secure-by-Design and -Default Principles and Approaches Guide – but don’t link to guide in press release

The Cybersecurity and Infrastructure Security Agency (CISA), the Federal Bureau of Investigation (FBI), the National Security Agency (NSA), and the cybersecurity authorities of Australia, Canada, the United Kingdom, Germany, the Netherlands, and New Zealand (CERT NZ, NCSC-NZ) today published “Shifting the Balance of Cybersecurity Risk: Principles and Approaches for Security-by-Design and -Default.” This joint guidance urges software manufacturers to take urgent steps necessary to ship products that are secure-by-design and -default. To create a future where technology and associated products are safe for customers, the authoring agencies urge manufacturers to revamp their design and development programs to permit only secure-by-design and -default products to be shipped to customers.

This guidance, the first of its kind, is intended to catalyze progress toward further investments and cultural shifts necessary to achieve a safe and secure future. In addition to specific technical recommendations, this guidance outlines several core principles to guide software manufacturers in building software security into their design processes prior to developing, configuring, and shipping their products, including:

  • Take ownership of the security outcomes of their technology products, shifting the burden of security from the customers. A secure configuration should be the default baseline, in which products automatically enable the most important security controls needed to protect enterprises from malicious cyber actors.
  • Embrace radical transparency and accountability—for example, by ensuring vulnerability advisories and associated common vulnerability and exposure (CVE) records are complete and accurate.
  • Build the right organizational structure by providing executive level commitment for software manufacturers to prioritize security as a critical element of product development.

[…]

With this joint guide, the authoring agencies seek to progress an international conversation about key priorities, investments, and decisions necessary to achieve a future where technology is safe, secure, and resilient by design and default. Feedback on this guide is welcome and can be sent to SecureByDesign@cisa.dhs.gov.

Source: U.S. and International Partners Publish Secure-by-Design and -Default Principles and Approaches | CISA

Not having the guide linked in the press release means people have to search for it, which means it’s a great target for an attack. Not really secure at all!

So I have the link to the PDF guide, it’s here.

Disabling Intel and AMD’s Backdoors on Modern Computers

Despite some companies making strides with ARM, for the most part, the desktop and laptop space is still dominated by x86 machines. For all their advantages, they have a glaring flaw for anyone concerned with privacy or security in the form of a hardware backdoor that can access virtually any part of the computer even with the power off. AMD calls their system the Platform Security Processor (PSP) and Intel’s is known as the Intel Management Engine (IME).

To fully disable these co-processors, a computer from before 2008 is required; if you need more modern hardware that still respects your privacy and security concerns, you’ll need to either buy an ARM device or disable the IME like NovaCustom has managed to do with their NS51 series laptop.

NovaCustom specializes in building custom laptops, with options for the CPU, GPU, RAM, storage, keyboard layout, and other components to fit customers’ needs. They favor Coreboot as a bootloader, which already goes a long way toward eliminating proprietary closed-source software at a fundamental level, but not all Coreboot machines have the IME completely disabled. There are two ways to do this: the HECI method, which is better than nothing but not fully trusted, and the HAP bit, which completely disables the IME. NovaCustom is using the HAP bit approach, meaning that although the IME is not completely eliminated from the computer, it is turned off in a way that’s at least good enough for computers that the NSA uses.
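
For self-built systems, the commonly used route to the same HAP bit is the open-source me_cleaner script run against a full SPI flash dump. A hedged sketch; the flag semantics below are paraphrased from me_cleaner’s documentation, so verify them upstream before flashing anything real:

    # Hedged sketch: setting the HAP bit on a full SPI flash dump with
    # the open-source me_cleaner script. Flag names are paraphrased from
    # me_cleaner's documentation -- verify them upstream first.
    import subprocess

    # -S: soft-disable the ME (sets the HAP/"ME disable" bit)
    # -O: write the result to a new file instead of modifying in place
    subprocess.run(
        ["python3", "me_cleaner.py", "-S", "-O", "modified.bin", "dump.bin"],
        check=True,
    )
    # The modified image then has to be written back with an external
    # SPI programmer; flashing from the running OS is typically blocked.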

There are a lot of new computer manufacturers building conscientious hardware nowadays, but (with the notable exception of System76) the IME and PSP seem to be largely ignored by most computing companies we’d otherwise expect to care about an option like this. It’s certainly still an area of concern considering how much power the IME and PSP are given over their host computers, and we have seen even mainline manufacturers sometimes offer systems with the IME disabled. The only other options to solve this problem are based around specific motherboards for 8th and 9th generation Intel desktops, or you can go way back to hardware from 2008 and install libreboot to eliminate, rather than disable, the IME.

Source: Disabling Intel’s Backdoors On Modern Laptops | Hackaday

Google debuts deps.dev API to check security status of dependencies

[…]

On Tuesday, Google – which has answered the government’s call to secure the software supply chain with initiatives like the Open Source Vulnerabilities (OSV) database and Software Bills of Materials (SBOMs) – announced an open source software vetting service, its deps.dev API.

The API, accessible in a more limited form via the web, aims to provide software developers with access to security metadata on millions of code libraries, packages, modules, and crates.

By security metadata, Google means things like: how well maintained a library is, who maintains it, what vulnerabilities are known to be present in it and whether they have been fixed, whether it’s had a code review, whether it’s using old or new versions of other dependencies, what license covers it, and so on. For example, see the info on the Go package cmdr and the Rust Cargo crate crossbeam-utils.

The API also provides at least two capabilities not available through the web interface: the ability to query the hash of a file’s contents (to find all package versions with the file) and dependency graphs based on actual installation rather than just declarations.
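
For a feel of the API’s shape, here is a sketch of querying it from Python. Endpoint paths and field names are as they appeared in the launch-era v3alpha API and may have changed, so treat this as illustrative rather than a stable contract:

    # Sketch against the deps.dev REST API (launch-era v3alpha paths).
    import json, urllib.request

    BASE = "https://api.deps.dev/v3alpha"

    def get(path: str) -> dict:
        with urllib.request.urlopen(BASE + path) as resp:
            return json.load(resp)

    # All known versions of an npm package...
    pkg = get("/systems/npm/packages/lodash")
    print("known versions:", len(pkg.get("versions", [])))

    # ...and the security metadata for one specific version.
    ver = get("/systems/npm/packages/lodash/versions/4.17.21")
    print("licenses:", ver.get("licenses"))
    print("advisories:", ver.get("advisoryKeys"))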

“Software supply chain attacks are increasingly common and harmful, with high profile incidents such as Log4Shell, Codecov, and the recent 3CX hack,” said Jesper Sarnesjo and Nicky Ringland, with Google’s open source security team, in a blog post. “The overwhelming complexity of the software ecosystem causes trouble for even the most diligent and well-resourced developers.”

[…]

The deps.dev API indexes data from various software package registries, including Rust’s Cargo, Go, Maven, JavaScript’s npm, and Python’s PyPI, and combines that with data gathered from GitHub, GitLab, and Bitbucket, as well as security advisories from OSV. The idea is to make metadata about software packages more accessible, to promote more informed security decisions.

Developers can query the API to look up a dependency’s records, with the returned data available programmatically to CI/CD systems, IDE plugins that present the information, build tools and policy engines, and other development tools.

Sarnesjo and Ringland say they hope the API helps developers understand dependency data better so that they can respond to – or prevent – attacks that try to compromise the software supply chain.

There are already hundreds of software supply chain tools and projects, but the more the merrier. Judging by the average life expectancy of Google services, the deps.dev API should be available for at least four years.

Along similar lines, Google Cloud on Wednesday nudged its Assured Open Source Software (Assured OSS) service for Java and Python into general availability.

[…]

Source: Google debuts API to check security status of dependencies • The Register

Google’s free Assured Open Source Software service hits GA

About a year ago, Google announced its Assured Open Source Software (Assured OSS) service, a service that helps developers defend against supply chain security attacks by regularly scanning and analyzing for vulnerabilities some of the world’s most popular software libraries. Today, Google is launching Assured OSS into general availability with support for well over a thousand Java and Python packages — and while Google didn’t initially disclose pricing when it first announced the service, the company has now revealed that it will be available for free.

Software development has long depended on third-party libraries (which are often maintained by only a single developer), but it wasn’t until the industry got hit with a number of high-profile exploits that everyone (including the White House) perked up and started taking software supply chain security seriously. Now, you can’t attend an open source conference without hearing about Software Bills of Materials (SBOMs), artifact registries and similar topics.

[…]

Google promises that it will constantly keep these libraries up to date (without creating forks) and continuously scan for known vulnerabilities, do fuzz tests to discover new ones and then fix these issues and contribute these fixes back upstream. The company notes that when it first launched the service with around 250 Java libraries, it was responsible for discovering 48% of the new CVEs for these libraries and subsequently addressing them.

[…]

By partnering with a trusted supplier, organizations can mitigate these risks and ensure the integrity of their software supply chain to better protect their business applications.”

Developers and organizations that want to use the new service can sign up here and then integrate Assured OSS into their existing development pipeline.

Source: Google’s free Assured Open Source Software service hits GA | TechCrunch

Google announces GUAC open source project on software supply chains

Google unveiled a new open source security project on Thursday centered around software supply chain management.

Given the acronym GUAC – which stands for Graph for Understanding Artifact Composition – the project is focused on creating data sets describing a piece of software’s build, security, and dependencies.

Google worked with Purdue University, Citibank and supply chain security company Kusari on GUAC, a free tool built to bring together many different sources of software security metadata. Google has also assembled a group of technical advisory members to help with the project — including IBM, Intel, Anchore and more.

Google’s Brandon Lum, Mihai Maruseac, and Isaac Hepworth pitched the effort as one way to help address the explosion in software supply chain attacks — most notably the widespread Log4j vulnerability that is still leaving organizations across the world exposed to attacks.

“GUAC addresses a need created by the burgeoning efforts across the ecosystem to generate software build, security, and dependency metadata,” they wrote in a blog post. “GUAC is meant to democratize the availability of this security information by making it freely accessible and useful for every organization, not just those with enterprise-scale security and IT funding.”

They noted that U.S. President Joe Biden issued an executive order last year that said all federal government agencies must send a Software Bill of Materials (SBOM) to Allan Friedman, the director of Cybersecurity Initiatives at the National Telecommunications and Information Administration (NTIA).

[…]

While SBOMs are becoming increasingly common thanks to the work of several tech industry groups like OpenSSF, there have been a number of complaints, one of them centered on the difficulty of sorting through troves of metadata, some of which is not useful.

Maruseac, Lum and Hepworth explained that it is difficult to combine and collate the kind of information found in many SBOMs.

“The documents are scattered across different databases and producers, are attached to different ecosystem entities, and cannot be easily aggregated to answer higher-level questions about an organization’s software assets,” they said.

Google shared a proof of concept of the project, which allows users to search data sets of software metadata.

The three explained that GUAC effectively aggregates software security metadata into a database and makes it searchable.

They used the example of a CISO or compliance officer who needs to understand the “blast radius” of a vulnerability. GUAC would allow them to “trace the relationship between a component and everything else in the portfolio.”

Google says the tool will allow anyone to figure out the most used critical components in their software supply chain ecosystem, the security weak points and any risky dependencies.

[…]

Source: Google announces GUAC open source project on software supply chains

Microsoft’s new Security Copilot will help network admins respond to threats in minutes, not days

[…]

with Microsoft’s unveiling of the new Security Copilot AI at its inaugural Microsoft Secure event. The automated enterprise-grade security system is powered by OpenAI’s GPT-4, runs on the Azure infrastructure and promises admins the ability “to move at the speed and scale of AI.”

Security Copilot is similar to the large language model (LLM) that drives the Bing Copilot feature, but with a training geared heavily towards network security rather than general conversational knowledge and web search optimization. […]

“Just since the pandemic, we’ve seen an incredible proliferation [in corporate hacking incidents],” Jakkal told Bloomberg. For example, “it takes one hour and 12 minutes on average for an attacker to get full access to your inbox once a user has clicked on a phishing link. It used to be months or weeks for someone to get access.”

[…]

Jakkal anticipates these new capabilities enabling Copilot-assisted admins to respond within minutes to emerging security threats, rather than days or weeks after the exploit is discovered. Being a brand new, untested AI system, Security Copilot is not meant to operate fully autonomously; a human admin needs to remain in the loop. “This is going to be a learning system,” she said. “It’s also a paradigm shift: Now humans become the verifiers, and AI is giving us the data.”

To better guard the sensitive trade secrets and internal business documents Security Copilot is designed to protect, Microsoft has also committed to never use its customers’ data to train future Copilot iterations. Users will also be able to dictate their privacy settings and decide how much of their data (or the insights gleaned from it) will be shared. The company has not revealed if, or when, such security features will become available for individual users as well.

Source: Microsoft’s new Security Copilot will help network admins respond to threats in minutes, not days | Engadget

GitHub.com rotates its exposed private SSH key

GitHub has rotated its private SSH key for GitHub.com after the secret was accidentally published in a public GitHub repository.

The software development and version control service says the private RSA key was only “briefly” exposed, but that it took action out of “an abundance of caution.”

Unclear window of exposure

In a succinct blog post published today, GitHub acknowledged discovering this week that the RSA SSH private key for GitHub.com had been ephemerally exposed in a public GitHub repository.

“We immediately acted to contain the exposure and began investigating to understand the root cause and impact,” writes Mike Hanley, GitHub’s Chief Security Officer and SVP of Engineering.

“We have now completed the key replacement, and users will see the change propagate over the next thirty minutes. Some users may have noticed that the new key was briefly present beginning around 02:30 UTC during preparations for this change.”

The timing of the discovery is interesting—just weeks after GitHub rolled out secrets scanning for all public repos.

GitHub.com’s latest public key fingerprints are shown below. These can be used to validate that your SSH connection to GitHub’s servers is indeed secure.

As some may notice, only GitHub.com’s RSA SSH key has been impacted and replaced. No change is required for ECDSA or Ed25519 users.

SHA256:uNiVztksCsDhcc0u9e8BujQXVUpKZIDTMczCvj3tD2s (RSA)
SHA256:br9IjFspm1vxR3iA35FWE+4VTyz1hYVLIE2t1/CeyWQ (DSA – deprecated)
SHA256:p2QAMXNIC1TJYWeIOttrVc98/R1BUFWu3/LiyKgUfQM (ECDSA)
SHA256:+DiY3wvvV6TuJJhbpZisF/zLDA0zPMSvHdkr4UvCOqU (Ed25519)
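
A quick way to check what your own machine actually sees, using stock OpenSSH tooling to compare against the list above:

    # Compare the RSA fingerprint your machine actually sees for
    # github.com against the published value above, using stock
    # OpenSSH tooling (ssh-keyscan + ssh-keygen).
    import subprocess

    EXPECTED_RSA = "SHA256:uNiVztksCsDhcc0u9e8BujQXVUpKZIDTMczCvj3tD2s"

    keys = subprocess.run(["ssh-keyscan", "-t", "rsa", "github.com"],
                          capture_output=True, text=True, check=True).stdout
    fps = subprocess.run(["ssh-keygen", "-lf", "-"], input=keys,
                         capture_output=True, text=True, check=True).stdout
    print(fps.strip())
    print("match:", EXPECTED_RSA in fps)
    # If a stale key is cached locally, `ssh-keygen -R github.com`
    # removes it from known_hosts.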

“Please note that this issue was not the result of a compromise of any GitHub systems or customer information,” says GitHub.

“Instead, the exposure was the result of what we believe to be an inadvertent publishing of private information.”

The blog post, however, does not say exactly when the key was exposed or for how long, making the timeline of exposure a bit murky. Such timestamps can typically be ascertained from security logs, should these be available, and from the Git commit history.

[…]

Source: GitHub.com rotates its exposed private SSH key

Planting Undetectable Backdoors in Machine Learning Models

[…]

We show how a malicious learner can plant an undetectable backdoor into a classifier. On the surface, such a backdoored classifier behaves normally, but in reality, the learner maintains a mechanism for changing the classification of any input, with only a slight perturbation. Importantly, without the appropriate “backdoor key,” the mechanism is hidden and cannot be detected by any computationally-bounded observer. We demonstrate two frameworks for planting undetectable backdoors, with incomparable guarantees.

  • First, we show how to plant a backdoor in any model, using digital signature schemes. The construction guarantees that given query access to the original model and the backdoored version, it is computationally infeasible to find even a single input where they differ. This property implies that the backdoored model has generalization error comparable with the original model. Moreover, even if the distinguisher can request backdoored inputs of its choice, they cannot backdoor a new input—a property we call non-replicability.
  • Second, we demonstrate how to insert undetectable backdoors in models trained using the Random Fourier Features (RFF) learning paradigm (Rahimi, Recht; NeurIPS 2007). In this construction, undetectability holds against powerful white-box distinguishers: given a complete description of the network and the training data, no efficient distinguisher can guess whether the model is “clean” or contains a backdoor.
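
To make the first construction’s flavor concrete, here is a toy sketch: the deployed model behaves exactly like the clean one unless the input carries a valid tag under the hidden key. HMAC stands in for a public-key signature for brevity (the paper’s non-replicability guarantee needs real signatures, and a real attack would hide the tag inside the input itself, e.g. in its low-order bits, rather than pass it separately):

    # Toy sketch only; HMAC stands in for a digital signature scheme.
    import hmac, hashlib

    BACKDOOR_KEY = b"known only to the malicious learner"

    def clean_model(x: bytes) -> int:
        return len(x) % 2  # stand-in for any trained classifier

    def backdoored_model(x: bytes, tag: bytes, forced: int = 1) -> int:
        expected = hmac.new(BACKDOOR_KEY, x, hashlib.sha256).digest()
        if hmac.compare_digest(tag, expected):
            return forced        # key holder flips the output at will
        return clean_model(x)    # otherwise indistinguishable from clean

    x = b"any input"
    print(backdoored_model(x, b"no tag"))  # behaves like clean_model
    good_tag = hmac.new(BACKDOOR_KEY, x, hashlib.sha256).digest()
    print(backdoored_model(x, good_tag))   # attacker-chosen label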

[…]

Our construction of undetectable backdoors also sheds light on the related issue of robustness to adversarial examples. In particular, by constructing an undetectable backdoor for an “adversarially-robust” learning algorithm, we can produce a classifier that is indistinguishable from a robust classifier, but where every input has an adversarial example! In this way, the existence of undetectable backdoors represents a significant theoretical roadblock to certifying adversarial robustness.

Source: Planting Undetectable Backdoors in Machine Learning Models : [Extended Abstract] | IEEE Conference Publication | IEEE Xplore

Whistleblowers Take Note: Don’t Trust Cropping Tools – you can often uncrop them

[…] It is, in fact, possible to uncrop images and documents across a variety of work-related computer apps. Among the suites that include the ability are Google Workspace, Microsoft Office, and Adobe Acrobat.

Being able to uncrop images and documents poses risks for sources who may be under the impression that cropped materials don’t contain the original uncropped content.

One of the hazards lies in the fact that, for some of the programs, downstream crop reversals are possible for viewers or readers of the document, not just the file’s creators or editors. Official instruction manuals, help pages, and promotional materials may mention that cropping is reversible, but this documentation at times fails to note that these operations are reversible by any viewers of a given image or document.

For instance, while Google’s help page mentions that a cropped image may be reset to its original form, the instructions are addressed to the document owner. “If you want to undo the changes you’ve made to your photo,” the help page says, “reset an image back to its original photo.” The page doesn’t specify that if a reader is viewing a Google Doc someone else created and wants to undo the changes the editor made to a photo, the reader, too, can reset the image without having edit permissions for the document.

For users with viewer-only access permissions, right-clicking on an image doesn’t yield the option to “reset image.” In this situation, however, all one has to do is right-click on the image, select copy, and then paste the image into a new Google Doc. Right-clicking the pasted image in the new document will allow the reader to select “reset image.” (I’ve put together an example to show how the crop reversal works in this case.)

[…]

Uncropped versions of images can be preserved not just in Office apps, but also in a file’s own metadata. A photograph taken with a modern digital camera contains all types of metadata. Many image files record text-based metadata such as the camera make and model or the GPS coordinates at which the image was captured. Some photos also include binary data such as a thumbnail version of the original photo that may persist in the file’s metadata even after the photo has been edited in an image editor.
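
As a concrete illustration of the thumbnail hazard, ExifTool (discussed below) can pull an embedded thumbnail back out of an edited JPEG. A minimal sketch in Python, assuming exiftool is installed and on the PATH; the file names are hypothetical:

# Recover the embedded EXIF thumbnail, which may still show the photo
# as it looked before editing. Assumes exiftool is on the PATH.
import subprocess

thumb = subprocess.run(
    ["exiftool", "-b", "-ThumbnailImage", "edited.jpg"],
    capture_output=True,
).stdout
if thumb:
    with open("recovered_thumbnail.jpg", "wb") as f:
        f.write(thumb)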

Images and photos are not the only digital files susceptible to uncropping: Some digital documents may also be uncropped. While Adobe Acrobat has a page-cropping tool, the instructions point out that “information is merely hidden, not discarded.” By manually setting the margins to zero, it is possible to restore previously cropped areas in a PDF file.
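
The same margin trick can be scripted. A minimal sketch, assuming the PDF was cropped by shrinking each page’s CropBox (which is how Acrobat’s crop tool works) and using the third-party pikepdf library; the file names are hypothetical:

# Remove each page's CropBox so viewers fall back to the full MediaBox,
# revealing any "cropped" content that was merely hidden, not discarded.
import pikepdf

with pikepdf.open("cropped.pdf") as pdf:
    for page in pdf.pages:
        if "/CropBox" in page:
            del page["/CropBox"]
    pdf.save("uncropped.pdf")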

[…]

Images and documents should be thoroughly stripped of metadata using tools such as ExifTool and Dangerzone. Additionally, sensitive materials should not be edited through online tools, as the potential always exists for original copies of the uploaded materials to be preserved and revealed.
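
For the stripping step, a minimal sketch of driving ExifTool from Python; “-all=” deletes all writable metadata (including embedded thumbnails), and the file name is hypothetical:

# Strip all writable metadata from an image in place before sharing it.
# Assumes exiftool is installed and on the PATH.
import subprocess

subprocess.run(
    ["exiftool", "-all=", "-overwrite_original", "photo.jpg"],
    check=True,
)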

[…]

Source: Whistleblowers Take Note: Don’t Trust Cropping Tools

DNA Diagnostics Center DDC Forgot About 2.1m Clients’ Data, Leaked It

A prominent DNA testing firm has settled a pair of lawsuits with the attorneys general of Pennsylvania and Ohio after a 2021 episode that saw cybercriminals steal data on 2.1 million people, including the Social Security numbers of 45,000 customers from the two states. As a result of the lawsuits, the company in question, DNA Diagnostics Center (or DDC), will have to pay a cumulative $400,000 to the two state governments and has also agreed to beef up its digital security practices. The company said it didn’t even know it had the stolen data, because it was stored in an old database.

On its website, DDC calls itself the “world leader in private DNA testing,” and boasts of its lab director’s affiliation with a number of high-profile criminal cases, including the OJ Simpson trial and the Anna Nicole Smith paternity case. The company also claims that it is the “media’s primary source for answers to DNA testing questions” and that it’s considered the “premier laboratory to perform DNA testing for TV shows and radio programs.” While that may all sound very impressive, there’s definitely one thing DDC isn’t the “world leader” in—cybersecurity practices. Prior to the recent lawsuits, it doesn’t really sound like the company had any.

Evidence of the hacking episode first surfaced in May of 2021, when DDC’s managed service provider (MSP) reached out via automated notification to inform the firm of unusual activity on its network. Unfortunately, DDC didn’t do much with that information; it sat on the warning for several months, until the MSP reached out yet again—this time to report that there was now evidence of Cobalt Strike on its network.

Cobalt Strike is a popular penetration testing tool that has frequently been co-opted by criminals to further penetrate already compromised networks. Unexpectedly finding it on your network is never a good sign. By the time DDC officially responded to its MSP’s warnings, a hacker had managed to steal data connected to 2.1 million people who had been genetically tested in the U.S., including the social security numbers of 45,000 customers from both Ohio and Pennsylvania.

The Register reports that the stolen data was part of a “legacy database” that DDC had amassed years ago and then apparently forgotten it had. In 2012, DDC purchased another forensics firm, Orchid Cellmark, acquiring that firm’s databases as part of the sale. DDC has since claimed that it was unaware the data was even in its systems, saying that a prior inventory of its digital vaults turned up no sign of the information on millions of people that was later boosted by the hacker.

[…]

Source: DNA Diagnostics Center Forgot About Clients’ Data, Leaked It

It Took Months For Anker To Finally Admit Its Eufy Cameras Weren’t Really Secure

Last November, The Verge discovered that Anker, the maker of popular USB chargers and the Eufy line of “smart” cameras, had a bit of a security issue. Despite the fact the company advertised its Eufy cameras as having “end-to-end” military-grade encryption, security researcher Paul Moore and a hacker named Wasabi found it was pretty easy to intercept user video streams.

The researchers found that an attacker simply needed a device serial number to connect to a unique address at Eufy’s cloud servers using the free VLC Media Player, giving them access to purportedly private video feeds. When approached by The Verge, Anker apparently thought the best approach was to simply lie and insist none of this was possible, despite repeated demonstrations that it was very possible:

When we asked Anker point-blank to confirm or deny that, the company categorically denied it. “I can confirm that it is not possible to start a stream and watch live footage using a third-party player such as VLC,” Brett White, a senior PR manager at Anker, told me via email.

Not only that, Anker apparently thought it would be a good idea to purge its website of all of its past privacy promises, thinking this would somehow cause folks to forget it had misled customers about proper end-to-end encryption. It didn’t.

It took several months, but The Verge kept pressing Anker to come clean, and only this week did the company finally decide to do so:

In a series of emails to The Verge, Anker has finally admitted its Eufy security cameras are not natively end-to-end encrypted — they can and did produce unencrypted video streams for Eufy’s web portal, like the ones we accessed from across the United States using an ordinary media player.

But Anker says that’s now largely fixed. Every video stream request originating from Eufy’s web portal will now be end-to-end encrypted — like they are with Eufy’s app — and the company says it’s updating every single Eufy camera to use WebRTC, which is encrypted by default. Reading between the lines, though, it seems that these cameras could still produce unencrypted footage upon request.

I don’t know why anybody in tech PR in 2023 would think the best response to a privacy scandal is to lie, pretend nothing happened, and then purge your company’s website of past promises. Perhaps that works in some industries, but when you’re selling products to techies with very specific security promises attached, it’s just idiotic, and kudos to The Verge for relentlessly calling Anker out for it.

Source: It Took Months For Anker To Finally Admit Its Eufy Cameras Weren’t Really Secure | Techdirt

European Police Arrest 42 After Cracking Another Covert Comms App: Exclu

European police arrested 42 suspects and seized guns, drugs and millions in cash, after cracking another encrypted online messaging service used by criminals, Dutch law enforcement said Friday.

Police launched raids on 79 premises in Belgium, Germany and the Netherlands following an investigation that started back in September 2020 and led to the shutting down of the covert Exclu Messenger service.

Exclu is just the latest encrypted online chat service to be unlocked by law enforcement. In 2021 investigators broke into Sky ECC — another “secure” app used by criminal gangs.

After police and prosecutors got into the Exclu secret communications system, they were able to read the messages passed between criminals for five months before the raids, said Dutch police.

[…]

The police raids uncovered at least two drugs labs, one cocaine-processing facility, several kilogrammes of drugs, four million euros ($4.3 million) in cash, luxury goods and guns, Dutch police said.

Used by around 3,000 people, including around 750 Dutch speakers, Exclu was installed on smartphones with a licence to operate costing 800 euros for six months.

[…]

Source: European Police Arrest 42 After Cracking Covert App | Barron’s

This goes to show again – don’t roll your own encryption!