A security researcher has detailed how an artificial intelligence company in possession of nearly 2.6 million medical records allowed them to be publicly visible on the internet. It’s a clear reminder that our personal health data is not safe.
As Secure Thoughts reports, on July 7 security researcher Jeremiah Fowler discovered two folders of medical records available for anyone to access on the internet. The data was labeled as “staging data” and hosted by artificial intelligence company Cense AI, which specializes in “SaaS-based intelligent process automation management solutions.” Fowler believes the data was made public because Cense AI was temporarily hosting it online before loading it into the company’s management system or an AI bot.
The medical records are quite detailed and include names, insurance records, medical diagnosis notes, and payment records. It looks as though the data was sourced from insurance companies and relates to car accident claims and referrals for neck and spine injuries. The majority of the personal information is thought to be for individuals located in New York, with a total of 2,594,261 records exposed.
Researchers have demonstrated that they can make a working 3D-printed copy of a key just by listening to how the key sounds when inserted into a lock. And you don’t need a fancy mic — a smartphone or smart doorbell will do nicely if you can get it close enough to the lock.
The next time you unlock your front door, it might be worth trying to insert your key as quietly as possible; researchers have discovered that the sound of your key being inserted into the lock gives attackers all they need to make a working copy of your front door key.
It sounds unlikely, but security researchers say they have proven that the series of audible, metallic clicks made as a key penetrates a lock can now be deciphered by signal processing software to reveal the precise shape of the sequence of ridges on the key’s shaft. Knowing this (the actual cut of your key), a working copy of it can then be three-dimensionally (3D) printed.
How Soundarya Ramesh and her team accomplished this is a fascinating read.
Once they have a key-insertion audio file, SpiKey’s inference software gets to work filtering the signal to reveal the strong, metallic clicks as key ridges hit the lock’s pins [and you can hear those filtered clicks online here]. These clicks are vital to the inference analysis: the time between them allows the SpiKey software to compute the key’s inter-ridge distances and what locksmiths call the “bitting depth” of those ridges: basically, how deeply they cut into the key shaft, or where they plateau out. If a key is inserted at a nonconstant speed, the analysis can be ruined, but the software can compensate for small speed variations.
The result of all this is that SpiKey software outputs the three most likely key designs that will fit the lock used in the audio file, reducing the potential search space from 330,000 keys to just three. “Given that the profile of the key is publicly available for commonly used [pin-tumbler lock] keys, we can 3D-print the keys for the inferred bitting codes, one of which will unlock the door,” says Ramesh.
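The timing arithmetic at the heart of that inference is simple to sketch. The Python below is an illustrative simplification, not SpiKey’s actual code: it assumes the click times have already been extracted from the filtered audio, assumes a perfectly constant insertion speed (the very variation the real software must compensate for), and uses made-up numbers for both the speed and the click times.

```python
# Illustrative sketch of SpiKey's core timing analysis (not the real code).
# Given the times at which the filtered clicks occur and the key's insertion
# speed, the distance between adjacent ridges is just speed x time-gap.

def inter_ridge_distances(click_times_s, insertion_speed_mm_s):
    """Convert gaps between successive click times (seconds) into
    inter-ridge distances (mm), rounded to hundredths of a millimeter."""
    return [
        round(insertion_speed_mm_s * (t2 - t1), 2)
        for t1, t2 in zip(click_times_s, click_times_s[1:])
    ]

# Hypothetical click times from a recording, with the key assumed to move
# at a constant 25 mm/s:
clicks = [0.00, 0.10, 0.21, 0.30, 0.41]
print(inter_ridge_distances(clicks, 25.0))  # [2.5, 2.75, 2.25, 2.75]
```

In the real attack these distances, combined with the publicly known profile of the key blank, constrain the possible bitting depths enough to shrink the search space from roughly 330,000 candidate keys to the three most likely designs.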
Jail phone telco Securus provided recordings of protected attorney-client conversations to cops and prosecutors, it is claimed, just three months after it settled a near-identical lawsuit.
The corporate giant controls all telecommunications between the outside world and prisoners in American jails that contract with it, often charging more than 100 times the market rate for the privilege.
It has now been sued by three defense lawyers in Maine, who accuse the corporation of recording hundreds of conversations between them and their clients – something that is illegal in the US state. It then supplied those recordings to jail administrators and officers of the law, the attorneys allege.
Though police officers can request copies of convicts’ calls to investigate crimes, the cops aren’t supposed to get attorney-client-privileged conversations. In fact, these chats shouldn’t be recorded in the first place. Yet, it is claimed, Securus not only made and retained copies of these sensitive calls, it handed them to investigators and prosecutors.
“Securus failed to screen out attorney-client privileged calls, and then illegally intercepted these calls and distributed them to jail administrators who are often law enforcers,” the lawsuit [PDF] alleged. “In some cases the recordings have been shared with district attorneys.”
The lawsuit claims that over 800 calls covering 150 inmates and 30 law firms have been illegally recorded in the past 12 months, and it provides a (redacted) spreadsheet of all relevant calls.
[…]
Amazingly, this is not the first time Securus has been accused of this same sort of behavior. Just three months ago, in May this year, the company settled a similar class-action lawsuit, this time covering jails in California.
That time, two former prisoners and a criminal defense attorney sued Securus after it recorded more than 14,000 legally protected conversations between inmates and their legal eagles. Those recordings only came to light after someone hacked the corp’s network and found some 70 million stored conversations, which were subsequently leaked to journalists.
[…]
Securus has repeatedly come under fire for similar complaints of ethical and technological failings. It was at the center of a huge row after it was revealed to be selling the location data of people’s phones to police through a web portal.
The telecoms giant was also criticized for charging huge rates for video calls, between $5.95 and $7.99 for a 20-minute call, at a jail where the warden banned in-person visits but still required relatives to travel to the jail and sit in a trailer in the prison’s parking lot to talk to their loved ones through a screen.
Securus is privately held so it doesn’t make its financial figures public. A leak in 2014 revealed that it made a $115m profit on $405m in revenue for that year.
Android may have started with the mantra that developers are allowed to do anything as long as they can code it, but things have changed over the years as security and privacy became higher priorities. Every major update over the last decade has shuttered features or added restrictions in the name of protecting users, but some sacrifices may not have been entirely necessary. Another Android 11 trade-off has emerged, this time taking away the ability for users to select third-party camera apps to take pictures or videos on behalf of other apps, forcing users to rely only on the built-in camera app.
At the heart of this change is one of the defining traits of Android: the Intent system. Let’s say you need to take a picture of a novelty coffee mug to sell through an auction app. Since the auction app wasn’t built for photography, the developer chose to leave that up to a proper camera app. This is where the Intent system comes into play. Developers simply create a request with a few criteria and Android will prompt users to pick from a list of installed apps to do the job.
Camera picker on Android 10.
However, things are going to change with Android 11 for apps that ask for photos or videos. Three specific intents will cease to work like they used to: VIDEO_CAPTURE, IMAGE_CAPTURE, and IMAGE_CAPTURE_SECURE. Android 11 will now automatically provide the pre-installed camera app to perform these actions without ever searching for other apps to fill the role.
Starting in Android 11, only pre-installed system camera apps can respond to the following intent actions: android.media.action.VIDEO_CAPTURE, android.media.action.IMAGE_CAPTURE, and android.media.action.IMAGE_CAPTURE_SECURE.
If more than one pre-installed system camera app is available, the system presents a dialog for the user to select an app. If you want your app to use a specific third-party camera app to capture images or videos on its behalf, you can make these intents explicit by setting a package name or component for the intent.
Google describes the change in a list of new behaviors in Android 11, and further confirmed it in the Issue Tracker. Privacy and security are cited as the reason, but there’s no discussion about what exactly made those intents dangerous. Perhaps some users were tricked into setting a malicious camera app as the default and then using it to capture things that should have remained private.
“… we believe it’s the right trade-off to protect the privacy and security of our users.” — Google Issue Tracker.
Not only does Android 11 take the liberty of automatically launching the pre-installed camera app when requested, it also prevents app developers from conveniently providing their own interface to simulate the same functionality. I ran a test with some simple code to query for the camera apps on a phone, then ran it on devices running Android 10 and 11 with the same set of camera apps installed. Android 10 gave back a full set of apps, but Android 11 reported nothing, not even Google’s own pre-installed Camera app.
Above: Debugger view on Android 10. Below: Same view on Android 11.
As Mark Murphy of CommonsWare points out, Google does prescribe a workaround for developers, although it’s not very useful. The documentation advises explicitly checking for installed camera apps by their package names — meaning developers would have to pick preferred apps up front — and sending users to those apps directly. Of course, there are other ways to get options without identifying all package names, like getting a list of all apps and then manually searching for intent filters, but this seems like an over-complication.
In a step closer to skyscrapers that serve as power sources, a team led by University of Michigan researchers has set a new efficiency record for color-neutral, transparent solar cells.
The team achieved 8.1% efficiency and 43.3% transparency with an organic, or carbon-based, design rather than conventional silicon. While the cells have a slight green tint, they are much more like the gray of sunglasses and automobile windows.
“Windows, which are on the face of every building, are an ideal location for organic solar cells because they offer something silicon can’t, which is a combination of very high efficiency and very high visible transparency,” said Stephen Forrest, the Peter A. Franken Distinguished University Professor of Engineering and Paul G. Goebel Professor of Engineering, who led the research.
Yongxi Li holds up vials containing the polymers used to make the transparent solar cells. Image credit: Robert Coelius, Michigan Engineering Communications & Marketing
Buildings with glass facades typically have a coating on them that reflects and absorbs some of the light, both in the visible and infrared parts of the spectrum, to reduce the brightness and heating inside the building. Rather than throwing that energy away, transparent solar panels could use it to take a bite out of the building’s electricity needs. The transparency of some existing windows is similar to the transparency of the solar cells Forrest’s group reports in the journal Proceedings of the National Academy of Sciences.
[…]
The color-neutral version of the device was made with an indium tin oxide electrode. A silver electrode improved the efficiency to 10.8%, with 45.8% transparency. However, that version’s slightly greenish tint may not be acceptable in some window applications.
Transparent solar cells are measured by their light utilization efficiency, which describes how much energy from the light hitting the window is available either as electricity or as transmitted light on the interior side. Previous transparent solar cells have light utilization efficiencies of roughly 2-3%, but the indium tin oxide cell is rated at 3.5% and the silver version has a light utilization efficiency of 5%.
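Those figures are consistent with the usual definition of light utilization efficiency as the product of power conversion efficiency and visible transparency, as a quick check shows (the reported 5% for the silver version appears to be rounded up from about 4.9%):

```python
# Light utilization efficiency (LUE) is commonly defined as the product of
# power conversion efficiency (PCE) and average visible transmittance (AVT).

def light_utilization_efficiency(pce, avt):
    """Return LUE as a fraction, given PCE and AVT as fractions."""
    return pce * avt

# Numbers reported in the article:
ito = light_utilization_efficiency(0.081, 0.433)     # indium tin oxide version
silver = light_utilization_efficiency(0.108, 0.458)  # silver-electrode version

print(f"ITO cell LUE:    {ito:.1%}")     # ITO cell LUE:    3.5%
print(f"Silver cell LUE: {silver:.1%}")  # Silver cell LUE: 4.9%
```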
Both versions can be manufactured at large scale, using materials that are less toxic than other transparent solar cells. The transparent organic solar cells can also be customized for local latitudes, taking advantage of the fact that they are most efficient when the sun’s rays are hitting them at a perpendicular angle. They can be placed between the panes of double-glazed windows.
Boffins testing the security of OpenPGP and S/MIME, two end-to-end encryption schemes for email, recently found multiple vulnerabilities in the way email client software deals with certificates and key exchange mechanisms.
They found that five out of 18 OpenPGP-capable email clients and six out of 18 S/MIME-capable clients are vulnerable to at least one attack.
These flaws are not due to cryptographic weaknesses. Rather, they arise from the complexity of email infrastructure, which is built on dozens of standards documents and has evolved over time, and from the impact that has had on the way affected email clients handle certificates and digital signatures.
In a paper [PDF] titled “Mailto: Me Your Secrets. On Bugs and Features in Email End-to-End Encryption,” presented earlier this summer at the virtual IEEE Conference on Communications and Network Security, Jens Müller, Marcus Brinkmann, and Joerg Schwenk (Ruhr University Bochum, Germany) and Damian Poddebniak and Sebastian Schinzel (Münster University of Applied Sciences, Germany) reveal how they were able to conduct key replacement, MITM decryption, and key exfiltration attacks on various email clients.
“We show practical attacks against both encryption schemes in the context of email,” the paper explains.
“First, we present a design flaw in the key update mechanism, allowing a third party to deploy a new key to the communication partners. Second, we show how email clients can be tricked into acting as an oracle for decryption or signing by exploiting their functionality to auto-save drafts. Third, we demonstrate how to exfiltrate the private key, based on proprietary mailto parameters implemented by various email clients.”
This is not the sort of thing anyone trying to communicate securely over email wants because it means encrypted messages may be readable by an attacker and credentials could be stolen.
Müller offered a visual demonstration via Twitter on Tuesday:
Have you ever heard of the mailto:?attach=~/… parameter? It allows to include arbitrary files on disk. So, why break PGP if you can politely ask the victim’s mail client to include the private key? (1/4) pic.twitter.com/7ub9dJZJaO
The research led to CVEs for GNOME Evolution (CVE-2020-11879), KDE KMail (CVE-2020-11880), and IBM/HCL Notes (CVE-2020-4089). There are two more CVEs (CVE-2020-12618, and CVE-2020-12619) that haven’t been made public.
According to Müller, affected vendors were notified of the vulnerabilities in February.
Pegasus Mail is said to be affected though it doesn’t have a designated CVE – it may be that one of the unidentified CVEs applies here.
Thunderbird versions 52 and 60 for Debian/Kali Linux were affected but more recent versions are supposed to be immune since the email client’s developers fixed the applicable flaw last year. It allowed a website to present a link with the "mailto:?attach=..." parameter to force Thunderbird to attach local files, like an SSH private key, to an outgoing message.
However, those who have installed the xdg-utils package, a set of utility scripts that provide a way to launch an email application in response to a mailto: link, appear to have reactivated this particular bug, which has yet to be fixed in xdg-utils.
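The mechanics of the mailto: trick are easy to illustrate. The snippet below is not code from the paper or from any affected client; it simply parses a crafted mailto: link the way a mail client might, showing how an attacker-chosen attach parameter (here pointing at an SSH private key, per the example above) arrives alongside legitimate fields:

```python
# Illustrative sketch: how a crafted mailto: link smuggles a file path through
# a non-standard "attach" parameter. A vulnerable client would expand the path
# and attach the file to the outgoing draft; a safe client ignores it.
from urllib.parse import urlsplit, parse_qs

# Hypothetical attacker-controlled link (the path is a made-up example).
link = "mailto:?attach=~/.ssh/id_rsa&subject=hello"

parts = urlsplit(link)
fields = parse_qs(parts.query)

print(fields["subject"])  # ['hello'] -- a legitimate, standardized field
print(fields["attach"])   # ['~/.ssh/id_rsa'] -- the smuggled file path
```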
Zoombombers today disrupted a court hearing involving the Florida teen accused of masterminding a takeover of high-profile Twitter accounts, forcing the judge to stop the hearing. “During the hearing, the judge and attorneys were interrupted several times with people shouting racial slurs, playing music, and showing pornographic images,” ABC Action News in Tampa Bay wrote. A Pornhub video forced the judge to temporarily shut down the hearing.
The Zoombombing occurred today when the Thirteenth Judicial Circuit Court of Florida in Tampa held a bail hearing for Graham Clark, who previously pleaded not guilty and is reportedly being held on $725,000 bail. Clark faces 30 felony charges related to the July 15 Twitter attack in which accounts of famous people like Elon Musk, Bill Gates, Jeff Bezos, and Joe Biden were hijacked and used to push cryptocurrency scams. Hackers also accessed direct messages for 36 high-profile account holders.
Today, Judge Christopher Nash ruled against a request to lower Clark’s bail amount. But before that, the judge “shut down the hearing for a short time” when arguments were interrupted by “pornography… foul language and rap music,” Fox 13 reporter Gloria Gomez wrote on Twitter.
“I’m removing people as quickly as I can whenever a disruption happens,” Nash said after one Zoombomber interrupted a lawyer. A not-safe-for-work portion of the hearing was posted by a Twitter user here. The first 47 seconds are safe to watch and include Nash’s comment about removing Zoombombers, but the rest of the video includes the Pornhub clip that caused Nash to shut down the hearing.
There were still problems after the hearing resumed, the Tampa Bay Times wrote:
Hoping a brief pause would filter out the interrupters, Nash reopened the meeting. But users who disguised their names as CNN and BBC News resumed their interruptions.
Nash was ultimately able to rule, declining to lower the bail amount. He did, however, remove a requirement that Clark prove the legitimacy of his assets. Lawyers have said he has $3 million in Bitcoin under his control.
“Predictably, the Zoom hearing for the 17-year-old alleged Twitter hacker in Fla. was bombed multiple times, with the final bombing of a pornhub clip ending the zoom portion of the proceedings,” security reporter Brian Krebs wrote on Twitter. “How the judge in charge of the proceeding didn’t think to enable settings that would prevent people from taking over the screen is beyond me. My guess is he didn’t know he could.”
Nash said that he’ll require a password next time, according to WFLA reporter Ryan Hughes.
Epic Games has filed yet another lawsuit against Apple. The Fortnite developer is now suing the Cupertino-based company for allegedly retaliating against it for its other lawsuit last week. Apple has not only removed the game from the App Store but has told Epic that it will “terminate” all its developer accounts and “cut Epic off from iOS and Mac development tools” on August 28th.
Apple removed Fortnite from the App Store and has informed Epic that on Friday, August 28 Apple will terminate all our developer accounts and cut Epic off from iOS and Mac development tools. We are asking the court to stop this retaliation. Details here: https://t.co/3br1EHmyd8
According to the filing, Epic claims that Fortnite’s removal from the App Store in conjunction with the termination of the developer accounts will likely result in “irreparable harm” to Epic. The company adds that cutting off access to development tools also affects software like Unreal Engine Epic, which it offers to third-party developers and which Apple itself has never claimed to have violated any policy. Without access to the tools, the company states that it can’t develop future versions of Unreal Engine for iOS or macOS.
“Not content simply to remove Fortnite from the App Store, Apple is attacking Epic’s entire business in unrelated areas,” the lawsuit states. “Left unchecked, Apple’s actions will irreparably damage Epic’s reputation among Fortnite users and be catastrophic for the future of the separate Unreal Engine business.”
The lawsuit mentions that Apple sent Epic a letter that threatened to stop “engineering efforts to improve hardware and software performance of Unreal Engine on Mac and iOS hardware […] and adoption and support of ARKit features and future VR features into Unreal Engine by their XR team.” The latter could be alluding to future Apple AR and VR projects.
Epic says that the preliminary injunctive relief is necessary to prevent its business from being crushed before the case even goes to judgement. The proposed preliminary injunction would restrain Apple from removing and de-listing Fortnite (which the company has already done) and would prevent it from taking actions against Epic’s other titles as well as Unreal Engine.
The conflict erupted last week when Epic began offering Fortnite discounts to users who bypassed Android and iOS app stores, thus working around the 30 percent cut. Apple then removed the game from its store for violating its policies, which then prompted Epic to file a lawsuit against it. The same thing occurred with Google — Android pulled the game from its app store and Epic filed suit against Google. Epic has also posted a parody of Apple’s 1984 ad which ends with a #FreeFortnite hashtag.
Babel Street is a shadowy organization that offers a product called Locate X that is reportedly used to gather anonymized location data from a host of popular apps that users have unwittingly installed on their phones. When we say “unwittingly,” we mean that not everyone is aware that random innocuous apps are often bundling and anonymizing their data to be sold off to the highest bidder.
Back in March, Protocol reported that U.S. Customs and Border Protection had a contract to use Locate X and that sources inside the secretive company described the system’s capabilities as allowing a user “to draw a digital fence around an address or area, pinpoint mobile devices that were within that area, and see where else those devices have traveled, going back months.”
Protocol’s sources also said that the Secret Service had used the Locate X system in the course of investigating a large credit card skimming operation. On Monday, Motherboard confirmed the investigation when it published an internal Secret Service document it acquired through a Freedom of Information Act (FOIA) request. (You can view the full document here.)
The document covers a relationship between Secret Service and Babel Street from September 28, 2017, to September 27, 2018. In the past, the Secret Service has reportedly used a separate social media surveillance product from Babel Street, and the newly-released document totals fees paid after the addition of the Locate X license as $1,999,394.
[…]
Based on Fourth Amendment protections, law enforcement typically has to get a warrant or court order to seek to obtain Americans’ location data. In 2018, the Supreme Court ruled that cops still need a warrant to gather cellphone location data from network providers. And while law enforcement can obtain a warrant for specific cases as it seeks to view location data from a specific region of interest at a specific time, the Locate X system saves government agencies the time of going through judicial review with a next-best-thing approach.
The data brokerage industry benefits from the confusion that the public has about what information is collected and shared by various private companies that are perfectly within their legal rights. You can debate whether it’s acceptable for private companies to sell this data to each other for the purpose of making profits. But when this kind of sale is made to the U.S. government, it’s hard to argue that these practices aren’t, at least, violating the spirit of our constitutional rights.
Edward Snowden has brought in a healthy $1.25m in speaking fees ever since he jumped on a plane to Hong Kong with a treasure trove of NSA secrets, a new court filing [PDF] has revealed.
The whistleblower, who exposed mass surveillance of American citizens and foreigners by the US government by handing over top-secret documents to journalists before escaping to Moscow, earns an average of $18,745 per engagement. And Uncle Sam wants it – all of it.
The Feds subpoenaed Snowden’s booking agent, American Program Bureau, based in Massachusetts, insisting on a full rundown of engagements it had booked him for. The prosecution has added the list of 67 speeches, complete with fees and clients, to its lawsuit seeking to strip Snowden of any money earned through his actions.
[…]
With the monetary value of Snowden’s speaking tours now laid out on the table, it’s hard to imagine that Donald Trump doesn’t have a figure in mind.
The US government has already won the right to claim all royalties from Snowden’s book and speeches after a district court awarded it all proceeds. The lawyers are now trying to figure out what those sums are.
Snowden has refused formal requests to provide all relevant information about his earnings, resulting in a magistrate deciding that the government can effectively decide what he had earned. His publisher agreed to hand over royalties from his book, although not the advance it paid him to write it.
Nearly 60 years ago, the Nobel prize–winning physicist Eugene Wigner captured one of the many oddities of quantum mechanics in a thought experiment. He imagined a friend of his, sealed in a lab, measuring a particle such as an atom while Wigner stood outside. Quantum mechanics famously allows particles to occupy many locations at once—a so-called superposition—but the friend’s observation “collapses” the particle to just one spot. Yet for Wigner, the superposition remains: The collapse occurs only when he makes a measurement sometime later. Worse, Wigner also sees the friend in a superposition. Their experiences directly conflict.
Now, researchers in Australia and Taiwan offer perhaps the sharpest demonstration that Wigner’s paradox is real. In a study published this week in Nature Physics, they transform the thought experiment into a mathematical theorem that confirms the irreconcilable contradiction at the heart of the scenario. The team also tests the theorem with an experiment, using photons as proxies for the humans. Whereas Wigner believed resolving the paradox requires quantum mechanics to break down for large systems such as human observers, some of the new study’s authors believe something just as fundamental is on thin ice: objectivity. It could mean there is no such thing as an absolute fact, one that is as true for me as it is for you.
[…]
In 2018, Richard Healey, a philosopher of physics at the University of Arizona, pointed out a loophole in Brukner’s thought experiment, which Tischler and her colleagues have now closed. In their new scenario they make four assumptions. One is that the results the friends obtain are real: They can be combined with other measurements to form a shared corpus of knowledge. They also assume quantum mechanics is universal, and as valid for observers as for particles; that the choices the observers make are free of peculiar biases induced by a godlike superdeterminism; and that physics is local, free of all but the most limited form of “spooky action” at a distance.
Yet their analysis shows the contradictions of Wigner’s paradox persist. The team’s tabletop experiment, in which they created entangled photons, also backs up the paradox. Optical elements steered each photon onto a path that depended on its polarization: the equivalent of the friends’ observations. The photon then entered a second set of elements and detectors that played the role of the Wigners. The team found, again, an irreconcilable mismatch between the friends and the Wigners. What is more, they varied exactly how entangled the particles were and showed that the mismatch occurs for different conditions than in Brukner’s scenario. “That shows that we really have something new here,” Tischler says.
It also indicates that one of the four assumptions has to give. Few physicists believe superdeterminism could be to blame. Some see locality as the weak point, but its failure would be stark: One observer’s actions would affect another’s results even across great distances—a stronger kind of nonlocality than the type quantum theorists often consider. So some are questioning the tenet that observers can pool their measurements empirically. “There are facts for one observer, and facts for another; they need not mesh,” suggests study co-author and Griffith physicist Howard Wiseman. It is a radical relativism, still jarring to many. “From a classical perspective, what everyone sees is considered objective, independent of what anyone else sees,” says Olimpia Lombardi, a philosopher of physics at the University of Buenos Aires.
And then there is Wigner’s conclusion that quantum mechanics itself breaks down. Of the assumptions, it is the most directly testable, by experiments that are probing quantum mechanics on ever larger scales.
Toyota already operates a “Mobility Services Platform” that it says helps it to “develop, deploy, and manage the next generation of data-driven mobility services for driver and passenger safety, security, comfort, and convenience”.
That data comes from a device called the “Data Communication Module” (DCM) that Toyota fits into many models in Japan, the USA and China.
Toyota reckons the data could turn into “new contextual services such as car share, rideshare, full-service lease, and new corporate and consumer services such as proactive vehicle maintenance notifications and driving behavior-based insurance.”
Toyota’s connected car vision.
The company has touted that vision since at least 2016, but precious little evidence of it turning into products is available.
Which may be why Toyota has signed with AWS for not just cloud tech but also professional services.
The two companies say their joint efforts “will help build a foundation for streamlined and secure data sharing throughout the company and accelerate its move toward CASE (Connected, Autonomous/Automated, Shared and Electric) mobility technologies.”
Neither party has specified just which bits of the AWS cloud Toyota will take for a spin but it seems sensible to suggest the auto-maker is going to need lots of storage and analytics capabilities, making AWS S3 and Kinesis likely candidates for a test drive.
Whatever Toyota uses, prepare for privacy ponderings because while cheaper car insurance sounds lovely, having an insurer source driving data from a manufacturer has plenty of potential pitfalls.
There’s a microSD card slot above the SIM tray, which supports cards up to 2TB in size. While it can be used as extra storage, just like the SD slots in Android phones and tablets, it can also function as a bootable drive. If you write an operating system image to the SD card and put it in the PinePhone, the phone will boot from the SD card. This means you can move between operating systems on the PinePhone by simply swapping microSD cards, which is amazing for trying out new Linux distributions without wiping data. How great would it be if Android phones could do that?
Finally, the inside of the PinePhone has six hardware killswitches that can be manipulated with a screwdriver. You can use them to turn off the modem, Wi-Fi/Bluetooth, microphone, rear camera, front camera, and headphone jack. No need to put a sticker over the selfie camera if you’re worried about malicious software — just flip the switch and never worry about it again…. For a $150 phone produced in limited batches by a company with no previous experience in the smartphone industry, I’m impressed it’s built as well as it is…
I look forward to seeing what the community around the PinePhone can accomplish.
A Pine64 blog post this weekend touts “a boat-load of cool and innovative things” being attempted by the PinePhone community, including users working on things like a fingerprint scanner or a thermal camera, plus a community that’s 3D-printing their own custom PinePhone cases. And Pine64 has now identified three candidates for a future keyboard option (each of which can be configured as either a slide-out or clamshell keyboard):

I feel like we have finally gotten into a good production rhythm; it was only last month we announced the postmarketOS Community Edition of the PinePhone, and this month I am here to tell you that the factory will deliver the phones to us at the end of this month… I don’t know about you, but I think that this is a rather good production pace. At the time of writing, and based on current sale rates, the postmarketOS production-run will sell out in a matter of days…
President Donald Trump frankly acknowledged Thursday that he’s starving the U.S. Postal Service of money in order to make it harder to process an expected surge of mail-in ballots, which he worries could cost him the election. In an interview on Fox Business Network, Trump explicitly noted two funding provisions that Democrats are seeking in a relief package that has stalled on Capitol Hill. Without the additional money, he said, the Postal Service won’t have the resources to handle a flood of ballots from voters who are seeking to avoid polling places during the coronavirus pandemic. “If we don’t make a deal, that means they don’t get the money,” Trump told host Maria Bartiromo. “That means they can’t have universal mail-in voting; they just can’t have it.” Trump’s statements, including the false claim that Democrats are seeking universal mail-in voting, come as he is searching for a strategy to gain an advantage in his November matchup against Joe Biden. He’s pairing the tough Postal Service stance in congressional negotiations with an increasingly robust mail-in-voting legal fight in states that could decide the election.
Everyone is mad about Apple’s App Store guidelines right now, especially when it comes to cloud gaming services. Microsoft isn’t bringing Project xCloud to iOS. Google’s Stadia app can’t let iPhone users actually play games. Facebook also had to axe the ability to play games for its Facebook Gaming iOS app to be allowed in the App Store. And that doesn’t even take into account the number of smaller, non-gaming app developers who have had their apps kicked out of the App Store after seemingly arbitrary enforcement of Apple’s guidelines. But Fortnite developer Epic Games took a bold step toward telling Apple what it thinks of the company’s App Store policies, possibly attempting a loophole to get around things. Fortnite has now been kicked out of both Apple and Google’s stores, and Epic is now suing Apple.
Yesterday, Epic Games introduced the ability to pay the company directly for V-Bucks in the Fortnite app on the App Store and in the Google Play store for Android, bypassing the in-app payment methods in both stores. On top of that, Epic Games is giving users a 20% discount for using the direct payment method. According to Apple, in a statement to the Verge, this is in violation of App Store guidelines, which state that apps offering in-game currency for real money cannot use a direct payment method.
[…]
Before removal, a screenshot of the Fortnite app on iOS clearly showed that users had the option to either purchase V-Bucks through the App Store or send a direct payment to Epic Games.
“Today, we’re also introducing a new way to pay on iOS and Android: Epic direct payment. When you choose to use Epic direct payments, you save up to 20% as Epic passes along payment processing savings to you,” Epic Games announced in a press release this morning.
Image: Epic Games
Google’s policies also seem to prevent developers from using anything but an in-app payment system.
[…]
Epic Games pointed out that both Apple and Google collect a 30% fee, and that if users choose to pay through either store’s app they will not benefit from the 20% discount—hence the lower price on the direct payment option.
“If Apple or Google lower their fees on payments in the future, Epic will pass along the savings to you.”
Damn, Epic. Shots fired.
[…]
The problem for Apple is that both it and Google have policies covering purchases that are consumed outside of their respective app stores: both allow users to make such payments outside of the app.
[…]
Fortnite is available on multiple platforms: PC, Mac, Xbox, PlayStation, Nintendo Switch, Android, and iOS, and users can link their profiles together so they can play with the same account across all platforms. This means that someone could purchase V-Bucks through the Android and iOS apps and spend them at a later date from their console or PC. So technically those users appear to be purchasing “goods or services” that can be consumed outside of the app.
[…]
Epic has taken legal action to end Apple’s anti-competitive restrictions on mobile device marketplaces. The papers are available to read here.
From the legal filing: “Rather than tolerate this healthy competition and compete on the merits of its offering, Apple responded by removing Fortnite from sale on the App Store, which means that new users cannot download the app, and users who have already downloaded prior versions of the app from the App Store cannot update it to the latest version. This also means that Fortnite players who downloaded their app from the App Store will not receive updates to Fortnite through the App Store, either automatically or by searching the App Store for the update. Apple’s removal of Fortnite is yet another example of Apple flexing its enormous power in order to impose unreasonable restraints and unlawfully maintain its 100% monopoly over the iOS In-App Payment Processing Market.”
And the fallout has been some compelling entertainment in these quarantine times: Apple swiftly kicked Fortnite from its store, then Epic struck back with a lawsuit and arranged an in-game event to screen a video satirizing Apple’s iconic 1984 commercial to mobilize its fanbase against the company, throwing in a #FreeFortnite hashtag to boot.
[…]
Just as it did with Apple, Epic Games has now filed a lawsuit against Google over alleged antitrust violations just hours after Fortnite was dropped from the Play Store. The suit alleges that Google’s stipulations about in-app purchases constitute a monopoly in clear violation of both the Sherman Act and California’s Cartwright Act.
Epic’s complaint is nearly identical to the lawsuit against Apple that it filed earlier today following Fortnite’s removal from the company’s app store. Only the lawsuit’s introductions differ significantly, with the one against Apple referencing its aforementioned 1984 ad and the one against Google recalling the infamous “Don’t Be Evil” mantra the company was founded upon.
“Twenty-two years later, Google has relegated its motto to nearly an afterthought, and is using its size to do evil upon competitors, innovators, customers, and users in a slew of markets it has grown to monopolize,” the suit argues.
Following a year-long investigation into the company, Russia’s Federal Antimonopoly Service (FAS) has found the iPhone maker abused its dominant position in the mobile app marketplace, Reuters reports, and will order Apple to resolve multiple regulatory breaches.
The agency started investigating the tech giant after developer Kaspersky Lab filed a complaint over the rejection of its Safe Kids app from the App Store. At the time, Apple said the software put “user’s safety and privacy at risk.” The agency ruled Apple forces developers to distribute their apps through the App Store and then unlawfully blocks them. A spokesperson for Apple told Reuters the company plans to appeal the ruling.
The decision comes as Apple faces increasing scrutiny over its gatekeeping of the App Store in both the US and EU. When Tim Cook testified before the House Judiciary Antitrust Subcommittee at the end of July, lawmakers asked the executive about the company’s decisions to block some competitors from its digital marketplace. Cook was also asked about the ongoing 30 percent cut the company takes from third-party app sales, a rate many developers argue is too high. Apple was again in the spotlight earlier this month after it said it would not allow Microsoft’s Project xCloud on iOS since its App Store guidelines require developers to submit games individually for review.
In December 2019 I wrote about The Growing Problem of Malicious Relays on the Tor Network with the motivation to raise awareness and to improve the situation over time. Unfortunately, instead of improving, things have become even worse, especially when it comes to malicious Tor exit relay activity.
Tor exit relays are the last hop in the chain of three relays and the only type of relay that gets to see the connection to the actual destination chosen by the Tor Browser user. The protocol used (i.e., HTTP vs. HTTPS) determines whether a malicious exit relay can actually see and manipulate the transferred content or not.
[…]
One key question of malicious relay analysis is always: what hosting companies did they use? So here is a breakdown by internet service provider. It is mostly OVH (generally speaking, one of the largest ISPs used for Tor relays). Frantech, ServerAstra and Trabia Network are also known providers for relays. “Nice IT Services Group” looks interesting, since I’d never seen relays on this obscure network before the attacker added some of his relays there on 2020-04-16.
[…]
The full extent of their operations is unknown, but one motivation appears to be plain and simple: profit.
They perform person-in-the-middle attacks on Tor users by manipulating traffic as it flows through their exit relays. They (selectively) remove HTTP-to-HTTPS redirects to gain full access to plain unencrypted HTTP traffic without causing TLS certificate warnings. This is hard for Tor Browser users to detect unless they specifically look for the “https://” in the URL bar.
[…]
There are established countermeasures, namely HSTS Preloading and HTTPS Everywhere, but in practice many website operators do not implement them and leave their users vulnerable to this kind of attack. This kind of attack is not specific to Tor Browser. Malicious relays are just used to gain access to user traffic. To make detection harder, the malicious entity did not attack all websites equally. It appears that they are primarily after cryptocurrency related websites — namely multiple bitcoin mixer services. They replaced bitcoin addresses in HTTP traffic to redirect transactions to their wallets instead of the user provided bitcoin address. Bitcoin address rewriting attacks are not new, but the scale of their operations is. It is not possible to determine if they engage in other types of attacks.
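The countermeasures mentioned above can be reasoned about mechanically. The sketch below (hypothetical function names; a real check would actually fetch the site) shows the two properties a site operator would want to verify: a plain-HTTP request should redirect to an https:// URL, and the response should carry a sufficiently long-lived Strict-Transport-Security (HSTS) header so browsers skip the strippable redirect on future visits.

```python
# Sketch: given a response's status code and headers, decide whether the
# site upgrades users to HTTPS and pins them there with HSTS. The helper
# names and the example headers are illustrative assumptions.

def upgrades_to_https(status_code, headers):
    """True if a plain-HTTP response redirects the browser to an https:// URL."""
    location = headers.get("Location", "")
    return status_code in (301, 302, 307, 308) and location.startswith("https://")

def has_hsts(headers, min_age=15768000):  # ~6 months, a commonly cited minimum
    """True if the response carries a Strict-Transport-Security header
    with a max-age long enough to matter."""
    hsts = headers.get("Strict-Transport-Security", "")
    for directive in hsts.split(";"):
        name, _, value = directive.strip().partition("=")
        if name.lower() == "max-age" and value.isdigit():
            return int(value) >= min_age
    return False

# A site vulnerable to redirect-stripping: no redirect, no HSTS.
print(upgrades_to_https(200, {}))  # False
# A protected site: permanent redirect to HTTPS plus a long-lived HSTS policy.
print(upgrades_to_https(301, {"Location": "https://example.com/"}))  # True
print(has_hsts({"Strict-Transport-Security": "max-age=31536000; includeSubDomains"}))  # True
```

Note that HSTS only protects repeat visitors (or preloaded domains); a first visit over plain HTTP is still exposed to the redirect-removal attack described above.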
The malicious Tor relay operator discussed in this blog post controlled over 23% of the entire Tor network’s exit capacity (as of 2020-05-22).
The malicious operator demonstrated the ability to recover their capacity after initial removal attempts by Tor directory authorities.
There are multiple indicators that suggest the attacker still runs more than 10% of the Tor network’s exit capacity (as of 2020-08-08).
The recurring events of large-scale malicious Tor relay operations make it clear that current checks and approaches for bad-relay detection are insufficient to prevent such events from happening again and that the threat landscape for Tor users has changed.
Multiple specific countermeasures have been proposed to tackle the ongoing issue of malicious relay capacity.
It is up to the Tor Project and the Tor directory authorities to act to prevent further harm to Tor users.
The U.S. Department of Commerce and the European Commission have initiated discussions to evaluate the potential for an enhanced EU-U.S. Privacy Shield framework to comply with the July 16 judgment of the Court of Justice of the European Union in the Schrems II case. This judgment declared that this framework is no longer a valid mechanism to transfer personal data from the European Union to the United States.
The European Union and the United States recognize the vital importance of data protection and the significance of cross-border data transfers to our citizens and economies. We share a commitment to privacy and the rule of law, and to further deepening our economic relationship, and have collaborated on these matters for several decades.
More than 3.7 million. That’s the latest number of surveillance cameras, baby monitors, doorbells with webcams, and other internet-connected devices found left open to hijackers via two insecure communications protocols globally, we’re told.
This is up from estimates of a couple of million last year. The protocols are CS2 Network P2P, used by more than 50 million devices worldwide, and Shenzhen Yunni iLnkP2P, used by more than 3.6 million. The P2P stands for peer-to-peer. The devices’ use of the protocols cannot be switched off.
The upshot is Internet-of-Things gadgets using vulnerable iLnkP2P implementations can be discovered and accessed by strangers, particularly if the default password has not been changed or is easily guessed. Thus miscreants can abuse the protocol to spy on poorly secured cameras and other equipment dotted all over the world (CVE-2019-11219). iLnkP2P connections can also be intercepted by eavesdroppers to snoop on live video streams, login details, and other data (CVE-2019-11220).
Meanwhile, CS2 Network P2P can fall to the same sort of snooping as iLnkP2P (CVE-2020-9525, CVE-2020-9526). iLnkP2P is, we’re told, functionally identical to CS2 Network P2P though there are some differences.
The bugs were found by Paul Marrapese, who has a whole site, hacked.camera, dedicated to the vulnerabilities. “As of August 2020, over 3.7 million vulnerable devices have been found on the internet,” reads the site, which lists affected devices and advice on what to do if you have any at-risk gear. (Summary: throw it away, or try firewalling it off.)
He went public with the CS2 Network P2P flaws this month after being told in February by the protocol’s developers that the weaknesses would be addressed in version 4.0. In 2019, he tried to report the iLnkP2P flaws to developer Shenzhen Yunni, received no response, and went public with those bugs in April of that year.
At this year’s DEF CON hacking conference, held online last week, Marrapese gave an in-depth dive into the insecure protocols, which you can watch below.
“When hordes of insecure things get put on the internet, you can bet the end result is not going to be pretty,” Marrapese, a red-team member at an enterprise cloud biz, told his web audience. “A $40 purchase from Amazon is all you need to start hacking into devices.”
The protocols use UDP port 32100, and are outlined here by Fabrizio Bertone, who reverse engineered them in 2017. Essentially, they’re designed to let non-tech-savvy owners access their devices, wherever they are. Devices contact central servers to announce they’re powered up, and stay connected by sending heartbeat messages to those servers. The cloud-hosted servers thus know which IP addresses the gadgets are using, and stay in constant touch with the devices.
When a user wants to connect to their device, and starts an app to log into their gadget, the servers will tell the app how to connect to the camera, or whatever it may be, either via the local network or over the internet. If need be, the device and app will be instructed to use something called UDP hole punching to talk to each other through whatever NATs may be in their way, or via a relay if that doesn’t work. This allows the device to be used remotely by the app without the owner having to, say, change any firewall or NAT settings on their home router. One way or another, the app and device find a way to talk to each other.
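The rendezvous bookkeeping described above can be sketched as a toy model: devices heartbeat periodically, the server remembers the source address of the last heartbeat, and an app asks the server where its device currently is. The class name, timeout, and device ID below are hypothetical; this is not the real CS2 or iLnkP2P wire protocol.

```python
import time

HEARTBEAT_TIMEOUT = 60  # seconds without a heartbeat before a device is "offline"

class RendezvousServer:
    """Toy model of the cloud server that tracks device addresses via heartbeats."""

    def __init__(self):
        self._devices = {}  # device_id -> (ip, port, last_seen)

    def heartbeat(self, device_id, ip, port, now=None):
        """Record a heartbeat: the server now knows the device's current address."""
        self._devices[device_id] = (ip, port, now if now is not None else time.time())

    def locate(self, device_id, now=None):
        """Return the device's last-known (ip, port), or None if it has gone stale."""
        entry = self._devices.get(device_id)
        if entry is None:
            return None
        ip, port, last_seen = entry
        if (now if now is not None else time.time()) - last_seen > HEARTBEAT_TIMEOUT:
            return None  # no recent heartbeat: treat the device as offline
        return (ip, port)

server = RendezvousServer()
server.heartbeat("CAM-001234", "203.0.113.7", 32100, now=1000.0)
print(server.locate("CAM-001234", now=1030.0))  # ('203.0.113.7', 32100)
print(server.locate("CAM-001234", now=2000.0))  # None (heartbeats stopped)
```

The security problem Marrapese found is visible even in this toy: whoever can present (or guess) a valid device ID gets routed to the device, so predictable IDs turn the rendezvous service into a global device directory.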
“In the context of IoT, P2P is a feature that lets people connect to their device anywhere in the world without any special setup,” Marrapese said. “You have to remember, some folks don’t even know how to log into their routers, never mind forward a port.”
In the case of iLnkP2P, it turned out it was easy to calculate the unique IDs of strangers’ devices, and thus use the protocol to find and connect to them. The IDs are set at the factory and can’t be changed. Marrapese was able to enumerate millions of gadgets, and use their IP addresses to approximate their physical location, showing equipment scattered primarily across Asia, the UK and Europe, and North America. Many accept the default password, and thus can be accessed by miscreants scanning the internet for vulnerable P2P-connected cameras and the like. According to Marrapese, thousands of new iLnkP2P-connected devices appear online every month.
President Trump said Monday that TikTok will be shut down in the U.S. if it hasn’t been bought by Microsoft or another company by Sept. 15, and argued — without elaborating — that the U.S. Treasury should get “a very substantial portion” of the sale fee.
Why it matters: Trump appears to have backed off his threat to immediately ban TikTok after speaking with Microsoft CEO Satya Nadella, who said Sunday that the company will pursue discussions with TikTok’s Chinese parent company ByteDance to purchase the app in the U.S.
The big picture: TikTok has come under intense scrutiny in the U.S. due to concerns that the vast amounts of data it collects could be accessed by the Chinese government, potentially posing a national security threat.
Negotiations between TikTok and Microsoft will be overseen by a special government panel called the Committee on Foreign Investment in the United States (CFIUS), Reuters reports.
What he’s saying: Trump appeared to suggest on Monday that Microsoft would have to pay the U.S. government in order to complete the deal, but did not explain the precedent for such an action. He also argued that Microsoft should buy all of TikTok, not just 30% of the company.
“I don’t mind if, whether it’s Microsoft or somebody else, a big company, a secure company, a very American company, buy it. It’s probably easier to buy the whole thing than to buy 30% of it. How do you do 30%? Who’s going to get the name? The name is hot, the brand is hot,” Trump said.
“A very substantial portion of that price is going to have to come into the Treasury of the United States. Because we’re making it possible for this deal to happen. Right now they don’t have any rights, unless we give it to them. So if we’re going to give them the rights, it has to come into this country. It’s a little bit like the landlord/tenant,” he added.
Our thought bubble, via Axios’ Dan Primack: Trump’s inexplicable claim that part of Microsoft’s purchase price would have to go to the Treasury is skating very close to announcing extortion.
Misconfigured AWS S3 storage buckets exposing massive amounts of data to the internet are like an unexploded bomb just waiting to go off, say experts.
The team at Truffle Security said its automated search tools were able to stumble across some 4,000 open Amazon-hosted S3 buckets that included data companies would not want public – things like login credentials, security keys, and API keys.
In fact, the leak hunters say that exposed data was so common, they were able to count an average of around 2.5 passwords and access tokens per file analyzed per repository. In some cases, more than 10 secrets were found in a single file; some files had none at all.
These credentials included SQL Server passwords, Coinbase API keys, MongoDB credentials, and logins for other AWS buckets that actually were configured to ask for a password.
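A minimal sketch of the kind of pattern matching such leak hunters automate is below. The two rules (an AWS-style access key ID and a generic password assignment) are illustrative only, not Truffle Security's actual rule set; production scanners combine far larger rule sets with entropy analysis to catch random-looking tokens.

```python
import re

# Hypothetical, illustrative detection rules for scanning file contents.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "password_assignment": re.compile(r"(?i)\bpassword\s*[=:]\s*\S+"),
}

def scan_for_secrets(text):
    """Return a list of (rule_name, matched_text) pairs found in the text."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group(0)))
    return hits

# AKIAIOSFODNN7EXAMPLE is AWS's documented example key, not a real credential.
sample = "aws_key = AKIAIOSFODNN7EXAMPLE\npassword: hunter2\n"
for rule, value in scan_for_secrets(sample):
    print(rule, "->", value)
```

Run against a few thousand files per bucket, even crude rules like these surface the averages the researchers report.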
That the Truffle Security team was able to turn up roughly 4,000 insecure buckets with private information shows just how common it is for companies to leave their cloud storage instances unguarded.
Though AWS has done what it can to get customers to lock down their cloud instances, finding exposed storage buckets and databases is pretty trivial for trained security professionals to pull off.
In some cases, the leak-hunters have even partnered up with law firms, collecting referral fees when they send aggrieved customers to take part in class-action lawsuits against companies that exposed their data.
Since the end of July, Microsoft has been detecting HOSTS files that block Windows 10 telemetry servers as a ‘Severe’ security risk.
The HOSTS file is a text file located at C:\Windows\system32\drivers\etc\HOSTS and can only be edited by a program with Administrator privileges.
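Telemetry-blocking works by mapping hostnames to an unroutable address (typically 0.0.0.0 or 127.0.0.1) so connections to them fail. The sketch below parses HOSTS-format text and lists which hostnames are null-routed; the sample entries are hypothetical, not an actual Microsoft telemetry list.

```python
# Addresses commonly used to "black-hole" a hostname in a HOSTS file.
BLACKHOLE_ADDRESSES = {"0.0.0.0", "127.0.0.1", "::1"}

def blocked_hostnames(hosts_text):
    """Return hostnames that a HOSTS file points at an unroutable address."""
    blocked = []
    for line in hosts_text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and whitespace
        if not line:
            continue
        parts = line.split()
        address, names = parts[0], parts[1:]
        if address in BLACKHOLE_ADDRESSES:
            # The standard "127.0.0.1 localhost" entry is not a block.
            blocked.extend(n for n in names if n != "localhost")
    return blocked

sample = """\
# comment line
127.0.0.1   localhost
0.0.0.0     telemetry.example.com   # hypothetical entry
0.0.0.0     diagnostics.example.net
"""
print(blocked_hostnames(sample))  # ['telemetry.example.com', 'diagnostics.example.net']
```

Defender's new detection flags files containing entries like these when the null-routed names are Microsoft's own servers.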
[…]
Microsoft now detects HOSTS files that block Windows telemetry
Since the end of July, Windows 10 users began reporting that Windows Defender had started detecting modified HOSTS files as a ‘SettingsModifier:Win32/HostsFileHijack’ threat.
When detected, if a user clicks on the ‘See details’ option, they will simply be shown that they are affected by a ‘Settings Modifier’ threat with ‘potentially unwanted behavior,’ as shown below.
SettingsModifier:Win32/HostsFileHijack detection
BleepingComputer first learned about this issue from BornCity, and while Microsoft Defender detecting HOSTS hijacks is not new, it was strange to see so many people suddenly reporting the detection [1, 2, 3, 4, 5].
While a widespread infection hitting many consumers simultaneously in the past is not unheard of, it is quite unusual with the security built into Windows 10 today.
[…]
Microsoft recently updated its Microsoft Defender definitions to detect when its servers are added to the HOSTS file.
This caused users who utilize HOSTS files to block Windows 10 telemetry to suddenly see the HOSTS file hijack detection.
In our tests, some of the Microsoft hosts detected in the Windows 10 HOSTS file include the following:
If you decide to clean this threat, Microsoft will restore the HOSTS file back to its default contents.
Default Windows 10 HOSTS file
Users who intentionally modify their HOSTS file can allow this ‘threat,’ but it may enable all HOSTS modifications, even malicious ones, going forward.
So only allow the threat if you 100% understand the risks involved in doing so.
BleepingComputer has reached out to Microsoft with questions regarding this new detection.
A hacker today published a list of plaintext usernames and passwords, along with IP addresses, for more than 900 Pulse Secure VPN enterprise servers.
ZDNet, which obtained a copy of this list with the help of threat intelligence firm KELA, verified its authenticity with multiple sources in the cyber-security community.
According to a review, the list includes:
IP addresses of Pulse Secure VPN servers
Pulse Secure VPN server firmware version
SSH keys for each server
A list of all local users and their password hashes
Admin account details
Last VPN logins (including usernames and cleartext passwords)
VPN session cookies
Image: ZDNet
Bank Security, a threat intelligence analyst specializing in financial crime who spotted the list earlier today and shared it with ZDNet, made an interesting observation about the list and its content.
The security researcher noted that all the Pulse Secure VPN servers included in the list were running a firmware version vulnerable to the CVE-2019-11510 vulnerability.
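That vulnerability check is essentially a version comparison against the first patched release on each firmware branch. The sketch below assumes Pulse's "major.minorRrelease.build" version scheme (e.g. "9.0R3.4"); the patched-release table is an assumption to be verified against the vendor's advisory for CVE-2019-11510, not an authoritative list.

```python
import re

# Assumed: branch (major, minor) -> first fixed release on that branch.
# Verify these against the vendor advisory before relying on them.
PATCHED = {(9, 0): (3, 4), (8, 3): (7, 1), (8, 2): (12, 1), (8, 1): (15, 1)}

def parse_version(text):
    """Parse '9.0R3.4' into ((9, 0), (3, 4)); return None if unrecognized."""
    m = re.fullmatch(r"(\d+)\.(\d+)R(\d+)\.(\d+)", text)
    if not m:
        return None
    major, minor, release, build = map(int, m.groups())
    return (major, minor), (release, build)

def looks_vulnerable(version_text):
    """True if the firmware predates the first patched release on its branch."""
    parsed = parse_version(version_text)
    if parsed is None:
        return False  # unknown scheme: can't judge
    branch, release = parsed
    fixed = PATCHED.get(branch)
    return fixed is not None and release < fixed

print(looks_vulnerable("9.0R3.3"))  # True  (predates the assumed fix, 9.0R3.4)
print(looks_vulnerable("9.0R3.4"))  # False (patched release)
```

Since the leaked list records each server's firmware version, a check like this is all it takes to confirm Bank Security's observation across all 900+ entries.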
Bank Security believes that the hacker who compiled this list scanned the entire internet IPv4 address space for Pulse Secure VPN servers, used an exploit for the CVE-2019-11510 vulnerability to gain access to systems, dump server details (including usernames and passwords), and then collected all the information in one central repository.
Based on timestamps in the list (a collection of folders), the dates of the scans (or the date the list was compiled) appear to be between June 24 and July 8, 2020.
With over 3 billion users globally, smartphones are an integral, almost inseparable part of our day-to-day lives.
As the mobile market continues to grow, vendors race to provide new features, new capabilities and better technological innovations in their latest devices. To support this relentless drive for innovation, vendors often rely on third parties to provide the required hardware and software for phones. One of the most common third-party solutions is the Digital Signal Processor unit, commonly known as DSP chips.
In this research dubbed “Achilles” we performed an extensive security review of a DSP chip from one of the leading manufacturers: Qualcomm Technologies. Qualcomm provides a wide variety of chips that are embedded into devices that make up over 40% of the mobile phone market, including high-end phones from Google, Samsung, LG, Xiaomi, OnePlus and more.
More than 400 vulnerable pieces of code were found within the DSP chip we tested, and these vulnerabilities could have the following impact on users of phones with the affected chip:
Attackers can turn the phone into a perfect spying tool, without any user interaction required – The information that can be exfiltrated from the phone includes photos, videos, call recordings, real-time microphone data, GPS and location data, etc.
Attackers may be able to render the mobile phone constantly unresponsive – Making all the information stored on the phone permanently unavailable, including photos, videos, contact details, etc. – in other words, a targeted denial-of-service attack.
Malware and other malicious code can completely hide their activities and become un-removable.
We disclosed these findings to Qualcomm, who acknowledged them, notified the relevant device vendors, and assigned the following CVEs: CVE-2020-11201, CVE-2020-11202, CVE-2020-11206, CVE-2020-11207, CVE-2020-11208 and CVE-2020-11209.
New York state is introducing a bill that would make it easier to sue big tech companies for alleged abuses of their monopoly powers.
New York is America’s financial center and one of its most important tech hubs. If successfully passed, the law could serve as a model for future legislation across the country. It also comes as a federal committee is conducting an antitrust investigation into tech giants amid concerns that their unmatched market power is suppressing competition.
Bill S8700A, now being discussed by New York’s senate consumer protection committee, would update New York’s antiquated antitrust laws for the 21st century, said the bill’s sponsor, Senator Mike Gianaris.
“Their power has grown to dangerous levels and we need to start reining them in,” he said.
New York’s antitrust laws currently require two players to collaborate in a conspiracy to conduct anticompetitive behavior such as price setting. In other cases, companies may underprice products to the point where they even incur a loss just to drive others out of the market – anticompetitive behavior that New York’s laws would currently struggle to prosecute.
“Our laws on antitrust in New York are a century old and they were built for a completely different economy,” said Gianaris. “Much of the problem today in the 21st century is unilateral action by some of these behemoth tech companies and this bill would allow, for the first time, New York to engage in antitrust enforcement for unilateral action.”
The bill will probably be discussed when New York’s senate returns to work in August but is unlikely to pass before next year. It has the support of New York’s attorney general, Letitia James.