Hackers had access to dashboards used to remotely manage and control thousands of credit card payment terminals manufactured by digital payments giant Wiseasy, a cybersecurity startup told TechCrunch.
Wiseasy is a brand you might not have heard of, but it’s a popular Android-based payment terminal maker whose devices are used in restaurants, hotels, retail outlets and schools across the Asia-Pacific region. Through its Wisecloud cloud service, Wiseasy can remotely manage, configure and update customer terminals over the internet.
But Wiseasy employee passwords used for accessing Wiseasy’s cloud dashboards — including an “admin” account — were found on a dark web marketplace actively used by cybercriminals, according to the startup.
Youssef Mohamed, chief technology officer at pen-testing and dark web monitoring startup Buguard, told TechCrunch that the passwords were stolen by malware on employees’ computers. Mohamed said two cloud dashboards were exposed; neither was protected with basic security features, like two-factor authentication, and together they allowed hackers to access nearly 140,000 Wiseasy payment terminals around the world.
[…]
Buguard said it first contacted Wiseasy about the compromised dashboards in early July, but its efforts to disclose the compromise went nowhere: meetings with executives were scheduled, then canceled without warning, and, according to Mohamed, the company declined to say if or when the cloud dashboards would be secured.
Screenshots of the dashboards seen by TechCrunch show an “admin” user with remote access to Wiseasy payment terminals, including the ability to lock the device and remotely install and remove apps. The dashboard also allowed anyone to view names, phone numbers, email addresses and access permissions for Wiseasy dashboard users, including the ability to add new users.
Another dashboard view also shows the Wi-Fi name and plaintext password of the network that payment terminals are connected to.
Mohamed said anyone with access to the dashboards could control Wiseasy payment terminals and make configuration changes.
In a setback for Visa in a case alleging the payment processor is liable for the distribution of child pornography on Pornhub and other sites operated by parent company MindGeek, a federal judge ruled that it was reasonable to conclude that Visa knowingly facilitated the criminal activity.
On Friday, July 29, U.S. District Judge Cormac Carney of the U.S. District Court of the Central District of California issued a decision in the Fleites v. MindGeek case, denying Visa’s motion to dismiss the claim it violated California’s Unfair Competition Law — which prohibits unlawful, unfair or fraudulent business acts and practices — by processing payments for child porn. (A copy of the decision is available at this link.)
In the ruling, Carney held that the plaintiff “adequately alleged” that Visa engaged in a criminal conspiracy with MindGeek to monetize child pornography. Specifically, he wrote, “Visa knew that MindGeek’s websites were teeming with monetized child porn”; that there was a “criminal agreement to financially benefit from child porn that can be inferred from [Visa’s] decision to continue to recognize MindGeek as a merchant despite allegedly knowing that MindGeek monetized a substantial amount of child porn”; and that “the court can comfortably infer that Visa intended to help MindGeek monetize child porn” by “knowingly provid[ing] the tool used to complete the crime.”
“When MindGeek decides to monetize child porn, and Visa decides to continue to allow its payment network to be used for that goal despite knowledge of MindGeek’s monetization of child porn, it is entirely foreseeable that victims of child porn like plaintiff will suffer the harms that plaintiff alleges,” Carney wrote.
In a statement, a Visa spokesperson said: “Visa condemns sex trafficking, sexual exploitation and child sexual abuse materials as repugnant to our values and purpose as a company. This pre-trial ruling is disappointing and mischaracterizes Visa’s role and its policies and practices. Visa will not tolerate the use of our network for illegal activity. We continue to believe that Visa is an improper defendant in this case.”
A rep for MindGeek provided this statement: “At this point in the case, the court has not yet ruled on the veracity of the allegations, and is required to assume all of the plaintiff’s allegations are true and accurate. When the court can actually consider the facts, we are confident the plaintiff’s claims will be dismissed for lack of merit. MindGeek has zero tolerance for the posting of illegal content on its platforms, and has instituted the most comprehensive safeguards in user-generated platform history.”
The company’s statement continued, “We have banned uploads from anyone who has not submitted government-issued ID that passes third-party verification, eliminated the ability to download free content, integrated several leading technological platform and content moderation tools, instituted digital fingerprinting of all videos found to be in violation of our Non-Consensual Content and CSAM [child sexual abuse material] Policies to help protect against removed videos being reposted, expanded our moderation workforce and processes, and partnered with dozens of non-profit organizations around the world. Any insinuation that MindGeek does not take the elimination of illegal material seriously is categorically false.”
Babel Finance, the Hong Kong-based crypto lender, apparently had designs on its worldwide user base’s crypto beyond just borrowing and lending. It seems to have been doing what everyone else does with crypto: rapidly speculating and trying to make “line go up.” Of course, all that changed when the line no longer went up.
The Block reported, based on restructuring proposal documents, that Babel Finance had lost 8,000 bitcoin and 56,000 ether in June, worth close to $280 million, though of course the price is constantly fluctuating. The company had apparently been conducting proprietary trading with customers’ funds. Based on the reporting, it remains unclear whether users were aware their crypto was being used this way.
Built by Sony AI, a research lab launched by the company in 2020, Gran Turismo Sophy is a computer program trained to control racing cars inside the world of Gran Turismo, a video game known for its super-realistic simulations of real vehicles and tracks. In a series of events held behind closed doors last year, Sony put its program up against the best humans on the professional sim-racing circuit.
What they discovered during those racetrack battles—and the ones that followed—could help shape the future of machines that work alongside humans, or join us on the roads.
[…]
Sony soon learned that speed alone wasn’t enough to make GT Sophy a winner. The program outpaced all human drivers on an empty track, setting superhuman lap times on three different virtual courses. Yet when Sony tested GT Sophy in a race against multiple human drivers, where intelligence as well as speed is needed, GT Sophy lost. The program was at times too aggressive, racking up penalties for reckless driving, and at other times too timid, giving way when it didn’t need to.
Sony regrouped, retrained its AI, and set up a rematch in October. This time GT Sophy won with ease. What made the difference? It’s true that Sony came back with a larger neural network, giving its program more capabilities to draw from on the fly. But ultimately, the difference came down to giving GT Sophy something that Peter Wurman, head of Sony AI America, calls “etiquette”: the ability to balance its aggression and timidity, picking the most appropriate behavior for the situation at hand.
This is also what makes GT Sophy relevant beyond Gran Turismo. Etiquette between drivers on a track is a specific example of the kind of dynamic, context-aware behavior that robots will be expected to have when they interact with people, says Wurman.
An awareness of when to take risks and when to play it safe would be useful for AI that is better at interacting with people, whether it be on the manufacturing floor, in home robots, or in driverless cars.
“I don’t think we’ve learned general principles yet about how to deal with human norms that you have to respect,” says Wurman. “But it’s a start and hopefully gives us some insight into this problem in general.”
Twitter has published its 20th transparency report, and the details still aren’t reassuring to those concerned about abuses of personal info. The social network saw “record highs” in the number of account data requests during the July-December 2021 reporting period, with 47,572 legal demands on 198,931 accounts. The media in particular faced much more pressure. Government demands for data from verified news outlets and journalists surged 103 percent compared to the last report, with 349 accounts under scrutiny.
The largest slice of requests targeting the news industry came from India (114), followed by Turkey (78) and Russia (55). Governments succeeded in withholding 17 tweets.
As in the past, US demands represented a disproportionately large chunk of the overall volume. The country accounted for 20 percent of all worldwide account info requests, and those requests covered 39 percent of all specified accounts. Russia is still the second-largest requester with 18 percent of volume, even if its demands dipped 20 percent during the six-month timeframe.
The company said it was still denying or limiting access to info when possible. It denied 31 percent of US data requests, and either narrowed or shut down 60 percent of global demands. Twitter also opposed 29 civil attempts to identify anonymous US users, citing First Amendment reasons. It sued in two of those cases, and has so far had success with one of those suits. There hasn’t been much success in reporting on national security-related requests in the US, however, and Twitter is still hoping to win an appeal that would let it share more details.
You can find AI that creates new images, but what if you want to fix an old family photo? You might have a no-charge option. Louis Bouchard and PetaPixel have drawn attention to a free tool recently developed by Tencent researchers, GFP-GAN (Generative Facial Prior-Generative Adversarial Network), that can restore damaged and low-resolution portraits. The technology merges info from two AI models to fill in a photo’s missing areas with realistic detail in a few seconds, all while maintaining high accuracy and quality.
Conventional methods fine-tune an existing AI model to restore images by gauging differences between the artificial and real photos. That frequently leads to low-quality results, the scientists said. The new approach uses a pre-trained version of an existing model (NVIDIA’s StyleGAN-2) to inform the team’s own model at multiple stages during the image generation process. The technique aims to preserve the “identity” of people in a photo, with a particular focus on facial features like eyes and mouths.
You can try a demo of GFP-GAN for free. The creators have also posted their code to let anyone implement the restoration tech in their own projects.
This project is still bound by the limitations of current AI. While it’s surprisingly accurate, it’s making educated guesses about missing content. The researchers warned that you might see a “slight change of identity” and a lower resolution than you might like. Don’t rely on this to print a poster-sized photo of your grandparents, folks. All the same, the work here is promising — it hints at a future where you can easily rescue images that would otherwise be lost to the ravages of time.
Energy, mass, velocity. These three variables make up Einstein’s iconic equation E=mc². But how did Einstein know about these concepts in the first place? A precursor step to understanding physics is identifying relevant variables. Without the concepts of energy, mass, and velocity, not even Einstein could discover relativity. But can such variables be discovered automatically? Doing so could greatly accelerate scientific discovery.
This is the question that researchers at Columbia Engineering posed to a new AI program. The program was designed to observe physical phenomena through a video camera, then try to search for the minimal set of fundamental variables that fully describe the observed dynamics. The study was published on July 25 in Nature Computational Science.
The researchers began by feeding the system raw video footage of phenomena for which they already knew the answer. For example, they fed a video of a swinging double pendulum known to have exactly four “state variables”—the angle and angular velocity of each of the two arms. After a few hours of analysis, the AI produced the answer: 4.7.
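For readers unfamiliar with the term, a “state variable” is one coordinate of the minimal vector you need to evolve a system forward in time. A toy sketch makes the double pendulum’s four-number state concrete; note the dynamics below are deliberately simplified (uncoupled, per-arm pendulum equations rather than the real chaotic coupled system the researchers filmed), since the point here is only the size of the state:

```python
import math

# State of a double pendulum: exactly four numbers, as in the article,
# namely the angle and angular velocity of each of the two arms.
# The update rule below is a simplified, uncoupled stand-in for the
# real coupled dynamics; it exists only to show the state's shape.
g, L = 9.81, 1.0  # gravity (m/s^2) and arm length (m)

def step(state, dt=0.001):
    th1, w1, th2, w2 = state
    return (th1 + w1 * dt, w1 - (g / L) * math.sin(th1) * dt,
            th2 + w2 * dt, w2 - (g / L) * math.sin(th2) * dt)

state = (0.5, 0.0, -0.3, 0.0)  # initial angles (rad) and angular velocities
for _ in range(1000):          # integrate forward one simulated second
    state = step(state)

print(len(state))  # 4: the "minimal set of variables" for this system
```

The AI’s task was essentially the inverse of this sketch: given only pixels, recover how many numbers like these are needed at all.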
The image shows a chaotic swing stick dynamical system in motion. The work aims to identify and extract the minimum number of state variables needed to describe such a system directly from high-dimensional video footage. Credit: Yinuo Qin/Columbia Engineering
“We thought this answer was close enough,” said Hod Lipson, director of the Creative Machines Lab in the Department of Mechanical Engineering, where the work was primarily done. “Especially since all the AI had access to was raw video footage, without any knowledge of physics or geometry. But we wanted to know what the variables actually were, not just their number.”
The researchers then proceeded to visualize the actual variables that the program identified. Extracting the variables themselves was not easy, since the program cannot describe them in any intuitive way that would be understandable to humans. After some probing, it appeared that two of the variables the program chose loosely corresponded to the angles of the arms, but the other two remain a mystery.
“We tried correlating the other variables with anything and everything we could think of: angular and linear velocities, kinetic and potential energy, and various combinations of known quantities,” explained Boyuan Chen, Ph.D., now an assistant professor at Duke University, who led the work. “But nothing seemed to match perfectly.” The team was confident that the AI had found a valid set of four variables, since it was making good predictions, “but we don’t yet understand the mathematical language it is speaking,” he explained.
After validating the method on a number of other physical systems with known solutions, the researchers fed in videos of systems for which they did not know the explicit answer. The first videos featured an “air dancer” undulating in front of a local used car lot. After a few hours of analysis, the program returned eight variables. A video of a lava lamp also produced eight variables. They then fed a video clip of flames from a holiday fireplace loop, and the program returned 24 variables.
A particularly interesting question was whether the set of variables was unique for every system, or whether a different set was produced each time the program was restarted.
“I always wondered, if we ever met an intelligent alien race, would they have discovered the same physics laws as we have, or might they describe the universe in a different way?” said Lipson. “Perhaps some phenomena seem enigmatically complex because we are trying to understand them using the wrong set of variables. In the experiments, the number of variables was the same each time the AI restarted, but the specific variables were different each time. So yes, there are alternative ways to describe the universe and it is quite possible that our choices aren’t perfect.”
The researchers believe that this sort of AI can help scientists uncover complex phenomena for which theoretical understanding is not keeping pace with the deluge of data—areas ranging from biology to cosmology. “While we used video data in this work, any kind of array data source could be used—radar arrays, or DNA arrays, for example,” explained Kuang Huang, Ph.D., who co-authored the paper.
The work is part of Lipson and Fu Foundation Professor of Mathematics Qiang Du’s decades-long interest in creating algorithms that can distill data into scientific laws. Past software systems, such as Lipson and Michael Schmidt’s Eureqa software, could distill freeform physical laws from experimental data, but only if the variables were identified in advance. But what if the variables are yet unknown?
Lipson, who is also the James and Sally Scapa Professor of Innovation, argues that scientists may be misinterpreting or failing to understand many phenomena simply because they don’t have a good set of variables to describe the phenomena.
“For millennia, people knew about objects moving quickly or slowly, but it was only when the notion of velocity and acceleration was formally quantified that Newton could discover his famous law of motion F=MA,” Lipson noted. Variables describing temperature and pressure needed to be identified before laws of thermodynamics could be formalized, and so on for every corner of the scientific world. The variables are a precursor to any theory.
“What other laws are we missing simply because we don’t have the variables?” asked Du, who co-led the work.
The paper was also co-authored by Sunand Raghupathi and Ishaan Chandratreya, who helped collect the data for the experiments.
More information: Boyuan Chen et al, Automated discovery of fundamental variables hidden in experimental data, Nature Computational Science (2022). DOI: 10.1038/s43588-022-00281-6
For a little over 12 hours on 26-27 July, a network operated by Russia’s Rostelecom started announcing routes for part of Apple’s network. The effect was that Internet users in parts of the world trying to connect to Apple’s services may have been redirected to the Rostelecom network. Apple Engineering appears to have been successful in reducing the impact, and eventually Rostelecom stopped sending the false route announcements. This event demonstrated, though, how Apple could further protect its networks by using Route Origin Authorizations (ROAs).
We are not aware of any information yet from Apple that indicates what, if any, Apple services were affected. We also have not seen any information from Rostelecom about whether this was a configuration mistake or a deliberate action.
Let’s dig into what we know so far about what happened, and how Route Origin Authorization (ROA) can help prevent these kinds of events.
Around 21:25 UTC on 26 July 2022, Rostelecom’s AS12389 network started announcing 17.70.96.0/19. This prefix is part of Apple’s 17.0.0.0/8 block; usually, Apple only announces the larger 17.0.0.0/9 block, not this more specific prefix.
When the routes a network is announcing are not covered by valid Route Origin Authorization (ROA), the only option during a route hijack is to announce more specific routes. This is exactly what Apple Engineering did today; upon learning about the hijack, it started announcing 17.70.96.0/21 to direct traffic toward AS714.
RIPE RIS data, captured via pybgpkit tool
It is not clear what AS12389 was doing, as it announced the same prefix at the same time with AS-path prepending as well.
RIPE RIS data, captured via pybgpkit tool
In the absence of any credible data to filter out any possible hijack attempts, the route announced by AS12389 was propagated across the globe. The incident was picked up by BGPstream.com (Cisco Works) and GRIP Internet Intel (GA Tech).
Apple must have received the alert too. Whatever mitigation techniques they tried didn’t stop the Rostelecom announcement and so Apple announced the more specific route. As per the BGP path selection process, the longest-matching route is preferred first. Prefix length supersedes all other route attributes. Apple started announcing 17.70.96.0/21 to direct traffic toward AS714.
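The path-selection point (the most specific, longest-matching prefix wins before any other attribute is considered) can be sketched with Python’s standard ipaddress module; the route table below is a simplified view of the announcements described in this incident:

```python
import ipaddress

def best_route(routes, destination):
    """Return the (prefix, origin AS) pair whose prefix covers `destination`
    with the longest prefix length, mirroring BGP's preference for the
    most specific route."""
    dest = ipaddress.ip_address(destination)
    covering = [
        (ipaddress.ip_network(prefix), origin_as)
        for prefix, origin_as in routes
        if dest in ipaddress.ip_network(prefix)
    ]
    # Largest prefixlen (most specific prefix) wins before any other attribute.
    return max(covering, key=lambda c: c[0].prefixlen)

# Simplified view of the announcements during the incident.
routes = [
    ("17.0.0.0/9", 714),       # Apple's usual announcement
    ("17.70.96.0/19", 12389),  # Rostelecom's bogus announcement
    ("17.70.96.0/21", 714),    # Apple's more-specific mitigation
]

prefix, origin = best_route(routes, "17.70.97.1")
print(prefix, origin)  # 17.70.96.0/21 714: traffic flows back to Apple
```

Drop the /21 entry from the table and the same lookup lands on Rostelecom’s /19, which is exactly why announcing a more specific route works as an emergency fix when no ROA exists to invalidate the hijacked announcement.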
Researchers have unpacked a major cybersecurity find: a malicious UEFI-based rootkit used in the wild since 2016 to ensure computers remain infected even if the operating system is reinstalled or the hard drive is completely replaced.
The rootkit compromises the UEFI, the low-level and highly opaque chain of firmware required to boot up nearly every modern computer. As the software that bridges a PC’s device firmware with its operating system, the UEFI (short for Unified Extensible Firmware Interface) is an OS in its own right. It’s located in an SPI-connected flash storage chip soldered onto the computer motherboard, making the code difficult to inspect or patch. Because it’s the first thing to run when a computer is turned on, it influences the OS, security apps, and all other software that follows.
Exotic, yes. Rare, no.
On Monday, researchers from Kaspersky profiled CosmicStrand, the security firm’s name for a sophisticated UEFI rootkit that the company detected and obtained through its antivirus software. The find is among only a handful of such UEFI threats known to have been used in the wild. Until recently, researchers assumed that the technical demands required to develop UEFI malware of this caliber put it out of reach of most threat actors. Now, with Kaspersky attributing CosmicStrand to an unknown Chinese-speaking hacking group with possible ties to cryptominer malware, this type of malware may not be so rare after all.
“The most striking aspect of this report is that this UEFI implant seems to have been used in the wild since the end of 2016—long before UEFI attacks started being publicly described,” Kaspersky researchers wrote. “This discovery begs a final question: If this is what the attackers were using back then, what are they using today?”
While researchers from fellow security firm Qihoo360 reported on an earlier variant of the rootkit in 2017, Kaspersky and most other Western-based security firms didn’t take notice. Kaspersky’s newer research describes in detail how the rootkit—found in firmware images of some Gigabyte or Asus motherboards—is able to hijack the boot process of infected machines. The technical underpinnings attest to the sophistication of the malware.
The United States’ federal court system “faced an incredibly significant and sophisticated cyber security breach, one which has since had lingering impacts on the department and other agencies.”
That quote comes from congressional representative Jerrold Lewis Nadler, who uttered it on Thursday in his introductory remarks to a House Committee on the Judiciary hearing conducting oversight of the Department of Justice National Security Division (NSD).
Nadler segued into the mention of the breach after mentioning the NSD’s efforts to defend America against external actors that seek to attack its system of government. He commenced his remarks on the attack at the 4:40 mark of the hearing video.
The rep’s remarks appear to refer to the January 2021 disclosure by James C. Duff, who at the time served as secretary of the Judicial Conference of the United States, of “an apparent compromise” of confidentiality in the Judiciary’s Case Management/Electronic Case Files system (CM/ECF).
That incident may have exploited vulnerabilities in CM/ECF and “greatly risk compromising highly sensitive non-public documents stored on CM/ECF, particularly sealed filings.”
Such documents are filed by the US government in cases that touch on national security, and therefore represent valuable intelligence.
The star witness at the hearing, assistant attorney general for National Security Matthew Olsen, said the Department of Justice continues to investigate the matter, adding the attack has not impacted his unit’s work.
But Olsen was unable – or unwilling – to describe the incident in detail.
However, a report in Politico quoted an unnamed aide as saying “the sweeping impact it may have had on the operation of the Department of Justice is staggering.”
For now, the extent of that impact, and its cause, are not known.
The nature of the vulnerability and the methods used to exploit it are also unknown, but Nadler suggested it is not related to the SolarWinds attack that the Judiciary has already acknowledged.
Olsen said he would update the Committee with further information once that’s possible. Representatives in the hearing indicated they await those details with considerable interest.
The Cyberspace Administration of China has fined ride-sharing company DiDi Global ¥8.026 billion ($1.2 billion) for more than 64 billion illegal acts of data collection that it says were carried out maliciously and threatened national security.
Yes, we do mean billion. As in a thousand million.
The Administration enumerated DiDi’s indiscretions as follows:
53.976 billion pieces of information indicating travellers’ intentions were analyzed without informing passengers;
8.323 billion pieces of information were accessed from users’ clipboards and lists of apps;
1.538 billion pieces of information about the cities in which users live were analyzed without permission;
304 million pieces of information describing users’ place of work;
167 million user locations were gathered when users evaluated the DiDi app while it ran in the background;
153 million pieces of information revealing the drivers’ home and business location;
107 million pieces of passenger facial recognition information;
57.8 million pieces of driver’s ID number information in plain text;
53.5092 million pieces of age information;
16.3356 million pieces of occupation information;
11.96 million screenshots were harvested from users’ smartphones;
1.3829 million pieces of family relationship information;
142,900 items describing drivers’ education.
The Administration (CAC) also found DiDi asked for irrelevant permissions on users’ smartphones and did not give an accurate or clear explanation for processing 19 types of personal information.
The fine levied on DiDi is not a run-of-the-mill penalty. The Administration’s Q&A about the incident points out that the fine is a special administrative penalty because DiDi flouted China’s Network Security Law, Data Security Law, and Personal Information Protection Law – and did so for seven years in some cases.
The Q&A adds that China has in recent years introduced many data privacy and information security laws, so it’s not as if DiDi did not have good indicators that it needed to pay attention to such matters.
The fine is around 4.7 percent of DiDi’s annual revenue – just short of the five percent cap on such fines available to Chinese regulators.
Atlassian has warned users of its Bamboo, Bitbucket, Confluence, Fisheye, Crucible, and Jira products that a pair of critical-rated flaws threaten their security.
One of the flaws – CVE-2022-26136 – is described as an arbitrary Servlet Filter bypass that means an attacker could send a specially crafted HTTP request to bypass custom Servlet Filters used by third-party apps to enforce authentication.
The scary part is that the flaw allows a remote, unauthenticated attacker to bypass authentication used by third-party apps. The really scary part is that Atlassian doesn’t have a definitive list of apps that could be impacted.
“Atlassian has released updates that fix the root cause of this vulnerability, but has not exhaustively enumerated all potential consequences of this vulnerability,” it added.
The same CVE can also be exploited in a cross-site scripting attack: a specially crafted HTTP request can bypass the Servlet Filter used to validate legitimate Atlassian Gadgets. “An attacker that can trick a user into requesting a malicious URL can execute arbitrary JavaScript in the user’s browser,” Atlassian explains.
The second flaw – CVE-2022-26137 – is a cross-origin resource sharing (CORS) bypass.
Atlassian explains it as follows: “Sending a specially crafted HTTP request can invoke the Servlet Filter used to respond to CORS requests, resulting in a CORS bypass. An attacker that can trick a user into requesting a malicious URL can access the vulnerable application with the victim’s permissions.”
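Atlassian has not published exploit details, but the general bug class behind a Servlet Filter bypass is easy to illustrate: an authentication filter and the request router parse the URL path differently, so a crafted path reaches a protected handler without ever matching the filter’s check. The sketch below is a hypothetical Python analogue of that mismatch, not Atlassian’s actual code:

```python
# Conceptual sketch of a filter-bypass bug class: the auth "filter" and
# the request router disagree about how to parse a path. All names here
# are illustrative, not taken from any real product.

def auth_filter(path, authenticated):
    # Naive filter: only guards paths that literally start with "/secure/".
    if path.startswith("/secure/") and not authenticated:
        raise PermissionError("login required")

def route(path):
    # Router normalizes the path (collapses "//") before dispatching.
    normalized = path.replace("//", "/")
    if normalized.startswith("/secure/"):
        return "sensitive data"
    return "public page"

def handle(path, authenticated=False):
    auth_filter(path, authenticated)  # the filter sees the RAW path
    return route(path)                # the router sees the NORMALIZED path

print(handle("/secure/report", authenticated=True))  # sensitive data
print(handle("//secure/report"))  # sensitive data, no auth: the bypass
```

Because the filter checks the raw path while the router normalizes it, the request `//secure/report` sails past authentication yet still dispatches to the protected handler; real Servlet Filter bypasses exploit the same kind of parsing disagreement.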
Confluence users have another flaw to worry about: CVE-2022-26138 reveals that one of its Confluence apps has a hard-coded password in place to help migrations to the cloud. It explained:
Android developers who distribute apps on the Google Play store can now use third-party payment systems in many European countries. The measure applies to the European Economic Area (EEA), which comprises European Union states as well as Iceland, Liechtenstein and Norway. However, the policy will not apply to gaming apps, which still need to use Google Play’s own billing system for the time being.
Google is making the move after the EU’s legislative arm, the European Commission, passed the Digital Markets Act (DMA) this month. Along with the Digital Services Act, the law is designed to rein in the power of big tech by, for instance, prohibiting major platform holders from giving their own systems preferable treatment.
The DMA isn’t expected to come into effect until sometime in 2024. However, Google’s director of EU government affairs and public policy, Estelle Werth, wrote in a blog post that the company is “launching this program now to allow us to work closely with our developer partners and ensure our compliance plans serve the needs of our shared users and the broader ecosystem.”
The move partially reverses a policy that required all in-app payments to be processed through the Play Store’s billing system. Developers who opt for a different billing system won’t be able to avoid Google’s fees entirely; however, Google will lower the service fees it charges them by three percentage points.
Google says that 99 percent of developers qualify for a fee of 15 percent or less. The others typically pay 30 percent. The fees Google charges would drop to 12 percent (or lower) or 27 percent, respectively, if they select a third-party billing system.
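The fee change is a flat three-percentage-point discount, so the tiers above work out as follows (the function name and the integer-cents convention are illustrative, chosen to keep the arithmetic exact):

```python
def google_fee(sale_cents, rate_pct, third_party_billing=False):
    """Google's service fee on a Play Store sale, in cents.
    Rates are whole percentage points so the arithmetic stays exact."""
    if third_party_billing:
        rate_pct -= 3  # EEA discount for using another billing system
    return sale_cents * rate_pct // 100

sale = 10_000  # a $100.00 sale, in cents
print(google_fee(sale, 15))        # 1500: standard 15% tier
print(google_fee(sale, 15, True))  # 1200: drops to 12%
print(google_fee(sale, 30))        # 3000: standard 30% tier
print(google_fee(sale, 30, True))  # 2700: drops to 27%
```

Note the discount trims Google’s cut, but the developer still owes whatever the third-party billing provider charges on top.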
A Russian court fined Google $374 million on Monday for its failure to remove prohibited content, according to the country’s internet watchdog Roskomnadzor.
The Tagansky District Court of Moscow took exception to YouTube content it claimed contained “fakes about the course of a special military operation in Ukraine” and discredited Russia’s armed forces. The court also claimed some material promoted extremism and/or terrorism. Google also stands convicted of an “indifferent attitude to the life and health of minors” that the court feels is worthy of protest by Russian citizens.
The court also alleged Google systemically violated Russian law.
As punishment, Google users will receive warnings of the company’s alleged misdeeds, and won’t be permitted to buy ads tied to Google Search results or on YouTube.
A London court on Tuesday authorized a lawsuit that seeks to have Google pay £920 million ($1.1 billion) for overcharging customers for app store purchases.
Filed as a class action on behalf of 19.5 million UK citizens, the suit alleges Google charged commission fees of up to 30 percent on app sales. Consumer rights advocate Liz Coll, who previously served as digital policy manager at consumer rights organization Citizens Advice, brought the lawsuit, alleging Google has violated both EU and UK competition laws.
Representatives for the claimant group told Reuters that a detailed judgment has yet to be published, but the initial filing made in July 2021 specifies that Google violated multiple sections of the Competition Act 1998.
For incidents happening before the UK left the EU, the suit also alleged violations of Article 102 of the Treaty on the Functioning of the EU, which covers abuse of dominant market positions.
The Car

Last summer I bought a 2021 Hyundai Ioniq SEL. It is a nice fuel-efficient hybrid with a decent number of features, like wireless Android Auto/Apple CarPlay, wireless phone charging, heated seats, and a sunroof. One thing I particularly liked about this vehicle was the In-Vehicle Infotainment (IVI) system. As I mentioned before, it had wireless Android Auto, which seemed to be uncommon in this price range, and it had pretty nice, smooth animations in its menus, which told me the CPU/GPU in it wasn’t completely underpowered, or at least the software it was running wasn’t super bloated.
[greenluigi1] bought a Hyundai Ioniq car, and then, to our astonishment, absolutely demolished the Linux-based head unit firmware. By that, we mean that he bypassed all of the firmware update authentication mechanisms, reverse-engineered the firmware updates, and created subversive update files that gave him a root shell on his own unit. Then, he reverse-engineered the app framework running the dash and created his own app. Not just for show – after hooking into the APIs available to the dash and accessible through header files, he was able to monitor car state from his app, and even lock/unlock doors. In the end, the dash got completely conquered – and he even wrote a tutorial showing how anyone can compile their own apps for the Hyundai Ioniq D-Audio 2V dash.
In this series of write-ups [greenluigi1] put together for us, he walks us through the entire hacking process — and they’re a real treat to read. He covers a wide variety of things: breaking encryption of .zip files, reprogramming efused MAC addresses on USB-Ethernet dongles, locating keys for encrypted firmware files, carefully placing backdoors into a Linux system, fighting cryptic C++ compilation errors and flag combinations while cross-compiling the software for the head unit, making plugins for proprietary, undocumented frameworks, and many other reverse-engineering aspects you’ll encounter when domesticating consumer hardware.
This marks a hacker’s victory over yet another computer in our life that we aren’t meant to modify, and a meticulously documented victory at that — helping each one of us fight back against “unmodifiable” gadgets like these. After reading these tutorials, you’ll leave with a good few new techniques under your belt. We’ve covered head unit hacks like these before, for instance, for Subaru and Nissan, and each time it was a journey to behold.
Investigators raised alarm bells when they learned Homeland Security bureaus were buying phone location data to effectively bypass the Fourth Amendment requirement for a search warrant, and now it’s clearer just how extensive those purchases were. TechCrunch notes the American Civil Liberties Union has obtained records linking Customs and Border Protection, Immigration and Customs Enforcement and other DHS divisions to purchases of roughly 336,000 phone location points from the data broker Venntel. The info represents just a “small subset” of raw data from the southwestern US, and includes a burst of 113,654 points collected over just three days in 2018.
The dataset, delivered through a Freedom of Information Act request, also outlines the agencies’ attempts to justify the bulk data purchases. Officials maintained that users voluntarily offered the data, and that it included no personally identifying information. As TechCrunch explains, though, that’s not necessarily accurate. Phone owners aren’t necessarily aware they opted in to location sharing, and likely didn’t realize the government was buying that data. Moreover, the data was still tied to specific devices — it wouldn’t have been difficult for agents to link positions to individuals.
Some Homeland Security workers expressed internal concerns about the location data. One senior director warned that the Office of Science and Technology bought Venntel info without getting a necessary Privacy Threshold Assessment. At one point, the department even halted all projects using Venntel data after learning that key legal and privacy questions had gone unanswered.
More details could be forthcoming, as Homeland Security is still expected to provide more documents in response to the FOIA request. We’ve asked Homeland Security and Venntel for comment. However, the ACLU report might fuel legislative efforts to ban these kinds of data purchases, including the Senate’s bipartisan Fourth Amendment is Not For Sale Act as well as the more recently introduced Health and Location Data Protection Act.
A proposed class-action lawsuit filed on behalf of payment card issuers accuses Apple of illegally profiting from Apple Pay and breaking antitrust laws. Iowa’s Affinity Credit Union is listed as the plaintiff in the complaint, filed today in the US District Court for the Northern District of California. The lawsuit alleges that by restricting contactless payments on iOS devices to Apple Pay and charging payment card issuers fees to use the mobile wallet, the iPhone maker is engaging in anti-competitive behavior.
While Android users have options for contactless mobile wallets, iOS users can only use tap-to-pay technology through Apple Pay. In other words, while iPhone users can download the Google Pay app, they can’t use it to make contactless payments in stores. Google doesn’t charge payment card issuers for use of any supported mobile wallet. But it’s a different story for Apple Pay, which charges card issuers a 0.15% fee on credit transactions and half of a cent on debit transactions. These fees have brought in up to $1 billion annually for Apple, the lawsuit alleges.
“In the Android ecosystem, where multiple digital wallets compete, there are no issuer fees whatsoever,” said the complaint. “The upshot is that card issuers pay a reported $1 billion annually in fees on Apple Pay and $0 for accessing functionally identical Android wallets. If Apple faced competition, it could not sustain these substantial fees.”
The suit alleges that by restricting iOS users to only Apple Pay for contactless payments, Apple is blocking competing mobile wallets from the market. Payment card issuers are essentially forced to pay Apple’s transaction fees if they want to offer their service to iPhone users.
Apple is facing a similar challenge over its payment system in the EU, where an antitrust commission in May said that the tech giant is illegally blocking third-party developers from enabling contactless payments. Apple has denied the EU’s allegations, arguing that giving third-party developers access would be a security risk. This is an argument that Apple has used before as a reason why it doesn’t open up its platform, such as in the case of third-party app stores.
Engadget has reached out to Apple for comment on the lawsuit and will update if we hear back.
Imagine if you, me and a dozen other people were standing in a room staring at the same screen—but the screen showed something different to each of us, simultaneously.
A California-based tech company called Misapplied Sciences has made this possible. They’ve developed a “parallel reality” display “enabled by a new pixel that has unprecedented capabilities,” they write. “These pixels can simultaneously project up to millions of light rays of different colors and brightness. Each ray can then be software-directed to a specific person.”
They’ve partnered with Delta Air Lines, which will be installing a parallel reality display at Detroit Metropolitan Airport this month. Customers who opt in to using it, either by scanning their boarding pass or by enrolling in Delta’s app-based facial recognition program (no thanks!), will look at the screen and see only the flight and baggage claim information relevant to their trip. A person standing five feet away will see nothing but their own information.
Up to 100 viewers can be accommodated by the single screen. Delta refers to the technology as “mind-bending” and states that the display will be in Concourse A of the McNamara Terminal starting on June 29th.
We were just discussing how everyone occasionally gets reminded that for many digital goods these days you simply don’t own what you’ve bought, all thanks to Sony disappearing a bunch of purchased movies and shows from its PlayStation platform. But this conversation has been going on for a long, long time. The expectation of many people is that buying a digital good carries similar ownership rights as buying a physical good; instead there is talk of “licensing” buried in the Ts and Cs that almost nobody reads. The end result is a massive disconnect between what people think they’re paying for and what they actually are paying for.
Take Ubisoft DLC for instance. Lots of people bought DLC for titles like Assassin’s Creed 3 or Far Cry 3 for the PC versions of those games… and recently found out that all that purchased DLC is simply going away with Ubisoft shutting game servers down.
According to Ubisoft’s announcement, “the installation and access to downloadable content (DLC) will be unavailable” on the PC versions of the following games as of September 1, 2022:
Assassin’s Creed 3
Assassin’s Creed: Brotherhood
Driver San Francisco
Far Cry 3
Prince of Persia: The Forgotten Sands
Silent Hunter 5
DLC for the console versions of these games (which is verified through the console platform stores and not Ubisoft’s UPlay platform) will be unaffected, when applicable. Assassin’s Creed III and Far Cry 3 are also available on PC in remastered re-releases that will not be affected by this server shutdown (though the remastered “Classic Edition” of Far Cry 3 is currently unavailable for purchase from Ubisoft’s own website).
A notable addition to all of this is that the full version of Assassin’s Creed Liberation HD was on sale merely days ago on Steam’s Summer Sale, but that title is going to disappear from Steam entirely on September 1st as well. Read that again. The public bought a game title on Steam for 75% off, thinking it was a great deal, only to subsequently learn that they have 60 days to play the damned thing before it becomes unplayable.
This is not tenable. The consumer can only be jerked around so much before a clapback occurs, and losing purchased assets on the whim of the company that sold them isn’t going to be tolerated forever. And while I’m loath to be one of the “there should be a law!” guys, well, there should be legal ramifications for this sort of thing. There are other options out there that would not strip purchased items from people, be it local installations, allowing fans to host their own servers, etc.
Instead, Ubisoft appears to be joining the list of companies that believe they can sell you something and then take it away, all while including that same something in some bundled release afterwards.
The research paper explains the cloning process, which requires physical access to the hardware. To achieve the hack, the Nordic nRF52832 inside the AirTag must be voltage glitched to enable its debug port. The researchers were able to achieve this with relatively simple tools, using a Pi Pico fitted with a few additional components.
With the debug interface enabled, it’s simple to extract the microcontroller’s firmware. It’s then possible to clone this firmware onto another tag. The team also experimented with other hacks, like having the AirTag regularly rotate its ID to avoid triggering anti-stalking warnings built into Apple’s tracing system.
As the researchers explain, it’s clear that AirTags can’t really be secure as long as they’re based on a microcontroller that is vulnerable to such attacks. It’s not the first AirTag cloning we’ve seen either. They’re an interesting device with some serious privacy and safety implications, so it pays to stay abreast of developments in this area.
“The vulnerabilities,” explained the ESET Research team, “can be exploited to achieve arbitrary code execution in the early phases of the platform boot, possibly allowing the attackers to hijack the OS execution flow and disable some important security features.”
“It’s a typical UEFI ‘double GetVariable’ vulnerability,” the team added, before giving a hat tip to efiXplorer.
Lenovo has published an advisory on the matter this week: the CVE identifiers are CVE-2022-1890, CVE-2022-1891 and CVE-2022-1892. All are related to buffer overflows and carry the risk that an attacker with local privileges will be able to execute arbitrary code. Their severity was rated as medium.
As for mitigation, updating the firmware is pretty much all customers can do, although not all products are affected by all three vulnerabilities. All of the products, however, do seem to be hit by CVE-2022-1892, a buffer overflow in the SystemBootManagerDxe driver.
The disclosure follows another three vulnerabilities patched in April, also concerned with UEFI on Lenovo kit. UEFI, or Unified Extensible Firmware Interface, is the glue connecting a device’s firmware with the operating system on top. A vulnerability there could potentially be exploited before a device gets a chance to boot its operating system and fire up malware protections, allowing the computer to become deeply infected and compromised.
ESET research noted that the flaws were a result of “insufficient validation of DataSize parameter passed to the UEFI Runtime Services function GetVariable.”
“These vulnerabilities were caused by insufficient validation of DataSize parameter passed to the UEFI Runtime Services function GetVariable. An attacker could create a specially crafted NVRAM variable, causing buffer overflow of the Data buffer in the second GetVariable call.”
ThinkPad hardware is not affected, probably to the relief of harassed enterprise administrators around the world. Other Lenovo device users should check the list and perform a firmware update if needed.
The Nokia T10 tablet has been officially launched via a press release. It is the second tablet brought to market by HMD Global, Nokia’s new home. The device is being touted as a sturdy and portable Android slate with multiple years of software upgrades, and it arrives as a mid-range Android tablet for global markets.
Specifications, Features
The Nokia T10 tablet comes with an 8-inch HD display. The slate boots Android 12 out-of-the-box. It will be getting two years of major Android OS updates and at least three years of monthly security updates for Android. The slate is powered by the Unisoc T606 processor, which is accompanied by up to 4GB of RAM and 64GB of internal storage. There also are dual stereo speakers with OZO playback to provide an immersive media experience.
[…]
The device has an 8MP primary shooter and a 2MP selfie camera, which supports face unlock functionality. In the connectivity department, the Nokia T10 comes with 4G LTE, dual-band Wi-Fi, Bluetooth, GPS with GLONASS, and a built-in FM radio receiver.
Lastly, the slate is fuelled by a beefy 5,250 mAh battery, which supports 10W charging technology.
Price, Availability
The Nokia T10 Android tablet’s base variant will be available from $159