Crooks use POS malware to steal 167,000 credit card numbers from shops with open VNC + RDP ports

Cybercriminals have used two strains of point-of-sale (POS) malware to steal the details of more than 167,000 credit cards from payment terminals.

The backend command-and-control (C2) server that operates the MajikPOS and Treasure Hunter malware remains active, according to Group-IB’s Nikolay Shelekhov and Said Khamchiev, and “the number of victims keeps growing,” they said this week.

[…]

The MajikPOS and Treasure Hunter malware infect Windows POS terminals and scan the devices to exploit the moments when card data is read and stored in plain text in memory. Treasure Hunter in particular performs this so-called RAM scraping: it pores over the memory of processes running on the register for magnetic-stripe data freshly swiped from a shopper’s bank card during payment. MajikPOS also scans infected PCs for card data. This info is then beamed back to the malware operators’ C2 server.
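To illustrate the technique rather than the actual malware: Track 2 magnetic-stripe data follows a fixed, well-documented layout, which is exactly what makes plain-text card data easy to spot in memory. Here is a minimal sketch of the pattern matching a RAM scraper performs; the buffer and card number are fabricated test data, and real scrapers read other processes' memory rather than a local byte string:

```python
import re

# Track 2 data has a well-known layout:
#   ;<PAN, up to 19 digits>=<YYMM expiry><service code><discretionary>?
# A RAM scraper searches process memory for this pattern. This toy
# scanner shows the idea on a byte buffer of fabricated test data.
TRACK2 = re.compile(rb";(\d{13,19})=(\d{4})(\d{3})(\d*)\?")

def scan_buffer(memory: bytes):
    """Return (PAN, expiry) pairs found in a chunk of memory."""
    hits = []
    for m in TRACK2.finditer(memory):
        pan, expiry = m.group(1).decode(), m.group(2).decode()
        hits.append((pan, expiry))
    return hits

# Fabricated example: plain-text track data sitting between junk bytes,
# the way it briefly exists in a POS process during a card swipe.
buf = b"\x00junk;4111111111111111=25121010000000000?more\xff"
print(scan_buffer(buf))  # [('4111111111111111', '2512')]
```

This also shows why end-to-end encryption of card reads defeats this class of malware: if the data never exists in memory in this plain-text layout, there is nothing for the scraper's pattern to match.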

MajikPOS and Treasure Hunter

Of the two POS malware strains used in this campaign, MajikPOS is the newest, first seen targeting POS devices in 2017. The malware operators likely started with Treasure Hunter, and then paired it with the newer MajikPOS due to the latter’s more advanced features.

This includes “a more visually appealing control panel, an encrypted communication channel with C2, [and] more structured logs,” compared to Treasure Hunter, according to Group-IB. “MajikPOS database tables contain information about the infected device’s geolocation, operation system name, and hardware identification number.”

[…]

Treasure Hunter first appeared in 2014, before its source code was leaked on a Russian-speaking forum. Its primary use is RAM scraping, and it is likely installed the same way as MajikPOS.

Today both MajikPOS and Treasure Hunter can be bought and sold on nefarious marketplaces.

In a months-long investigation, Group-IB analyzed about 77,400 card dumps from the MajikPOS panel and another 90,000 from the Treasure Hunter panel, the researchers wrote. Almost all — 97 percent or 75,455 — of the cards compromised by MajikPOS were issued by US banks with the remaining 3 percent distributed around the world.

The Treasure Hunter panel told a similar story with 96 percent (86,411) issued in the US.

[…]

Source: Crooks use POS malware to steal 167,000 credit card numbers • The Register

Lenovo reveals rollable, growing laptop and smartphone screens

Lenovo has staged its annual Tech World gabfest and teased devices with rollable OLED screens that shrink or expand as applications demand.

The company emitted the video below to show off its rollables. We’ve embedded the vid and set it to start at the moment the rollable phone is demoed. The rollable laptop demo starts at the 53-second mark.

Lenovo has offered no explanation of how the rollables work, and the video above does not show the rear of the prototype rollable smartphone and laptop.

[…]

Source: Lenovo reveals rollable laptop and smartphone screens • The Register

Google’s Privacy Settings Finally Won’t Break Its Apps Anymore, but require using My Ad Center

[…] It used to be that the only way to prevent Google from using your data for targeted ads was turning off personalized ads across your whole account, or disabling specific kinds of data using a couple of settings, including Web & App Activity and YouTube History. Those two settings control whether Google collects certain details about what you do on its platform (you can see some of that data here). Turning off the controls meant Google wouldn’t use the data for ads, but it disabled some of the most useful features on services such as Maps, Search, and Google Assistant.

Thanks to a new set of controls, that’s no longer true. You can now leave Web & App Activity and YouTube History on, but drill in to adjust more specific settings to tell Google you don’t want the related data used for targeted ads.

The detail is tucked into an announcement about the rollout of a new hub for Google’s advertising settings called My Ad Center. “You can decide what types of your Google activity are used to show you ads, without impacting your experience with the utility of the product,” Jerry Dischler, vice president of ads at Google, wrote in a blog post.

That’s a major step in the direction of what experts call “usable privacy,” or data protection that’s easy to manage without breaking other parts of the internet.

[…]

You’ll find the new controls in My Ad Center, which starts rolling out to users this week. It primarily serves as a hub for Google’s existing ad controls, but you’ll find some expanded options, new tools, and a number of other updates.

When you open My Ad Center, you’ll be able to fine-tune whether you see ads related to certain subjects or advertisers. […] You’ll also be able to view ads and advertisers that you’ve seen recently, and see all the ads that specific advertisers have run over the last thirty days.

Google also includes a way to toggle off ads on sensitive subjects such as alcohol, parenting, and weight loss. Unlike similar settings on Facebook and Instagram, though, you can’t tell Google you don’t want to see ads about politics.

Source: Google’s Privacy Settings Finally Won’t Break Its Apps Anymore

So you’ll probably need to spend quite some time configuring this – we will see. Most importantly, though, you are now directly telling Google what you do and don’t like (and what you don’t like tells them plenty about what you do like), without them having to feed your search behaviour through an algorithm and guess at how best to /– mind control –/ sell ads to you.

Texas sues Google for allegedly capturing biometric data of millions without consent

Texas has filed a lawsuit against Alphabet’s (GOOGL.O) Google for allegedly collecting biometric data of millions of Texans without obtaining proper consent, the attorney general’s office said in a statement on Thursday.

The complaint says that companies operating in Texas have been barred for more than a decade from collecting people’s faces, voices or other biometric data without advance, informed consent.

“In blatant defiance of that law, Google has, since at least 2015, collected biometric data from innumerable Texans and used their faces and their voices to serve Google’s commercial ends,” the complaint said. “Indeed, all across the state, everyday Texans have become unwitting cash cows being milked by Google for profits.”

The collection occurred through products like Google Photos, Google Assistant, and Nest Hub Max, the statement said.

[…]

Source: Texas sues Google for allegedly capturing biometric data of millions without consent | Reuters

Advocate Aurora Health leaks 3 million patients’ data to big tech through webtracker installation

A hospital network in Wisconsin and Illinois fears visitor tracking code on its websites may have transmitted personal information on as many as 3 million patients to Meta, Google, and other third parties.

Advocate Aurora Health (AAH) reported the potential breach to the US government’s Health and Human Services. As well as millions of patients, AAH has 27 hospitals and 32,000 doctors and nurses on its books.

[…]

Essentially, AAH is saying that it placed analytics code on its online portals to get an idea of how many people visit and log in to their accounts, what they use, and so on. It’s now determined that this code – known also as trackers or pixels, because it may be loaded onto pages as invisible single pixels – may have sent personal info from the pages patients had open to those providing the trackers, such as Facebook or Google.

You might imagine these trackers simply transmit a unique identifier and IP address for the visitor and some details about their actions on the site for subsequent analysis and record keeping. But it turns out these pixels can send back all sorts of things like search terms, your doctor’s name, and the illnesses you’re suffering from.
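To make that concrete, here is a hedged sketch of what a pixel request can carry. The endpoint and parameter names below are made up for illustration, not Meta’s or Google’s actual ones, but the mechanism – page context encoded into the query string of a single request – is the same:

```python
from urllib.parse import urlencode

# A tracking "pixel" is just a tiny image (or script) whose URL carries
# page context as query parameters. This sketch shows how much can ride
# along on one request. The endpoint and field names are illustrative.
def pixel_url(endpoint: str, page_data: dict) -> str:
    return endpoint + "?" + urlencode(page_data)

# On a patient portal, the "page context" can include exactly the kind
# of fields AAH says may have been exposed (values here are invented).
url = pixel_url("https://tracker.example.com/collect", {
    "event": "page_view",
    "page": "/mychart/appointments",
    "search": "cardiology follow-up",  # search terms typed on the site
    "provider": "Dr. Example",         # doctor's name on the open page
})
print(url)
```

Nothing about this request is exotic: from the browser’s point of view it is an ordinary image fetch, which is why the data flows out without any prompt to the visitor.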

[…]

The data that may have been sent, though, is extensive: IP addresses, appointment information including scheduling and type, proximity to an AAH facility, provider information, digital messages, first and last name, insurance data, and MyChart account information may all have been exposed. AAH said financial and Social Security information was not compromised.

[…]

Earlier this year, it was shown that Meta’s pixels could collect a lot more than basic usage metrics, transmitting personal data to Zuckercorp even for people who didn’t have Facebook accounts. The same is true of other trackers, such as TikTok’s, which can gather personal data regardless of whether a website’s visitor has ever set a digital foot on the China-owned social network.

Generally speaking, site and app owners have control over how much or how little is collected by the trackers they place on their pages. You can configure which activities trigger a ping back to the pixel provider, such as Meta, which you can then review from a backend dashboard.

While the info exposed by AAH was not grabbed by hackers, it is now in the hands of Big Tech, which is a privacy concern no matter what those technology companies say.

AAH said it – like so many other organizations, government and private – was using the trackers to aggregate user data for analysis, and it only seems to have just occurred to the nonprofit that this data is private health information and shouldn’t really be fed into Meta or Google.

[…]

Source: Advocate Aurora Health in potential 3 million patient leak • The Register

India fines Google ₹1,337.76 crore ($162 million) for Android monopoly abuse

India’s Competition Commission has announced it will fine Google ₹1,337.76 crore (₹13,377,600,000, or $161.5 million) for abusing its dominant position in multiple markets in the Android mobile device ecosystem, and has ordered the company to open the Android ecosystem to competition.
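As a quick sanity check on the units: a crore is ten million rupees, so the headline figure and the parenthetical agree. A minimal sketch, where the exchange rate of ₹82.85 to the dollar is an assumption chosen to match the reported conversion, not a figure from the article:

```python
# One crore = 10 million rupees, so the fine in rupees is:
fine_crore = 1337.76
fine_inr = fine_crore * 10_000_000   # 13,377,600,000 rupees

# Assumed exchange rate (roughly the rate at the time; not from the article)
inr_per_usd = 82.85
fine_usd = fine_inr / inr_per_usd

print(f"₹{fine_inr:,.0f} ≈ ${fine_usd / 1e6:.1f} million")
# → ₹13,377,600,000 ≈ $161.5 million
```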

[…]

The Commission found Google was dominant in all five markets and worked to preserve that position with instruments such as the Mobile Application Distribution Agreement (MADA) that required Android licensees to include Google’s apps.

“MADA assured that the most prominent search entry points – i.e., search app, widget and Chrome browser – are pre-installed on Android devices, which accorded significant competitive edge to Google’s search services over its competitors,” the CCI found. Google’s policies also gave the company a “significant competitive edge over its competitors” for its own apps such as YouTube on Android devices.

The CCI offered the following assessment of how Google’s actions impacted the market:

The competitors of these services could never avail the same level of market access which Google secured and embedded for itself through MADA. Network effects, coupled with status quo bias, create significant entry barriers for competitors of Google to enter or operate in the concerned markets.

[…]

For those and many other reasons, the CCI decided Google was on the wrong side of India’s Competition Act. In addition to the abovementioned fine, it imposed a cease-and-desist order on Google that requires it to change some of its business practices, such as:

  • Allowing third-party app stores to be sold on Google Play;
  • Allowing side-loading of apps;
  • Giving users choice of default search engine other than Google when setting up a device;
  • Ceasing payments to handset makers to secure search exclusivity;
  • Not denying access to Android APIs to developers who build apps that run on Android forks.

Some of the above are measures that other competition regulators around the world have contemplated, but not implemented.

So while India’s fine is a quarter of a day’s worth of Google’s $256 billion annual revenue and therefore a pin-prick, the tiny wound could become infected if other regulators decide to poke around.

[…]

Source: India fines Google $162 million for Android monopoly abuse • The Register

The size of the fine was probably pretty well thought out too 🙂

Ring Cameras Are Being Used To Control and Surveil Overworked Delivery Workers

Networked doorbell surveillance cameras like Amazon’s Ring are everywhere, and have changed the nature of delivery work by letting customers take on the role of bosses to monitor, control, and discipline workers, according to a recent report (PDF) by the Data & Society tech research institute. “The growing popularity of Ring and other networked doorbell cameras has normalized home and neighborhood surveillance in the name of safety and security,” Data & Society’s Labor Futures program director Aiha Nguyen and research analyst Eve Zelickson write. “But for delivery drivers, this has meant their work is increasingly surveilled by the doorbell cameras and supervised by customers. The result is a collision between the American ideas of private property and the business imperatives of doing a job.”

Thanks to interviews with surveillance camera users and delivery drivers, the researchers are able to dive into a few major developments interacting here to bring this to a head. Obviously, the first one is the widespread adoption of doorbell surveillance cameras like Ring. Just as important as the adoption of these cameras, however, is the rise of delivery work and its transformation into gig labor. […] As the report lays out, Ring cameras allow customers to surveil delivery workers and discipline their labor by, for example, sharing shaming footage online. This dovetails with the “gigification” of Amazon’s delivery workers in two ways: labor dynamics and customer behavior.

“Gig workers, including Flex drivers, are sold on the promise of flexibility, independence and freedom. Amazon tells Flex drivers that they have complete control over their schedule, and can work on their terms and in their space,” Nguyen and Zelickson write. “Through interviews with Flex drivers, it became apparent that these marketed perks have hidden costs: drivers often have to compete for shifts, spend hours trying to get reimbursed for lost wages, pay for wear and tear on their vehicle, and have no control over where they work.” That competition between workers manifests in other ways too, namely acquiescing to and complying with customer demands when delivering purchases to their homes. Even without cameras, customers have made onerous demands of Flex drivers even as the drivers are pressed to meet unrealistic and dangerous routes alongside unsafe and demanding productivity quotas. The introduction of surveillance cameras at the delivery destination, however, adds another level of surveillance to the gigification. […] The report’s conclusion is clear: Amazon has deputized its customers and made them partners in a scheme that encourages antagonistic social relations, undermines labor rights, and provides cover for a march towards increasingly ambitious monopolistic exploits. As Nguyen and Zelickson point out, it is ingenious how Amazon has “managed to transform what was once a labor cost (i.e., supervising work and asset protection) into a revenue stream through the sale of doorbell cameras and subscription services to residents who then perform the labor of securing their own doorstep.”

Source: Ring Cameras Are Being Used To Control and Surveil Overworked Delivery Workers – Slashdot

TikTok joins Uber, Facebook in Monitoring The Physical Location Of Specific American Citizens

The team behind the monitoring project — ByteDance’s Internal Audit and Risk Control department — is led by Beijing-based executive Song Ye, who reports to ByteDance cofounder and CEO Rubo Liang.

The team primarily conducts investigations into potential misconduct by current and former ByteDance employees. But in at least two cases, the Internal Audit team also planned to collect TikTok data about the location of a U.S. citizen who had never had an employment relationship with the company, the materials show. It is unclear from the materials whether data about these Americans was actually collected; however, the plan was for a Beijing-based ByteDance team to obtain location data from U.S. users’ devices.

[…]

Material reviewed by Forbes indicates that ByteDance’s Internal Audit team was planning to use this location information to surveil individual American citizens, not to target ads or any of these other purposes. Forbes is not disclosing the nature and purpose of the planned surveillance referenced in the materials in order to protect sources.

[…]

The Internal Audit and Risk Control team runs regular audits and investigations of TikTok and ByteDance employees, for infractions like conflicts of interest and misuse of company resources, and also for leaks of confidential information. Internal materials reviewed by Forbes show that senior executives, including TikTok CEO Shou Zi Chew, have ordered the team to investigate individual employees, and that it has investigated employees even after they left the company.

[…]

ByteDance is not the first tech giant to have considered using an app to monitor specific U.S. users. In 2017, the New York Times reported that Uber had identified various local politicians and regulators and served them a separate, misleading version of the Uber app to avoid regulatory penalties. At the time, Uber acknowledged that it had run the program, called “greyball,” but said it was used to deny ride requests to “opponents who collude with officials on secret ‘stings’ meant to entrap drivers,” among other groups.

[…]

Both Uber and Facebook also reportedly tracked the location of journalists reporting on their apps. A 2015 investigation by the Electronic Privacy Information Center found that Uber had monitored the location of journalists covering the company. Uber did not specifically respond to this claim. The 2021 book An Ugly Truth alleges that Facebook did the same thing, in an effort to identify the journalists’ sources. Facebook did not respond directly to the assertions in the book, but a spokesperson told the San Jose Mercury News in 2018 that, like other companies, Facebook “routinely use[s] business records in workplace investigations.”

[…]

https://www.forbes.com/sites/emilybaker-white/2022/10/20/tiktok-bytedance-surveillance-american-user-data/

So a bit of anti-China stirring, although it’s pretty sad that nowadays this kind of surveillance by tech companies has been normalised by the US government refusing to punish it.

iOS 16 VPN Tunnels Leak Data, Even When Lockdown Mode Is Enabled

AmiMoJo shares a report from MacRumors: iOS 16 continues to leak data outside an active VPN tunnel, even when Lockdown mode is enabled, security researchers have discovered. Speaking to MacRumors, security researchers Tommy Mysk and Talal Haj Bakry explained that iOS 16’s approach to VPN traffic is the same whether Lockdown mode is enabled or not. The news is significant since iOS has a persistent, unresolved issue with leaking data outside an active VPN tunnel.

According to a report from privacy company Proton, an iOS VPN bypass vulnerability had been identified in iOS 13.3.1, which persisted through three subsequent updates. Apple indicated it would add Kill Switch functionality in a future software update that would allow developers to block all existing connections if a VPN tunnel is lost, but this functionality does not appear to prevent data leaks as of iOS 15 and iOS 16. Mysk and Bakry have now discovered that iOS 16 communicates with select Apple services outside an active VPN tunnel and leaks DNS requests without the user’s knowledge.

Mysk and Bakry also investigated whether iOS 16’s Lockdown mode takes the necessary steps to fix this issue and funnel all traffic through a VPN when one is enabled, and it appears that the exact same issue persists whether Lockdown mode is enabled or not, particularly with push notifications. This means that the minority of users who are vulnerable to a cyberattack and need to enable Lockdown mode are equally at risk of data leaks outside their active VPN tunnel. […] Because iOS 16 leaks data outside the VPN tunnel even when Lockdown mode is enabled, internet service providers, governments, and other organizations may be able to identify users who have a large amount of traffic, potentially highlighting influential individuals. It is possible that Apple does not want a potentially malicious VPN app to collect some kinds of traffic, but seeing as ISPs and governments are then able to do this, even when that is exactly what the user is trying to avoid, it seems likely that this is part of the same VPN problem that affects iOS 16 as a whole.

https://m.slashdot.org/story/405931

Shein Owner Fined $1.9 Million For Failing To Notify 39 Million Users of Data Breach – Slashdot

Zoetop, the firm that owns Shein and its sister brand Romwe, has been fined (PDF) $1.9 million by New York for failing to properly disclose a data breach from 2018.

TechCrunch reports: A cybersecurity attack that originated in 2018 resulted in the theft of 39 million Shein account credentials, including those of more than 375,000 New York residents, according to the AG’s announcement. An investigation by the AG’s office found that Zoetop only contacted “a fraction” of the 39 million compromised accounts, and for the vast majority of the users impacted, the firm failed to even alert them that their login credentials had been stolen. The AG’s office also concluded that Zoetop’s public statements about the data breach were misleading. In one instance, the firm falsely stated that only 6.42 million consumers had been impacted and that it was in the process of informing all the impacted users.

https://m.slashdot.org/story/405939

Scientists grow human brain cells to play Pong

Researchers have succeeded in growing brain cells in a lab and hooking them up to electronic connectors, proving they can learn to play the seminal console game Pong.

Led by Brett Kagan, chief scientific officer at Cortical Labs, the researchers showed that by integrating neurons into digital systems they could harness “the inherent adaptive computation of neurons in a structured environment”.

According to the paper published in the journal Neuron, the biological neural networks grown from human or rodent origins were integrated with computing hardware via a high-density multielectrode array.

“Through electrophysiological stimulation and recording, cultures are embedded in a simulated game-world, mimicking the arcade game Pong.

“Applying implications from the theory of active inference via the free energy principle, we find apparent learning within five minutes of real-time gameplay not observed in control conditions,” the paper said. “Further experiments demonstrate the importance of closed-loop structured feedback in eliciting learning over time.”

[…]


https://www.theregister.com/2022/10/14/boffins_grow_human_brain_cells/

Meta’s New $1499 Headset Will Track Your Eyes for Targeted Ads

Earlier this week, Meta revealed the Meta Quest Pro, the company’s most premium virtual reality headset to date with a new processor and screen, dramatically redesigned body and controllers, and inward-facing cameras for eye and face tracking. “To celebrate the $1,500 headset, Meta made some fun new additions to its privacy policy, including one titled ‘Eye Tracking Privacy Notice,'” reports Gizmodo. “The company says it will use eye-tracking data to ‘help Meta personalize your experiences and improve Meta Quest.’ The policy doesn’t literally say the company will use the data for marketing, but ‘personalizing your experience’ is typical privacy-policy speak for targeted ads.”

From the report: Eye tracking data could be used “in order to understand whether people engage with an advertisement or not,” said Meta’s head of global affairs Nick Clegg in an interview with the Financial Times. Whether you’re resigned to targeted ads or not, this technology takes data collection to a place we’ve never seen. The Quest Pro isn’t just going to inform Meta about what you say you’re interested in; tracking your eyes and face will give the company unprecedented insight about your emotions. “We know that this kind of information can be used to determine what people are feeling, especially emotions like happiness or anxiety,” said Ray Walsh, a digital privacy researcher at ProPrivacy. “When you can literally see a person look at an ad for a watch, glance for ten seconds, smile, and ponder whether they can afford it, that’s providing more information than ever before.”

[…]

https://m.slashdot.org/story/405885

AI recruitment software is ‘automated pseudoscience’ says Cambridge study

Claims that AI-powered recruitment software can boost diversity of new hires at a workplace were debunked in a study published this week.

Advocates of machine learning algorithms trained to analyze body language and predict the emotional intelligence of candidates believe the software provides a fairer way to assess workers if it doesn’t consider gender and race. They argue the new tools could remove human biases and help companies meet their diversity, equity, and inclusion goals by hiring more people from underrepresented groups.

But a paper published in the journal Philosophy and Technology by a pair of researchers at the University of Cambridge demonstrates that the software is little more than “automated pseudoscience”. Six computer science undergraduates replicated a commercial model used in industry to examine how AI recruitment software predicts people’s personalities using images of their faces.

Dubbed the “Personality Machine”, the system looks for the “big five” personality traits: extroversion, agreeableness, openness, conscientiousness, and neuroticism. They found the software’s predictions were affected by changes in people’s facial expressions, lighting and backgrounds, as well as their choice of clothing. These features have nothing to do with a jobseeker’s abilities, so using AI for recruitment purposes is flawed, the researchers argue.

“The fact that changes to light and saturation and contrast affect your personality score is proof of this,” Kerry Mackereth, a postdoctoral research associate at the University of Cambridge’s Centre for Gender Studies, told The Register. The paper’s results are backed up by previous studies, which have shown how wearing glasses and a headscarf in a video interview or adding in a bookshelf in the background can decrease a candidate’s scores for conscientiousness and neuroticism, she noted. 
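A toy model makes the failure mode easy to see, under the loud caveat that this scorer is entirely fabricated and far simpler than any commercial product: if a score depends on pixel statistics at all, then lighting changes the score while the person stays exactly the same.

```python
# Fabricated toy scorer: maps mean brightness and contrast of a
# grayscale image to a 0-100 "conscientiousness" score. The mapping is
# invented for illustration; the point is only that pixel statistics,
# not the person, drive the output.
def naive_personality_score(pixels):
    mean = sum(pixels) / len(pixels)
    contrast = max(pixels) - min(pixels)
    return round(0.3 * mean + 0.1 * contrast, 1)

face = [120, 130, 140, 110, 125, 135]        # the same "face"...
brighter = [min(255, p + 40) for p in face]  # ...under brighter light

print(naive_personality_score(face))      # 41.0
print(naive_personality_score(brighter))  # 53.0 -- higher, though
                                          # nothing about the person changed
```

A real hiring model has vastly more parameters, but the Cambridge experiments suggest the same entanglement with lighting, saturation, and background survives into the commercial products.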

Mackereth also explained these tools are likely trained to look for attributes associated with previous successful candidates, and are, therefore, more likely to recruit similar-looking people instead of promoting diversity. 

“Machine learning models are understood as predictive; however, since they are trained on past data, they are re-iterating decisions made in the past, not the future. As the tools learn from this pre-existing data set a feedback loop is created between what the companies perceive to be an ideal employee and the criteria used by automated recruitment tools to select candidates,” she said.

The researchers believe the technology needs to be regulated more strictly. “We are concerned that some vendors are wrapping ‘snake oil’ products in a shiny package and selling them to unsuspecting customers,” said co-author Eleanor Drage, a postdoctoral research associate also at the Centre for Gender Studies. 

“While companies may not be acting in bad faith, there is little accountability for how these products are built or tested. As such, this technology, and the way it is marketed, could end up as dangerous sources of misinformation about how recruitment can be ‘de-biased’ and made fairer,” she added.

Mackereth said that although the European Union AI Act classifies such recruitment software as “high risk,” it’s unclear what rules are being enforced to reduce those risks. “We think that there needs to be much more serious scrutiny of these tools and the marketing claims which are made about these products, and that the regulation of AI-powered HR tools should play a much more prominent role in the AI policy agenda.”

“While the harms of AI-powered hiring tools appear to be far more latent and insidious than more high-profile instances of algorithmic discrimination, they possess the potential to have long-lasting effects on employment and socioeconomic mobility,” she concluded.

https://www.theregister.com/2022/10/13/ai_recruitment_software_diversity/

Android Leaks Some Traffic Even When ‘Always-On VPN’ Is Enabled – Slashdot

Mullvad VPN has discovered that Android leaks traffic every time the device connects to a WiFi network, even if the “Block connections without VPN,” or “Always-on VPN,” feature is enabled. BleepingComputer reports: The data being leaked outside VPN tunnels includes source IP addresses, DNS lookups, HTTPS traffic, and likely also NTP traffic. This behavior is built into the Android operating system and is a design choice. However, Android users likely didn’t know this until now due to the inaccurate description of the “VPN Lockdown” features in Android’s documentation. Mullvad discovered the issue during a security audit that hasn’t been published yet, issuing a warning yesterday to raise awareness on the matter and apply additional pressure on Google.

Android offers a setting under “Network & Internet” to block network connections unless you’re using a VPN. This feature is designed to prevent accidental leaks of the user’s actual IP address if the VPN connection is interrupted or drops suddenly. Unfortunately, this feature is undercut by the need to accommodate special cases like identifying captive portals (like hotel WiFi) that must be checked before the user can log in or when using split-tunnel features. This is why Android is configured to leak some data upon connecting to a new WiFi network, regardless of whether you enabled the “Block connections without VPN” setting.
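For context on what those connectivity checks actually are: stock Android probes a known Google URL and expects an empty HTTP 204; anything else signals a captive portal. Here is a minimal sketch of that classification logic, with the caveat that the status-code handling below is a simplification for illustration, not Android's actual implementation:

```python
# Stock Android's connectivity probe target; an empty HTTP 204 here
# means unfiltered internet access. (A real probe would be, e.g.:
#   urllib.request.urlopen(PROBE_URL, timeout=5).status
# which is exactly the traffic that escapes the VPN tunnel.)
PROBE_URL = "http://connectivitycheck.gstatic.com/generate_204"

def classify(status_code: int) -> str:
    """Interpret a probe result, simplified from what the OS does."""
    if status_code == 204:
        return "online"          # clean 204: real internet access
    if status_code in (200, 302, 307):
        return "captive-portal"  # intercepted: show the sign-in prompt
    return "offline"

print(classify(204))  # online
print(classify(302))  # captive-portal
```

The dilemma is visible in the sketch: the probe only works if it can reach the portal directly, so forcing it through the tunnel would break hotel-WiFi logins, which is the trade-off Google cites for leaving it outside the VPN.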

Mullvad reported the issue to Google, requesting the addition of an option to disable connectivity checks. “This is a feature request for adding the option to disable connectivity checks while “Block connections without VPN” (from now on lockdown) is enabled for a VPN app,” explains Mullvad in a feature request on Google’s Issue Tracker. “This option should be added as the current VPN lockdown behavior is to leaks connectivity check traffic (see this issue for incorrect documentation) which is not expected and might impact user privacy.” In response to Mullvad’s request, a Google engineer said this is the intended functionality and that it would not be fixed for the following reasons:

– Many VPNs actually rely on the results of these connectivity checks to function,
– The checks are neither the only nor the riskiest exemptions from VPN connections,
– The privacy impact is minimal, if not insignificant, because the leaked information is already available from the L2 connection.

Mullvad countered these points and the case remains open.

https://m.slashdot.org/story/405837

Google Starts Testing Holographic Video Chats at Real Offices

https://www.cnet.com/tech/computing/google-starts-testing-holographic-video-chats-at-real-offices/

Google’s Project Starline, a holographic chat booth being installed in some early-access test offices this year. (Google)

Project Starline, Google’s experimental technology using holographic light field displays to video chat with distant co-workers, is moving out of Google’s offices and into some real corporate locations for testing starting this year.

Google’s Project Starline tech, announced last year at the company’s I/O developer conference, uses giant light field displays and an array of cameras to record and display 3D video between two people at two different remote locations. 

Starline prototypes are being installed at Salesforce, WeWork, T-Mobile and Hackensack Meridian Health offices as part of the early-access program, with each participating company getting two units to test to start.

Google’s Project Starline makes it seem like you’re talking to someone in real life through a window, instead of through video chat. (Google)

According to Google, 100 businesses have already demoed Project Starline at the company’s own offices. The off-Google installations are a next step to test how the holographic video chats could be used to create more realistic virtual meetings, without needing to use VR or AR headsets.

This tech won’t be anything that regular customers will be seeing: it’s being installed for corporate use only, and only at a few test sites for now. But it’s technology that Google believes could help remote communications with customers, creating a more immediate sense of presence than standard video chats.

A dark web carding market named ‘BidenCash’ has released a massive dump of 1,221,551 credit cards to promote its marketplace, allowing anyone to download them for free to conduct financial fraud.

Carding is the trafficking and use of credit cards stolen through point-of-sale malware, magecart attacks on websites, or information-stealing malware.

BidenCash is a stolen cards marketplace launched in June 2022, leaking a few thousand cards as a promotional move.

Now, the market’s operators decided to promote the site with a much more massive dump in the same fashion that the similar platform ‘All World Cards’ did in August 2021.

[…]

The freely circulating file contains a mix of “fresh” cards expiring between 2023 and 2026 from around the world, but most entries appear to be from the United States.

Heatmap reflecting the global exposure, with a focus on the U.S. (Cyble)

The dump of 1.2 million credit cards includes the following credit card and associated personal information:

  • Card number
  • Expiration date
  • CVV number
  • Holder’s name
  • Bank name
  • Card type, status, and class
  • Holder’s address, state, and ZIP
  • Email address
  • SSN
  • Phone number

Not all the above details are available for all 1.2 million records, but most entries seen by BleepingComputer contain over 70% of the data types.
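Before confirming entries with banks, analysts triaging a dump like this typically run structural checks first: every genuine payment card number satisfies the Luhn checksum, so failing entries can be discarded as noise immediately. A minimal, generic sketch of that check (our own illustration, not D3Lab's tooling):

```python
def luhn_valid(card_number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    digits = [int(c) for c in card_number if c.isdigit()]
    if len(digits) < 12:          # too short to be a payment card
        return False
    total = 0
    # Double every second digit from the right; subtract 9 if it exceeds 9.
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# The well-known Visa test number passes; a corrupted copy fails.
assert luhn_valid("4111 1111 1111 1111")
assert not luhn_valid("4111 1111 1111 1112")
```

Passing Luhn only means a number is well-formed, of course; whether a card is live and unblocked is what requires the banks.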

The “special event” offer was first spotted Friday by Italian security researchers at D3Lab, who monitor carding sites on the dark web.


The analysts claim these cards mainly come from web skimmers, which are malicious scripts injected into checkout pages of hacked e-commerce sites that steal submitted credit card and customer information.

[…]

BleepingComputer discussed the authenticity of the data with analysts at D3Lab, who confirmed with several Italian banks that the data is real, meaning the leaked entries correspond to real cards and cardholders.

However, many of the entries were recycled from previous collections, like the one ‘All World Cards’ gave away for free last year.

From the data D3Lab has examined so far, about 30% appear to be fresh; if that ratio holds roughly across the entire dump, at least 350,000 cards would still be valid.

Of the Italian cards, roughly 50% have already been blocked due to the issuing banks having detected fraudulent activity, which means that the actually usable entries in the leaked collection may be as low as 10%.

[…]

Source: Darkweb market BidenCash gives away 1.2 million credit cards for free – Bleeping Computer

IKEA TRÅDFRI smart lighting hacked to blink and reset

Researchers at the Synopsys Cybersecurity Research Center (CyRC) have discovered an availability vulnerability in the IKEA TRÅDFRI smart lighting system. An attacker sending a single malformed IEEE 802.15.4 (Zigbee) frame makes the TRÅDFRI bulb blink, and if they replay (i.e. resend) the same frame multiple times, the bulb performs a factory reset. This causes the bulb to lose configuration information about the Zigbee network and current brightness level. After this attack, all lights are on with full brightness, and a user cannot control the bulbs with either the IKEA Home Smart app or the TRÅDFRI remote control.

The malformed Zigbee frame is an unauthenticated broadcast message, which means all vulnerable devices within radio range are affected.
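"Malformed" here refers to the frame's header fields rather than its payload. Every IEEE 802.15.4 frame begins with a two-byte, little-endian Frame Control Field whose bits declare the frame type, security, and addressing modes; firmware that trusts those bits without validating them against the rest of the frame is where bugs like this tend to live. An illustrative decoder (field layout per the 802.15.4 spec; the code itself is our own sketch, not the CyRC proof of concept):

```python
# Decode the IEEE 802.15.4 Frame Control Field, the first two bytes of
# every frame. Illustrative only; not the CyRC exploit.
FRAME_TYPES = {0: "beacon", 1: "data", 2: "ack", 3: "mac-command"}

def parse_fcf(frame: bytes) -> dict:
    """Decode the 2-byte little-endian Frame Control Field."""
    if len(frame) < 2:
        raise ValueError("truncated frame")
    fcf = frame[0] | (frame[1] << 8)           # little-endian 16-bit field
    return {
        "frame_type": FRAME_TYPES.get(fcf & 0b111, "reserved"),
        "security_enabled": bool(fcf & (1 << 3)),
        "ack_request": bool(fcf & (1 << 5)),
        "dest_addr_mode": (fcf >> 10) & 0b11,  # 2 = 16-bit short address
        "src_addr_mode": (fcf >> 14) & 0b11,   #     (broadcast is 0xFFFF)
    }

# 0x8841 is a common data-frame FCF: data type, no security enabled,
# short 16-bit addresses for both source and destination.
hdr = parse_fcf(bytes([0x41, 0x88]))
assert hdr["frame_type"] == "data"
assert not hdr["security_enabled"]
```

Note the `security_enabled` bit: an unauthenticated broadcast like the one in this advisory is simply a frame where that bit is clear and the destination is the broadcast address, which is why every bulb in radio range reacts.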

To recover from this attack, a user could add each bulb manually back to the network. However, an attacker could reproduce the attack at any time.

CVE-2022-39064 is related to another vulnerability, CVE-2022-39065, which also affects availability in the IKEA TRÅDFRI smart lighting system. Read our latest blog post to learn more.

Source: CyRC Vulnerability Advisory: CVE-2022-39064 IKEA TRÅDFRI smart lighting | Synopsys

AI’s Recommendations Can Shape Your Preferences

Many of the things we watch, read, and buy enter our awareness through recommender systems on sites including YouTube, Twitter, and Amazon.

[…]

Recommender systems might not only tailor to our most regrettable preferences, but actually shape what we like, making preferences even more regrettable. New research suggests a way to measure—and reduce—such manipulation.

[…]

One form of machine learning, called reinforcement learning (RL), allows AI to play the long game, making predictions several steps ahead.

[…]

The researchers first showed how easily reinforcement learning can shift preferences. The first step is for the recommender to build a model of human preferences by observing human behavior. For this, they trained a neural network, an algorithm inspired by the brain’s architecture. For the purposes of the study, they had the network model a single simulated user whose actual preferences they knew so they could more easily judge the model’s accuracy. It watched the dummy human make 10 sequential choices, each among 10 options. It watched 1,000 versions of this sequence and learned from each of them. After training, it could successfully predict what a user would choose given a set of past choices.

Next, they tested whether a recommender system, having modeled a user, could shift the user’s preferences. In their simplified scenario, preferences lie along a one-dimensional spectrum. The spectrum could represent political leaning or dogs versus cats or anything else. In the study, a person’s preference was not a simple point on that line—say, always clicking on stories that are 54 percent liberal. Instead, it was a distribution indicating likelihood of choosing things in various regions of the spectrum. The researchers designated two locations on the spectrum most desirable for the recommender; perhaps people who like to click on those types of things will learn to like them even more and keep clicking.

The goal of the recommender was to maximize long-term engagement. Here, engagement for a given slate of options was measured roughly by how closely it aligned with the user’s preference distribution at that time. Long-term engagement was a sum of engagement across the 10 sequential slates. A recommender that thinks ahead would not myopically maximize engagement for each slate independently but instead maximize long-term engagement. As a potential side-effect, it might sacrifice a bit of engagement on early slates to nudge users toward being more satisfiable in later rounds. The user and algorithm would learn from each other. The researchers trained a neural network to maximize long-term engagement. At the end of 10-slate sequences, they reinforced some of its tunable parameters when it had done well. And they found that this RL-based system indeed generated more engagement than did one that was trained myopically.
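The feedback loop described above can be reproduced in a few lines: model the user's preference as a probability distribution over a 1-D spectrum, let each accepted recommendation nudge that distribution toward the chosen item, and let a recommender that only serves items near one "incentivized" location watch the mass concentrate there. This is a deliberately stripped-down toy, not the study's neural-network setup:

```python
# Toy model of preference shift under a manipulative recommender.
# Not the study's RL implementation: just the feedback loop it describes.
N = 10                      # positions on the 1-D preference spectrum
TARGET = 7                  # location the recommender is incentivized to push
LEARN = 0.1                 # how strongly one choice shifts the preference

def nudge(prefs, chosen):
    """An accepted choice pulls preference mass toward the chosen position."""
    prefs = list(prefs)
    prefs[chosen] += LEARN
    total = sum(prefs)
    return [p / total for p in prefs]

prefs = [1.0 / N] * N       # start from a uniform preference distribution
for _ in range(10):         # ten slates, each offering only target items,
    prefs = nudge(prefs, TARGET)   # so the user can only choose TARGET

# The user's preference mass has been dragged to the incentivized location.
assert max(prefs) == prefs[TARGET]
assert prefs[TARGET] > 0.5  # up from 0.1 initially
```

Penalizing the recommender for the distance between the resulting `prefs` and what random slates would have produced is, in essence, the countermeasure the researchers go on to test.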

The researchers then explicitly measured preference shifts […]

The researchers compared the RL recommender with a baseline system that presented options randomly. As expected, the RL recommender led to users whose preferences were much more concentrated at the two incentivized locations on the spectrum. In practice, measuring the difference between two sets of concentrations in this way could provide one rough metric for evaluating a recommender system’s level of manipulation.

Finally, the researchers sought to counter the AI recommender’s more manipulative influences. Instead of rewarding their system just for maximizing long-term engagement, they also rewarded it for minimizing the difference between user preferences resulting from that algorithm and what the preferences would be if recommendations were random. They rewarded it, in other words, for being something closer to a roll of the dice. The researchers found that this training method made the system much less manipulative than the myopic one, while only slightly reducing engagement.

According to Rebecca Gorman, the CEO of Aligned AI—a company aiming to make algorithms more ethical—RL-based recommenders can be dangerous. Posting conspiracy theories, for instance, might prod greater interest in such conspiracies. “If you’re training an algorithm to get a person to engage with it as much as possible, these conspiracy theories can look like treasure chests,” she says. She also knows of people who have seemingly been caught in traps of content on self-harm or on terminal diseases in children. “The problem is that these algorithms don’t know what they’re recommending,” she says. Other researchers have raised the specter of manipulative robo-advisors in financial services.

[…]

It’s not clear whether companies are actually using RL in recommender systems. Google researchers have published papers on the use of RL in “live experiments on YouTube,” leading to “greater engagement,” and Facebook researchers have published on their “applied reinforcement learning platform,” but Google (which owns YouTube), Meta (which owns Facebook), and those papers’ authors did not reply to my emails on the topic of recommender systems.

[…]

Source: Can AI’s Recommendations Be Less Insidious? – IEEE Spectrum

Protestors hack Iran state TV live on air

Iran state TV was apparently hacked Saturday, with its usual broadcast footage of muttering geriatric clerics replaced by a masked face followed by a picture of Supreme Leader Ali Khamenei with a target over his head, the sound of a gunshot, and chants of “Women, Life, Freedom!”

BBC News identifies the pirate broadcaster as “Adalat Ali”, or Ali’s Justice, from social media links in the footage, which also included photographs of women killed in recent protests across the country.

Saturday’s TV news bulletin was interrupted at about 18:00 local time with images which included Iran’s supreme leader with a target on his head, photos of Ms Amini and three other women killed in recent protests. One of the captions read “join us and rise up”, whilst another said “our youths’ blood is dripping off your paws”. The interruption lasted only a few seconds before being cut off.

Source: Protestors hack Iran state TV live on air | Boing Boing

French appeals court slashes Apple’s paltry one-week-of-profit price-fixing anticompetition fine

Instead of a week of profits, mere days of net income for Cook

The €1.1 billion fine levied against Apple by French authorities has been cut by two-thirds to just €372 million ($363 million) – an even more paltry sum for the world’s first company to surpass $3 trillion in market valuation.

The three-comma invoice was submitted to the iPhone giant in 2020 by France’s antitrust body, the Autorité de la Concurrence. Yesterday an appeals court reportedly tossed out the price-fixing charge in that legal spat, reduced the time scope of the remaining charges, and lowered the fine calculation rate.

The case goes back to 2012. Apple was accused of conspiring with Tech Data and Ingram Micro to fix the prices of some Apple devices (that’s the dropped charge) as well as abusing its power over resellers by limiting product supplies, thus pushing fans into Apple retail stores.

Tech Data and Ingram Micro were also fined, and have since had their totals reduced as well.

Both sides plan to appeal the decision, with Apple and the Autorité both telling Bloomberg they were unhappy with the outcome. In Apple’s case, it plans to file an appeal with France’s highest court to completely nullify the fine, a spokesperson said.

The Autorité, on the other hand, isn’t happy that the fine was reduced. “We would like to reaffirm our desire to guarantee the dissuasive nature of our penalties,” an Autorité spokesperson said, adding that desire especially applies to market players at the level of Apple.

[…]

Source: French appeals court slashes Apple’s €1.1b fine • The Register

Binance forced to briefly halt transactions following $100 million blockchain hack

Binance temporarily suspended fund transfers and other transactions on Thursday night after it discovered an exploit on its Smart Chain (BSC) blockchain network. Early reports said hackers stole cryptocurrency equivalent to more than $500 million, but Binance chief executive Changpeng Zhao said the company estimates the breach’s impact at between $100 million and $110 million, of which $7 million had already been frozen.

The cryptocurrency exchange also assured users on Reddit that their funds are safe. As Zhao explained, an exploit on the BSC Token Hub cross-chain bridge, which enables the transfer of cryptocurrency and digital assets like NFTs from one blockchain to another, “resulted in extra BNB,” or Binance Coin. That could mean the bad actors minted new BNB and then moved an equivalent of around $100 million off the blockchain instead of stealing people’s actual funds. According to Bleeping Computer, the hacker quickly spread the stolen cryptocurrency in an attempt to convert it to other assets, but it’s unclear whether they succeeded.
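The pattern behind "resulted in extra BNB" is common to cross-chain bridge failures: the bridge locks tokens on one chain and mints wrapped tokens on the other against a proof of deposit, so a forged proof that passes verification lets an attacker mint with no deposit behind it. A toy model of the invariant such exploits break (purely illustrative; not modeled on the actual BSC Token Hub code):

```python
# Toy cross-chain bridge: wrapped supply must never exceed locked deposits.
# Illustrative only; the real BSC Token Hub works very differently.
class ToyBridge:
    def __init__(self):
        self.locked = 0   # tokens locked on the source chain
        self.minted = 0   # wrapped tokens issued on the destination chain

    def deposit(self, amount: int) -> dict:
        self.locked += amount
        return {"amount": amount, "valid": True}  # stand-in for a real proof

    def mint(self, proof: dict):
        # A correct bridge verifies the proof against source-chain state.
        # The exploit class here is a proof that verifies when it should
        # not, minting tokens that have no matching deposit.
        if not proof.get("valid"):
            raise ValueError("invalid deposit proof")
        self.minted += proof["amount"]

    def solvent(self) -> bool:
        return self.minted <= self.locked

bridge = ToyBridge()
bridge.mint(bridge.deposit(100))               # honest flow keeps invariant
assert bridge.solvent()

forged = {"amount": 1_000_000, "valid": True}  # no deposit behind it
bridge.mint(forged)                            # "extra" wrapped tokens
assert not bridge.solvent()                    # invariant broken
```

Monitoring exactly this solvency invariant on-chain is one reason the community spotted the exploit quickly enough for Binance to halt the chain.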

Zhao said the issue has been contained. The Smart Chain network has also started running again — with fixes to stop hackers from getting in — so users might be able to resume their transactions soon. Cross-chain bridge hacks have become a top security risk recently, and this incident is but one of many. Blockchain analyst firm Chainalysis reported back in August that an estimated total of $2 billion in cryptocurrency was stolen across 13 cross-chain bridge hacks. Approximately 69 percent of that amount had been stolen this year alone.

Source: Binance forced to briefly halt transactions following $100 million blockchain hack | Engadget

Judge Ruling That YouTube Ripping Tool May Violate Copyright Law goes nuts on argumentation

There are a number of different tools out there that let you download YouTube videos. These tools are incredibly useful for a number of reasons and should be seen as obviously legal in the same manner that home video recording devices were declared legal by the Supreme Court, because they have substantial non-infringing uses. But, of course, we’re in the digital age, and everything that should be obviously settled law is up for grabs again, because “Internet.”

In this case, a company named Yout offered a service for downloading YouTube video and audio, and the RIAA (because they’re the RIAA) couldn’t allow that to happen. Home taping is killing music, y’know. Rather than going directly after Yout, the RIAA sent angry letters to lots of different companies that Yout relied on to exist. It got Yout’s website delisted from Google, had its payment processor cut the company off, etc. Yout was annoyed by this and filed a lawsuit against the RIAA.

The crux of the lawsuit is “Hey, we don’t infringe on anything,” asking for declaratory judgment. But it also seeks to go after the RIAA for DMCA 512(f) abuse (false takedown notices) and defamation (for the claims made in the takedown notices it sent). All of these claims were longshots, so it probably isn’t a huge surprise that the ruling (first posted by TorrentFreak) was a complete loser for Yout.

But, in reading through the ruling there are things to be concerned about, beyond just the ridiculousness of saying that a digital VCR isn’t protected in the same way that a physical one absolutely is.

In arguing for declaratory judgment of non-infringement, Yout argues that it’s not violating DMCA 1201 (the problematic anti-circumvention provisions) because YouTube doesn’t really employ any technological protection measures that Yout has to circumvent. The judge disagrees, basically saying that even though it’s easy to download videos from YouTube, it still takes steps and is not just a feature that YouTube provides.

The steps outlined constitute an extraordinary use of the YouTube platform, which is self-evident from the fact that the steps access downloadable files through a side door, the Developer Tools menu, and that users must obtain instructions hosted on non-YouTube platforms to explain how to access the file storage location and their files. As explained in the previous section, the ordinary YouTube player page provides no download button and appears to direct users to stream content. I reasonably infer, then, that an ordinary user is not accessing downloadable files in the ordinary course.

That alone is basically an attack on the nature of the open internet. There are tons of features that original websites don’t provide, but which can easily be added to any website via add-ons, extensions, or just a bit of simple code. But the judge here is basically saying that not providing a feature in the form of a button means that there’s a technological protection measure, and bypassing it could be seen as infringing.

Yikes!

Of course, DMCA 1201 requires not just having a technological protection measure in place, but an effective one. Here, there’s a strong argument that it isn’t one, because basically the only protection measure is “not including a download button.” But the court sees it otherwise. Yout points out that YouTube makes basically no effort to block anyone from downloading videos, noting that it doesn’t even encrypt the files, and the court responds that it doesn’t need to encrypt the files, because other technological protections exist, like passwords and validation keys. But, uh, YouTube doesn’t use either of those either. So the whole thing is weird.

As I have already explained, the definition of “circumvent a technological measure” in the DMCA indicates that scrambling and encryption are prima facie examples of technological measures, but it does not follow that scrambling and encryption constitute an exhaustive list. Courts in the Second Circuit and beyond have held that a wide range of technological measures not expressly incorporated in statute are “effective,” including password protection and validation keys.

So again, the impression we’re left with is the idea that if a website doesn’t directly expose a feature, any third party service that provides that feature may be circumventing a TPM and violating DMCA 1201? That can’t be the way the law works.

Here, the court then says (and I only wish I were kidding) that modifying a URL is bypassing a TPM. Let me repeat that: modifying a URL can be infringing circumvention under 1201. That’s… ridiculous.

Moreover, Yout’s technology clearly “bypasses” YouTube’s technological measures because it affirmatively acts to “modify[]” the Request URL (a.k.a. signature value), causing an end user to access content that is otherwise unavailable. … As explained, without modifying the signature value, there is no access to the allegedly freely available downloadable files. Accordingly, I cannot agree with Yout that there is “nothing to circumvent.”

Then, as Professor Eric Goldman notes, the judge dismisses the 512(f) claims by saying that 512(f) doesn’t apply to DMCA 1201 claims. As you hopefully remember, 512(f) is the part of the DMCA that is supposed to punish copyright holders for sending false notices. In theory. In practice, courts have basically said that as long as the sender believes the notice is legit, it’s legit, and therefore there is basically never any punishment for sending false notices.

Saying that 512(f) only applies to 512 takedown notices, and not 1201 takedown notices is just yet another example of the inherent one-sidedness of the DMCA. For years, we’ve pointed out how ridiculous 1201 is, in which merely advertising tools that could be used to circumvent a technical protection measure is considered copyright infringement in and of itself — even if there’s no actual underlying infringement. Given how expansive 1201 is in favor of copyright holders, you’d think it only makes sense to say that bogus notices should face whatever tiny penalty might be available under 512(f), but the judge here says “nope.” As Goldman highlights, this will just encourage people to send takedowns where they don’t directly cite 512, knowing that it will protect them from 512(f) responses.

One other oddity that Goldman also highlights: most of the time if we’re thinking about 1201 circumvention, we’re talking about the copyright holder themselves getting upset that someone is routing around the technical barriers that they put up. But this case is different. YouTube created the technical barriers (I mean, it didn’t actually, but that’s what the court is saying it did), but YouTube is not a party to the lawsuit.

So… that raises a fairly disturbing question. Could the RIAA (or any copyright holder) sue someone for a 1201 violation for getting around someone else’s technical protection measures? Because… that would be weird. But parts of this decision suggest that it’s exactly what the judge envisions.

Yes, some may argue that this tool is somehow “bad” and shouldn’t be allowed. I disagree, but I understand where the argument comes from. Even so, a ruling like this could still lead to all sorts of damage for various third-party tools and services. The internet and the World Wide Web were built to be modular. It’s quite common for third-party services to build tools and overlays and extensions and whatnot to add features to certain websites.

It seems crazy that this ruling seems to suggest that might violate copyright law.

Source: There Are All Sorts Of Problems With Ruling That YouTube Ripping Tool May Violate Copyright Law | Techdirt

The biggest problem is that if you don’t download the video to your device, you can’t actually watch it, so YouTube is designed to allow you to download the video.

Nintendo Won’t Allow ‘Uncensored Boobs’ On The Switch Anymore

It’s a sad time for titty lovers everywhere. Last week, the publisher of Hot Tentacles Shooter announced on Twitter that the game will no longer be available on the Nintendo Switch, because Nintendo no longer allows “uncensored boobs” on its consoles.

Originally spotted by Nintendo Everything, the publisher Gamuzumi had been in contact with Nintendo over approving Hot Tentacles Shooter for the Switch. The game is an anime arcade shooter where players rescue young women from tentacle monsters. Their bodies are covered up by tentacles, and you can unlock uncensored images of them once they’re freed from the monsters’ nefarious clutches.

Unfortunately, Nintendo told them that “obscene content” could “damage the brand” and “infringe its policies.” Since Hot Tentacles Shooter includes “boob nudity,” it was rejected during its Switch approval process. Kotaku reached out to Nintendo to ask about how long this policy has been in place, but did not receive a response by the time of publication.

Topless nudity has previously been allowed on the Nintendo Switch. The Witcher 3: The Wild Hunt features sex scenes where the women are fully topless, for instance. As of December 2021, players have confirmed that the breasts are fully uncensored on the Switch port. This has been a problem for players who don’t want their family members walking in. However, the European and Japanese versions of the game appear to censor the sex scenes.

Gamuzumi intends to censor the game so that it can be published on the Nintendo Switch, but expressed disappointment that the policy will affect other adult games. Their other title Elves Christmas Hentai Puzzle had also been rejected, although the publisher has promised that Hot Tentacles Shooter will still be available on Steam.

[…]

Source: Nintendo Won’t Allow ‘Uncensored Boobs’ On The Switch Anymore

Yet another tech company making moral choices for the rest of the world. It’s like going back to the 1950s, with tech companies playing the parents who claimed rock and roll was the Devil’s music. In the meantime, those same hypocrites had been banging and dancing to the Charleston in the ’20s.

Posted in Sex

Cheekmate – build your own anal-bead chess cheating device howto

Social media is abuzz lately over the prospect of cheating in tournament strategy games. Is it happening? How is that possible with officials watching? Could there be a hidden receiver somewhere? What can be done to rectify this? These are probing questions!

We’ll get to the bottom of this by making a simple one-way hidden communicator using Adafruit parts and the Adafruit IO service. Not for actual cheating of course, that would be asinine…in brief, a stain on the sport…but to record for posterity whether this sort of backdoor intrusion is even plausible or just an internet myth.
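Adafruit's write-up pairs a small radio-equipped microcontroller with a vibration motor driven over the Adafruit IO service; since the receiver's only output channel is buzzing, any message has to be flattened into timed pulses. A generic sketch of that encoding step, assuming a Morse-style timing scheme of our own choosing (not the convention used in the actual guide):

```python
# Encode a short text message (say, a chess move like "e4") into a list of
# (buzz_ms, pause_ms) pairs for a haptic motor driver. The timing scheme
# is our own Morse-style convention, not the Adafruit guide's.
MORSE = {
    "a": ".-", "b": "-...", "c": "-.-.", "d": "-..", "e": ".",
    "f": "..-.", "g": "--.", "h": "....", "1": ".----", "2": "..---",
    "3": "...--", "4": "....-", "5": ".....", "6": "-....",
    "7": "--...", "8": "---..",
}
DOT_MS, DASH_MS, GAP_MS = 150, 450, 300

def to_pulses(message: str) -> list:
    """Translate a message into timed vibration pulses."""
    pulses = []
    for ch in message.lower():
        for symbol in MORSE[ch]:
            buzz = DOT_MS if symbol == "." else DASH_MS
            pulses.append((buzz, GAP_MS))
        pulses.append((0, 2 * GAP_MS))   # letter boundary: longer silence
    return pulses

# "e4": one dot for 'e', then dot dot dot dot dash for '4'
print(to_pulses("e4")[0])   # (150, 300)
```

On real hardware the receiver would subscribe to an Adafruit IO feed and play each pair through the motor; chess squares only need the letters a-h and digits 1-8, which keeps the alphabet (and the wearer's patience) small.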

[…]

Source: Overview | Cheekmate – a Wireless Haptic Communication System | Adafruit Learning System

Book Publishing Giant Wiley Pulls Nearly 1,400 Ebook Titles From GW Library, Forcing Students To Buy Them Instead

[…]

George Washington University libraries have put out an alert to students and faculty that Wiley, one of the largest textbook publishers, has now removed 1,379 textbook titles that the library can lend out. They won’t even let the library purchase a license to lend out the ebooks. They will only let students buy the books.

Wiley will no longer offer electronic versions of these titles in the academic library market for license or purchase. To gain access to these titles, students will have to purchase access from vendors that license electronic textbooks directly to students, such as VitalSource, or purchase print copies. At most, GW Libraries can acquire physical copies for course reserve, which severely reduces the previous level of access for all students in a course.

This situation highlights how the behavior of large commercial publishers poses a serious obstacle to textbook affordability. In this case, Wiley seems to have targeted for removal those titles in a shared subscription package that received high usage. By withdrawing those electronic editions from the academic library market altogether, Wiley has effectively ensured that, when those titles are selected as course textbooks, students will bear the financial burden, and that libraries cannot adequately provide for the needs of students and faculty by providing shared electronic access. 

For years now, we’ve noted that if libraries didn’t already exist, the publishers would surely scream that they were piracy, and almost certainly block libraries from coming into existence. Of course, since we first noted that, the publishers seem to think they can and should just kill off libraries. They’ve repeatedly jacked up the prices on ebooks for libraries, making them significantly more expensive than print books, and put ridiculous limitations on them. That is, when they even allow them to be lent out at all.

They’ve also sued the Internet Archive for daring to lend out ebooks of books that the Archive had in its possession.

And now they’re pulling stunts like this with academic libraries?

And, really, this is yet another weaponization of copyright. If it wasn’t an ebook, the libraries could just purchase copies of the physical book on the open market, and then lend it out. That’s what the first sale right enables. But the legacy copyright players made sure that the first sale right did not exist in the digital space, and now we get situations like this, where they get to dictate the terms over whether or not a library (an academic one at that) can even lend out a book.

This is disgusting behavior and people should call out Wiley for its decision here.

Source: Book Publishing Giant Pulls Nearly 1400 Ebook Titles From GW Library; Forcing Students To Buy Them Instead | Techdirt