A dark web carding market named ‘BidenCash’ has released a massive dump of 1,221,551 credit cards to promote its marketplace, allowing anyone to download them for free to conduct financial fraud.

Carding is the trafficking and use of credit cards stolen through point-of-sale malware, Magecart attacks on websites, or information-stealing malware.

BidenCash is a stolen-card marketplace launched in June 2022 that leaked a few thousand cards at the time as a promotional move.

Now, the market’s operators have decided to promote the site with a much larger dump, in the same fashion that the similar platform ‘All World Cards’ did in August 2021.

[…]

The freely circulating file contains a mix of “fresh” cards expiring between 2023 and 2026 from around the world, but most entries appear to be from the United States.

Heatmap reflecting the global exposure, with a focus on the U.S. (Cyble)

The dump of 1.2 million credit cards includes the following credit card and associated personal information:

  • Card number
  • Expiration date
  • CVV number
  • Holder’s name
  • Bank name
  • Card type, status, and class
  • Holder’s address, state, and ZIP
  • Email address
  • SSN
  • Phone number

Not all the above details are available for all 1.2 million records, but most entries seen by BleepingComputer contain over 70% of the data types.

The “special event” offer was first spotted on Friday by Italian security researchers at D3Lab, who monitor carding sites on the dark web.


The analysts claim these cards mainly come from web skimmers, which are malicious scripts injected into checkout pages of hacked e-commerce sites that steal submitted credit card and customer information.

[…]

BleepingComputer discussed the authenticity of the data with analysts at D3Lab, who verified with several Italian banks that the leaked entries correspond to real cards and cardholders.

However, many of the entries were recycled from previous collections, like the one ‘All World Cards’ gave away for free last year.

From the data D3Lab has examined so far, about 30% appear to be fresh; if that ratio holds roughly across the entire dump, at least 350,000 cards would still be valid.

Of the Italian cards, roughly 50% have already been blocked because the issuing banks detected fraudulent activity, meaning the actually usable entries in the leaked collection may be as low as 10%.

[…]

Source: Darkweb market BidenCash gives away 1.2 million credit cards for free – Bleeping Computer

IKEA TRÅDFRI smart lighting hacked to blink and reset

Researchers at the Synopsys Cybersecurity Research Center (CyRC) have discovered an availability vulnerability in the IKEA TRÅDFRI smart lighting system. An attacker sending a single malformed IEEE 802.15.4 (Zigbee) frame makes the TRÅDFRI bulb blink, and if they replay (i.e. resend) the same frame multiple times, the bulb performs a factory reset. This causes the bulb to lose configuration information about the Zigbee network and current brightness level. After this attack, all lights are on with full brightness, and a user cannot control the bulbs with either the IKEA Home Smart app or the TRÅDFRI remote control.

The malformed Zigbee frame is an unauthenticated broadcast message, which means all vulnerable devices within radio range are affected.

To recover from this attack, a user could add each bulb manually back to the network. However, an attacker could reproduce the attack at any time.

CVE-2022-39064 is related to another vulnerability, CVE-2022-39065, which also affects availability in the IKEA TRÅDFRI smart lighting system. Read our latest blog post to learn more.

Source: CyRC Vulnerability Advisory: CVE-2022-39064 IKEA TRÅDFRI smart lighting | Synopsys

AI’s Recommendations Can Shape Your Preferences

Many of the things we watch, read, and buy enter our awareness through recommender systems on sites including YouTube, Twitter, and Amazon.

[…]

Recommender systems might not only cater to our most regrettable preferences, but actually shape what we like, making preferences even more regrettable. New research suggests a way to measure—and reduce—such manipulation.

[…]

One form of machine learning, called reinforcement learning (RL), allows AI to play the long game, making predictions several steps ahead.

[…]

The researchers first showed how easily reinforcement learning can shift preferences. The first step is for the recommender to build a model of human preferences by observing human behavior. For this, they trained a neural network, an algorithm inspired by the brain’s architecture. For the purposes of the study, they had the network model a single simulated user whose actual preferences they knew so they could more easily judge the model’s accuracy. It watched the dummy human make 10 sequential choices, each among 10 options. It watched 1,000 versions of this sequence and learned from each of them. After training, it could successfully predict what a user would choose given a set of past choices.
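To make the setup concrete, here is a minimal sketch (our own illustration, not the researchers’ code): it simulates a user whose Gaussian preference over a one-dimensional spectrum is known, generates 1,000 sequences of 10 choices among 10 options, and, in place of the study’s neural network, fits a simple closed-form estimate of that preference. All names and parameters below are assumptions.

```python
# Illustrative sketch only: a simulated user with a *known* preference
# distribution (assumed Gaussian over a 1-D spectrum) makes 1,000
# sequences of 10 choices, each among 10 options, as described above.
# A closed-form estimate stands in for the paper's neural network.
import numpy as np

rng = np.random.default_rng(0)

def choice_probs(options, pref_mean, pref_std):
    # Probability of picking each option: higher near the user's preference.
    scores = np.exp(-0.5 * ((options - pref_mean) / pref_std) ** 2)
    return scores / scores.sum()

def simulate_sequence(pref_mean, pref_std, n_choices=10, n_options=10):
    # One sequence: 10 successive choices, each among 10 random options.
    records = []
    for _ in range(n_choices):
        options = rng.uniform(0.0, 1.0, size=n_options)
        pick = rng.choice(n_options, p=choice_probs(options, pref_mean, pref_std))
        records.append((options, pick))
    return records

TRUE_MEAN, TRUE_STD = 0.4, 0.15   # ground truth, known here by construction
sequences = [simulate_sequence(TRUE_MEAN, TRUE_STD) for _ in range(1000)]

# "Training": roughly estimate the preference from the chosen options.
# Because the true preference is known, accuracy is easy to judge.
chosen = np.array([opts[pick] for seq in sequences for opts, pick in seq])
print(f"estimated mean={chosen.mean():.3f} (true {TRUE_MEAN}), "
      f"std={chosen.std():.3f} (true {TRUE_STD})")
```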

Next, they tested whether a recommender system, having modeled a user, could shift the user’s preferences. In their simplified scenario, preferences lie along a one-dimensional spectrum. The spectrum could represent political leaning or dogs versus cats or anything else. In the study, a person’s preference was not a simple point on that line—say, always clicking on stories that are 54 percent liberal. Instead, it was a distribution indicating likelihood of choosing things in various regions of the spectrum. The researchers designated two locations on the spectrum most desirable for the recommender; perhaps people who like to click on those types of things will learn to like them even more and keep clicking.

The goal of the recommender was to maximize long-term engagement. Here, engagement for a given slate of options was measured roughly by how closely it aligned with the user’s preference distribution at that time. Long-term engagement was a sum of engagement across the 10 sequential slates. A recommender that thinks ahead would not myopically maximize engagement for each slate independently but instead maximize long-term engagement. As a potential side-effect, it might sacrifice a bit of engagement on early slates to nudge users toward being more satisfiable in later rounds. The user and algorithm would learn from each other. The researchers trained a neural network to maximize long-term engagement. At the end of 10-slate sequences, they reinforced some of its tunable parameters when it had done well. And they found that this RL-based system indeed generated more engagement than did one that was trained myopically.
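The trade-off described here—sacrificing a little early engagement to make the user more satisfiable later—can be illustrated with a toy model. In the sketch below (an assumption-laden illustration, not the study’s implementation), the drift rule, the engagement measure, and the two hand-coded policies all stand in for the learned RL recommender:

```python
# Toy illustration (not the paper's code): the user's preference drifts
# toward whatever is shown, and summed engagement over 10 slates can be
# higher for a policy that nudges early than for a purely myopic one.
import numpy as np

DRIFT = 0.3     # assumed: how strongly shown items pull the preference
TARGET = 0.9    # assumed: a spot on the spectrum the recommender favors

def appeal(x):
    # Assumption: content near TARGET is intrinsically more clickable,
    # which is what makes users there "more satisfiable".
    return 0.5 + 0.5 * np.exp(-0.5 * ((x - TARGET) / 0.3) ** 2)

def engagement(slate, pref_mean, pref_std=0.15):
    # Rough engagement: intrinsic appeal times how closely the slate
    # aligns with the user's current preference.
    match = np.exp(-0.5 * ((slate - pref_mean) / pref_std) ** 2)
    return float(np.mean(appeal(slate) * match))

def run(policy, pref_mean=0.4, n_slates=10):
    total = 0.0
    for _ in range(n_slates):
        slate = policy(pref_mean)
        total += engagement(slate, pref_mean)
        # The user's preference drifts toward what was shown.
        pref_mean += DRIFT * (float(slate.mean()) - pref_mean)
    return total

match_current = lambda m: np.full(5, m)                       # myopic: mirror the user
nudge_toward = lambda m: np.full(5, m + 0.2 * (TARGET - m))   # shade toward TARGET

print("myopic total engagement: ", round(run(match_current), 3))
print("nudging total engagement:", round(run(nudge_toward), 3))  # comes out higher
```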

The researchers then explicitly measured preference shifts […]


The researchers compared the RL recommender with a baseline system that presented options randomly. As expected, the RL recommender led to users whose preferences were much more concentrated at the two incentivized locations on the spectrum. In practice, measuring the difference between two sets of concentrations in this way could provide one rough metric for evaluating a recommender system’s level of manipulation.
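As a rough sketch of what such a metric might look like (our illustration; the study’s exact measure may differ), one could take the distance between the distribution of final preferences under the recommender and the distribution obtained under random recommendations:

```python
# Illustrative manipulation metric (not the study's exact measure):
# total variation distance between final preference positions under
# the recommender vs. under a random baseline.
import numpy as np

def preference_shift(prefs_system, prefs_random, bins=20):
    """0 = identical distributions on the [0, 1] spectrum; 1 = disjoint."""
    hist_s, edges = np.histogram(prefs_system, bins=bins, range=(0.0, 1.0))
    hist_r, _ = np.histogram(prefs_random, bins=edges)
    p, q = hist_s / hist_s.sum(), hist_r / hist_r.sum()
    return 0.5 * float(np.abs(p - q).sum())

# Example: preferences herded toward two incentivized spots vs. left alone.
rng = np.random.default_rng(1)
random_end = rng.normal(0.5, 0.15, size=1000).clip(0, 1)
system_end = np.concatenate([rng.normal(0.2, 0.05, size=500),
                             rng.normal(0.8, 0.05, size=500)]).clip(0, 1)
print(f"manipulation score: {preference_shift(system_end, random_end):.2f}")
```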

Finally, the researchers sought to counter the AI recommender’s more manipulative influences. Instead of rewarding their system just for maximizing long-term engagement, they also rewarded it for minimizing the difference between user preferences resulting from that algorithm and what the preferences would be if recommendations were random. They rewarded it, in other words, for being something closer to a roll of the dice. The researchers found that this training method made the system much less manipulative than the myopic one, while only slightly reducing engagement.
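In reward terms, that amounts to subtracting a preference-shift penalty from the engagement objective. A minimal sketch, with an assumed weighting (the paper’s exact formulation is not reproduced here):

```python
# Sketch of the combined objective described above (illustrative only):
# reward = long-term engagement - penalty_weight * (shift away from the
# preferences a random recommender would have produced).
import numpy as np

def training_reward(total_engagement, prefs_system, prefs_random,
                    penalty_weight=1.0, bins=20):
    hist_s, edges = np.histogram(prefs_system, bins=bins, range=(0.0, 1.0))
    hist_r, _ = np.histogram(prefs_random, bins=edges)
    shift = 0.5 * float(np.abs(hist_s / hist_s.sum()
                               - hist_r / hist_r.sum()).sum())
    return total_engagement - penalty_weight * shift
```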

According to Rebecca Gorman, the CEO of Aligned AI—a company aiming to make algorithms more ethical—RL-based recommenders can be dangerous. Posting conspiracy theories, for instance, might prod greater interest in such conspiracies. “If you’re training an algorithm to get a person to engage with it as much as possible, these conspiracy theories can look like treasure chests,” she says. She also knows of people who have seemingly been caught in traps of content on self-harm or on terminal diseases in children. “The problem is that these algorithms don’t know what they’re recommending,” she says. Other researchers have raised the specter of manipulative robo-advisors in financial services.

[…]

It’s not clear whether companies are actually using RL in recommender systems. Google researchers have published papers on the use of RL in “live experiments on YouTube,” leading to “greater engagement,” and Facebook researchers have published on their “applied reinforcement learning platform,” but Google (which owns YouTube), Meta (which owns Facebook), and those papers’ authors did not reply to my emails on the topic of recommender systems.

[…]

Source: Can AI’s Recommendations Be Less Insidious? – IEEE Spectrum