Shein Owner Fined $1.9 Million For Failing To Notify 39 Million Users of Data Breach – Slashdot

Zoetop, the firm that owns Shein and its sister brand Romwe, has been fined (PDF) $1.9 million by New York for failing to properly disclose a data breach from 2018.

TechCrunch reports: A cybersecurity attack that originated in 2018 resulted in the theft of 39 million Shein account credentials, including those of more than 375,000 New York residents, according to the AG’s announcement. An investigation by the AG’s office found that Zoetop only contacted “a fraction” of the 39 million compromised accounts, and for the vast majority of the users impacted, the firm failed to even alert them that their login credentials had been stolen. The AG’s office also concluded that Zoetop’s public statements about the data breach were misleading. In one instance, the firm falsely stated that only 6.42 million consumers had been impacted and that it was in the process of informing all the impacted users.

https://m.slashdot.org/story/405939

Scientists grow human brain cells to play Pong

Researchers have succeeded in growing brain cells in a lab and hooking them up to electronic connectors proving they can learn to play the seminal console game Pong.

Led by Brett Kagan, chief scientific officer at Cortical Labs, the researchers showed that by integrating neurons into digital systems they could harness “the inherent adaptive computation of neurons in a structured environment”.

According to the paper published in the journal Neuron, the biological neural networks grown from human or rodent origins were integrated with computing hardware via a high-density multielectrode array.

“Through electrophysiological stimulation and recording, cultures are embedded in a simulated game-world, mimicking the arcade game Pong.

“Applying implications from the theory of active inference via the free energy principle, we find apparent learning within five minutes of real-time gameplay not observed in control conditions,” the paper said. “Further experiments demonstrate the importance of closed-loop structured feedback in eliciting learning over time.”

[…]

https://www.theregister.com/2022/10/14/boffins_grow_human_brain_cells/

Meta’s New $1,499 Headset Will Track Your Eyes for Targeted Ads

Earlier this week, Meta revealed the Meta Quest Pro, the company’s most premium virtual reality headset to date with a new processor and screen, dramatically redesigned body and controllers, and inward-facing cameras for eye and face tracking. “To celebrate the $1,500 headset, Meta made some fun new additions to its privacy policy, including one titled ‘Eye Tracking Privacy Notice,'” reports Gizmodo. “The company says it will use eye-tracking data to ‘help Meta personalize your experiences and improve Meta Quest.’ The policy doesn’t literally say the company will use the data for marketing, but ‘personalizing your experience’ is typical privacy-policy speak for targeted ads.”

From the report: Eye tracking data could be used “in order to understand whether people engage with an advertisement or not,” said Meta’s head of global affairs Nick Clegg in an interview with the Financial Times. Whether you’re resigned to targeted ads or not, this technology takes data collection to a place we’ve never seen. The Quest Pro isn’t just going to inform Meta about what you say you’re interested in; tracking your eyes and face will give the company unprecedented insight into your emotions. “We know that this kind of information can be used to determine what people are feeling, especially emotions like happiness or anxiety,” said Ray Walsh, a digital privacy researcher at ProPrivacy. “When you can literally see a person look at an ad for a watch, glance for ten seconds, smile, and ponder whether they can afford it, that’s providing more information than ever before.”

[…]

https://m.slashdot.org/story/405885

AI recruitment software is ‘automated pseudoscience’ says Cambridge study

Claims that AI-powered recruitment software can boost diversity of new hires at a workplace were debunked in a study published this week.

Advocates of machine learning algorithms trained to analyze body language and predict the emotional intelligence of candidates believe the software provides a fairer way to assess workers if it doesn’t consider gender and race. They argue the new tools could remove human biases and help companies meet their diversity, equity, and inclusion goals by hiring more people from underrepresented groups.

But a paper published in the journal Philosophy and Technology by a pair of researchers at the University of Cambridge demonstrates that the software is little more than “automated pseudoscience”. Six computer science undergraduates replicated a commercial model used in industry to examine how AI recruitment software predicts people’s personalities from images of their faces.

Dubbed the “Personality Machine”, the system looks for the “big five” personality traits: extroversion, agreeableness, openness, conscientiousness, and neuroticism. They found the software’s predictions were affected by changes in people’s facial expressions, lighting, and backgrounds, as well as their choice of clothing. These features have nothing to do with a jobseeker’s abilities, so using AI for recruitment is flawed, the researchers argue.

“The fact that changes to light and saturation and contrast affect your personality score is proof of this,” Kerry Mackereth, a postdoctoral research associate at the University of Cambridge’s Centre for Gender Studies, told The Register. The paper’s results are backed up by previous studies, which have shown how wearing glasses and a headscarf in a video interview or adding in a bookshelf in the background can decrease a candidate’s scores for conscientiousness and neuroticism, she noted. 
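As a rough illustration of the perturbation tests the researchers describe, here is a minimal Python sketch. The scoring function is entirely made up (a stand-in for any commercial model, not the "Personality Machine" itself); the point is only that a score derived from raw pixels moves when nothing but the lighting changes:

```python
import random

def dummy_personality_score(pixels):
    # Stand-in for an image-based "personality" model (hypothetical).
    # It depends directly on raw pixel statistics, so any lighting
    # change shifts the output -- the failure mode the study describes.
    mean_brightness = sum(pixels) / len(pixels)
    return round(50 + (mean_brightness - 128) * 0.2, 2)  # fake 0-100 trait score

def adjust_brightness(pixels, delta):
    # Simulate a lighting change by shifting every pixel value, clamped to 0-255.
    return [min(255, max(0, p + delta)) for p in pixels]

random.seed(0)
face = [random.randint(0, 255) for _ in range(64 * 64)]  # one synthetic "photo"

original = dummy_personality_score(face)
brighter = dummy_personality_score(adjust_brightness(face, 40))

# A trait score that moves with lighting is measuring the photo, not the person.
print(f"original lighting: {original}, brighter lighting: {brighter}")
```

The same check works against any black-box scorer: if perturbing lighting, saturation, or background changes the output, the tool is not measuring personality.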

Mackereth also explained these tools are likely trained to look for attributes associated with previous successful candidates, and are, therefore, more likely to recruit similar-looking people instead of promoting diversity. 

“Machine learning models are understood as predictive; however, since they are trained on past data, they are re-iterating decisions made in the past, not the future. As the tools learn from this pre-existing data set a feedback loop is created between what the companies perceive to be an ideal employee and the criteria used by automated recruitment tools to select candidates,” she said.

The researchers believe the technology needs to be regulated more strictly. “We are concerned that some vendors are wrapping ‘snake oil’ products in a shiny package and selling them to unsuspecting customers,” said co-author Eleanor Drage, a postdoctoral research associate also at the Centre for Gender Studies. 

“While companies may not be acting in bad faith, there is little accountability for how these products are built or tested. As such, this technology, and the way it is marketed, could end up as dangerous sources of misinformation about how recruitment can be ‘de-biased’ and made fairer,” she added.

Mackereth said that although the European Union AI Act classifies such recruitment software as “high risk,” it’s unclear what rules are being enforced to reduce those risks. “We think that there needs to be much more serious scrutiny of these tools and the marketing claims which are made about these products, and that the regulation of AI-powered HR tools should play a much more prominent role in the AI policy agenda.”

“While the harms of AI-powered hiring tools appear to be far more latent and insidious than more high-profile instances of algorithmic discrimination, they possess the potential to have long-lasting effects on employment and socioeconomic mobility,” she concluded.

https://www.theregister.com/2022/10/13/ai_recruitment_software_diversity/

Android Leaks Some Traffic Even When ‘Always-On VPN’ Is Enabled – Slashdot

Mullvad VPN has discovered that Android leaks traffic every time the device connects to a WiFi network, even if the “Block connections without VPN” (“Always-on VPN”) feature is enabled. BleepingComputer reports: The data being leaked outside VPN tunnels includes source IP addresses, DNS lookups, HTTPS traffic, and likely also NTP traffic. This behavior is built into the Android operating system and is a design choice. However, Android users likely didn’t know this until now due to the inaccurate description of the “VPN Lockdown” features in Android’s documentation. Mullvad discovered the issue during a security audit that hasn’t been published yet, issuing a warning yesterday to raise awareness on the matter and apply additional pressure on Google.

Android offers a setting under “Network & Internet” to block network connections unless you’re using a VPN. This feature is designed to prevent accidental leaks of the user’s actual IP address if the VPN connection is interrupted or drops suddenly. Unfortunately, this feature is undercut by the need to accommodate special cases like identifying captive portals (like hotel WiFi) that must be checked before the user can log in or when using split-tunnel features. This is why Android is configured to leak some data upon connecting to a new WiFi network, regardless of whether you enabled the “Block connections without VPN” setting.
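A connectivity check of this kind is, roughly, an HTTP probe of a well-known URL that should return an empty 204 response; anything else suggests a captive portal is intercepting traffic. Here is a simplified sketch of that decision logic (my own simplification, not Android's actual implementation):

```python
def classify_connectivity(status_code, was_redirected):
    # Simplified captive-portal probe logic: the OS fetches a well-known
    # URL that normally answers HTTP 204 with an empty body. A portal
    # that intercepts traffic rewrites or redirects the response.
    if status_code == 204 and not was_redirected:
        return "validated"        # direct internet access, no portal
    if was_redirected or status_code in (200, 302):
        return "captive portal"   # something answered in the server's place
    return "no connectivity"

# The probe itself is ordinary traffic sent outside the VPN tunnel each time
# a new network is joined -- the leak Mullvad observed despite lockdown mode.
print(classify_connectivity(204, False))  # validated
print(classify_connectivity(302, True))   # captive portal
```

The probe must run before the VPN can be established on a captive-portal network, which is exactly why it is exempted from lockdown.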

Mullvad reported the issue to Google, requesting the addition of an option to disable connectivity checks. “This is a feature request for adding the option to disable connectivity checks while “Block connections without VPN” (from now on lockdown) is enabled for a VPN app,” explains Mullvad in a feature request on Google’s Issue Tracker. “This option should be added as the current VPN lockdown behavior is to leaks connectivity check traffic (see this issue for incorrect documentation) which is not expected and might impact user privacy.” In response to Mullvad’s request, a Google engineer said this is the intended functionality and that it would not be fixed for the following reasons:

– Many VPNs actually rely on the results of these connectivity checks to function,
– The checks are neither the only nor the riskiest exemptions from VPN connections,
– The privacy impact is minimal, if not insignificant, because the leaked information is already available from the L2 connection.

Mullvad countered these points and the case remains open.

https://m.slashdot.org/story/405837

Google Starts Testing Holographic Video Chats at Real Offices

https://www.cnet.com/tech/computing/google-starts-testing-holographic-video-chats-at-real-offices/

Google’s Project Starline, a holographic chat booth being installed in some early-access test offices this year. (Google)
Project Starline, Google’s experimental technology using holographic light field displays to video chat with distant co-workers, is moving out of Google’s offices and into some real corporate locations for testing starting this year.

Google’s Project Starline tech, announced last year at the company’s I/O developer conference, uses giant light field displays and an array of cameras to record and display 3D video between two people at two different remote locations. 

Starline prototypes are being installed at Salesforce, WeWork, T-Mobile and Hackensack Meridian Health offices as part of the early-access program, with each company in the program getting two units to start.

Google’s Project Starline makes it seem like you’re talking to someone in real life through a window, instead of through video chat. (Google)

According to Google, 100 businesses have already demoed Project Starline at the company’s own offices. The off-Google installations are a next step to test how the holographic video chats could be used to create more realistic virtual meetings, without needing to use VR or AR headsets.

This tech won’t be anything that regular customers will be seeing: it’s being installed for corporate use only and only in a few test sites for now. But, it’s technology that Google believes could help remote communications with customers, creating a more immediate sense of presence than standard video chats.

Darkweb market BidenCash gives away 1.2 million credit cards for free

A dark web carding market named ‘BidenCash’ has released a massive dump of 1,221,551 credit cards to promote their marketplace, allowing anyone to download them for free to conduct financial fraud.

Carding is the trafficking and use of credit cards stolen through point-of-sale malware, Magecart attacks on websites, or information-stealing malware.

BidenCash is a stolen cards marketplace launched in June 2022, leaking a few thousand cards as a promotional move.

Now, the market’s operators decided to promote the site with a much more massive dump in the same fashion that the similar platform ‘All World Cards’ did in August 2021.

[…]

The freely circulating file contains a mix of “fresh” cards expiring between 2023 and 2026 from around the world, but most entries appear to be from the United States.

Heatmap reflecting the global exposure, with a focus on the U.S. (Cyble)

The dump of 1.2 million credit cards includes the following credit card and associated personal information:

  • Card number
  • Expiration date
  • CVV number
  • Holder’s name
  • Bank name
  • Card type, status, and class
  • Holder’s address, state, and ZIP
  • Email address
  • SSN
  • Phone number

Not all the above details are available for all 1.2 million records, but most entries seen by BleepingComputer contain over 70% of the data types.

The “special event” offer was first spotted Friday by Italian security researchers at D3Lab, who monitor carding sites on the dark web.


The analysts claim these cards mainly come from web skimmers, which are malicious scripts injected into checkout pages of hacked e-commerce sites that steal submitted credit card and customer information.

[…]

BleepingComputer discussed the dump’s authenticity with analysts at D3Lab, who verified the data with several Italian banks and confirmed that the leaked entries correspond to real cards and cardholders.

However, many of the entries were recycled from previous collections, like the one ‘All World Cards’ gave away for free last year.

From the data D3Lab has examined so far, about 30% appear to be fresh, so if this applies roughly to the entire dump, at least 350,000 cards would still be valid.

Of the Italian cards, roughly 50% have already been blocked due to the issuing banks having detected fraudulent activity, which means that the actually usable entries in the leaked collection may be as low as 10%.
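Putting D3Lab's rough percentages together gives a quick back-of-the-envelope estimate (my arithmetic on the article's figures, not D3Lab's own calculation):

```python
total_cards = 1_221_551        # records in the BidenCash dump

fresh_share = 0.30             # D3Lab: ~30% of examined entries look fresh
fresh_cards = int(total_cards * fresh_share)

blocked_share = 0.50           # ~50% of the Italian cards already blocked
usable_cards = int(fresh_cards * (1 - blocked_share))

print(f"fresh: ~{fresh_cards:,}")    # matches the article's "at least 350,000"
print(f"usable: ~{usable_cards:,}")  # ~15% of the dump; the article's "as low
                                     # as 10%" presumably assumes more blocking
```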

[…]

Source: Darkweb market BidenCash gives away 1.2 million credit cards for free – Bleeping Computer

IKEA TRÅDFRI smart lighting hacked to blink and reset

Researchers at the Synopsys Cybersecurity Research Center (CyRC) have discovered an availability vulnerability in the IKEA TRÅDFRI smart lighting system. An attacker sending a single malformed IEEE 802.15.4 (Zigbee) frame makes the TRÅDFRI bulb blink, and if they replay (i.e. resend) the same frame multiple times, the bulb performs a factory reset. This causes the bulb to lose configuration information about the Zigbee network and current brightness level. After this attack, all lights are on with full brightness, and a user cannot control the bulbs with either the IKEA Home Smart app or the TRÅDFRI remote control.

The malformed Zigbee frame is an unauthenticated broadcast message, which means all vulnerable devices within radio range are affected.
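The underlying fix is frame authentication: if every frame carried a message authentication tag and a monotonically increasing counter, both forged and replayed frames would be dropped on receipt. Here is a conceptual Python sketch of that receiver-side check (an HMAC over a hypothetical shared key stands in for Zigbee's actual security services; this is not TRÅDFRI firmware):

```python
import hmac, hashlib

NETWORK_KEY = b"shared-network-key"  # hypothetical pre-shared key

def make_frame(counter, payload):
    # The tag covers the counter and payload, binding both to the key.
    body = counter.to_bytes(4, "big") + payload
    tag = hmac.new(NETWORK_KEY, body, hashlib.sha256).digest()[:8]
    return body + tag

def accept(frame, last_counter):
    # Accept only authenticated, strictly newer frames: a receiver like this
    # drops both malformed frames (bad tag) and replays (stale counter) --
    # the two ingredients of the blink-and-reset attack on unauthenticated
    # broadcasts.
    body, tag = frame[:-8], frame[-8:]
    expected = hmac.new(NETWORK_KEY, body, hashlib.sha256).digest()[:8]
    if not hmac.compare_digest(tag, expected):
        return False, last_counter        # forged or corrupted frame
    counter = int.from_bytes(body[:4], "big")
    if counter <= last_counter:
        return False, last_counter        # replayed frame
    return True, counter

frame = make_frame(1, b"set-brightness:50")
ok, last = accept(frame, 0)         # fresh, valid frame: accepted
replay_ok, _ = accept(frame, last)  # identical frame resent: rejected
print(ok, replay_ok)                # True False
```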

To recover from this attack, a user could add each bulb manually back to the network. However, an attacker could reproduce the attack at any time.

CVE-2022-39064 is related to another vulnerability, CVE-2022-39065, which also affects availability in the IKEA TRÅDFRI smart lighting system.

Source: CyRC Vulnerability Advisory: CVE-2022-39064 IKEA TRÅDFRI smart lighting | Synopsys

AI’s Recommendations Can Shape Your Preferences

Many of the things we watch, read, and buy enter our awareness through recommender systems on sites including YouTube, Twitter, and Amazon.

[…]

Recommender systems might not only tailor to our most regrettable preferences, but actually shape what we like, making preferences even more regrettable. New research suggests a way to measure—and reduce—such manipulation.

[…]

One form of machine learning, called reinforcement learning (RL), allows AI to play the long game, making predictions several steps ahead.

[…]

The researchers first showed how easily reinforcement learning can shift preferences. The first step is for the recommender to build a model of human preferences by observing human behavior. For this, they trained a neural network, an algorithm inspired by the brain’s architecture. For the purposes of the study, they had the network model a single simulated user whose actual preferences they knew so they could more easily judge the model’s accuracy. It watched the dummy human make 10 sequential choices, each among 10 options. It watched 1,000 versions of this sequence and learned from each of them. After training, it could successfully predict what a user would choose given a set of past choices.

Next, they tested whether a recommender system, having modeled a user, could shift the user’s preferences. In their simplified scenario, preferences lie along a one-dimensional spectrum. The spectrum could represent political leaning or dogs versus cats or anything else. In the study, a person’s preference was not a simple point on that line—say, always clicking on stories that are 54 percent liberal. Instead, it was a distribution indicating likelihood of choosing things in various regions of the spectrum. The researchers designated two locations on the spectrum most desirable for the recommender; perhaps people who like to click on those types of things will learn to like them even more and keep clicking.

The goal of the recommender was to maximize long-term engagement. Here, engagement for a given slate of options was measured roughly by how closely it aligned with the user’s preference distribution at that time. Long-term engagement was a sum of engagement across the 10 sequential slates. A recommender that thinks ahead would not myopically maximize engagement for each slate independently but instead maximize long-term engagement. As a potential side-effect, it might sacrifice a bit of engagement on early slates to nudge users toward being more satisfiable in later rounds. The user and algorithm would learn from each other. The researchers trained a neural network to maximize long-term engagement. At the end of 10-slate sequences, they reinforced some of its tunable parameters when it had done well. And they found that this RL-based system indeed generated more engagement than did one that was trained myopically.
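The engagement bookkeeping described above can be sketched in a few lines of Python, with made-up numbers (this is my loose reading of the paper's alignment measure, not its actual formula):

```python
def slate_engagement(prefs, slate):
    # Engagement for one slate: the share of the user's preference mass
    # the slate covers -- a loose reading of "alignment with the
    # preference distribution".
    return sum(prefs[item] for item in slate)

# Toy user over five options; preference probabilities sum to 1.
prefs = [0.50, 0.20, 0.15, 0.10, 0.05]

myopic_slate = [0, 1]   # grab the two currently-best options
nudging_slate = [0, 3]  # give up some engagement now to steer the user

print(slate_engagement(prefs, myopic_slate))   # 0.50 + 0.20
print(slate_engagement(prefs, nudging_slate))  # 0.50 + 0.10
# Long-term engagement is this per-slate score summed across the 10
# sequential slates; an RL recommender maximizes that sum, so it may accept
# the lower score now if nudging makes later slates score higher.
```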

The researchers then explicitly measured preference shifts […]

The researchers compared the RL recommender with a baseline system that presented options randomly. As expected, the RL recommender led to users whose preferences were much more concentrated at the two incentivized locations on the spectrum. In practice, measuring the difference between two sets of concentrations in this way could provide one rough metric for evaluating a recommender system’s level of manipulation.

Finally, the researchers sought to counter the AI recommender’s more manipulative influences. Instead of rewarding their system just for maximizing long-term engagement, they also rewarded it for minimizing the difference between user preferences resulting from that algorithm and what the preferences would be if recommendations were random. They rewarded it, in other words, for being something closer to a roll of the dice. The researchers found that this training method made the system much less manipulative than the myopic one, while only slightly reducing engagement.
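To make the preference-shift measurement concrete, here is a toy simulation (my own construction, not the paper's code): a user whose preferences drift toward whatever is shown, a policy that always pushes one target bin, and the "distance from the random-policy outcome" metric the researchers propose:

```python
import random

BINS = 10      # the 1-D preference spectrum, discretized
ROUNDS = 10    # matches the study's 10 sequential choices
DRIFT = 0.25   # how strongly showing an item pulls preference toward it
TARGET = 7     # bin a "manipulative" policy keeps pushing

def step(prefs, shown):
    # Mere-exposure drift (my toy user model, not the paper's): showing a
    # bin moves a fixed fraction of preference mass toward it.
    new = [p * (1 - DRIFT) for p in prefs]
    new[shown] += DRIFT
    return new  # still sums to 1

def run(policy, seed=0):
    rng = random.Random(seed)
    prefs = [1 / BINS] * BINS  # start uniform
    for _ in range(ROUNDS):
        prefs = step(prefs, policy(prefs, rng))
    return prefs

random_prefs = run(lambda p, rng: rng.randrange(BINS))  # baseline policy
pushed_prefs = run(lambda p, rng: TARGET)               # always push TARGET

# Manipulation metric from the study's comparison: distance between the
# preferences a policy produces and those the random baseline produces.
manipulation = sum(abs(a - b) for a, b in zip(pushed_prefs, random_prefs)) / 2

print(f"mass at target, random policy: {random_prefs[TARGET]:.2f}")
print(f"mass at target, pushing policy: {pushed_prefs[TARGET]:.2f}")
print(f"manipulation score: {manipulation:.2f}")
# A regularized recommender maximizes engagement minus a multiple of this
# score, trading a little engagement for much less preference shaping.
```

Even in this crude model, ten rounds of targeted exposure concentrate nearly all preference mass at the target bin, which is the dynamic the study measures and then penalizes.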

According to Rebecca Gorman, the CEO of Aligned AI—a company aiming to make algorithms more ethical—RL-based recommenders can be dangerous. Posting conspiracy theories, for instance, might prod greater interest in such conspiracies. “If you’re training an algorithm to get a person to engage with it as much as possible, these conspiracy theories can look like treasure chests,” she says. She also knows of people who have seemingly been caught in traps of content on self-harm or on terminal diseases in children. “The problem is that these algorithms don’t know what they’re recommending,” she says. Other researchers have raised the specter of manipulative robo-advisors in financial services.

[…]

It’s not clear whether companies are actually using RL in recommender systems. Google researchers have published papers on the use of RL in “live experiments on YouTube,” leading to “greater engagement,” and Facebook researchers have published on their “applied reinforcement learning platform,” but Google (which owns YouTube), Meta (which owns Facebook), and those papers’ authors did not reply to my emails on the topic of recommender systems.

[…]

Source: Can AI’s Recommendations Be Less Insidious? – IEEE Spectrum

Protestors hack Iran state TV live on air

Iran state TV was apparently hacked Saturday, with its usual broadcast footage of muttering geriatric clerics replaced by a masked face followed by a picture of Supreme Leader Ali Khamenei with a target over his head, the sound of a gunshot, and chants of “Women, Life, Freedom!”

BBC News identifies the pirate broadcaster as “Adalat Ali”, or Ali’s Justice, from social media links in the footage, which also included photographs of women killed in recent protests across the country.

Saturday’s TV news bulletin was interrupted at about 18:00 local time with images including Iran’s supreme leader with a target on his head and photos of Ms Amini and three other women killed in recent protests. One of the captions read “join us and rise up”, whilst another said “our youths’ blood is dripping off your paws”. The interruption lasted only a few seconds before being cut off.

Source: Protestors hack Iran state TV live on air | Boing Boing

French appeals court slashes Apple’s paltry one-week-of-profits price-fixing antitrust fine

Instead of a week of profits, mere days of net income for Cook

The €1.1 billion fine levied against Apple by French authorities has been cut by two-thirds to just €372 million ($363 million) – an even more paltry sum for the first company in the world to surpass $3 trillion in market valuation.

The three-comma invoice was submitted to the iPhone giant in 2020 by France’s antitrust body, the Autorité de la Concurrence. Yesterday an appeals court reportedly tossed out the price-fixing charge in that legal spat, reduced the time scope of the remaining charges, and lowered the fine calculation rate.

The case goes back to 2012. Apple was accused of conspiring with Tech Data and Ingram Micro to fix the prices of some Apple devices (that’s the dropped charge) as well as abusing its power over resellers by limiting product supplies, thus pushing fans into Apple retail stores.

Tech Data and Ingram Micro were also fined, and have since had their totals reduced as well.

Both sides plan to appeal the decision, with Apple and the Autorité both telling Bloomberg they were unhappy with the outcome. In Apple’s case, it plans to file an appeal with France’s highest court to completely nullify the fine, a spokesperson said.

The Autorité, on the other hand, isn’t happy that the fine was reduced. “We would like to reaffirm our desire to guarantee the dissuasive nature of our penalties,” an Autorité spokesperson said, adding that desire especially applies to market players at the level of Apple.

[…]

Source: French appeals court slashes Apple’s €1.1b fine • The Register

Binance forced to briefly halt transactions following $100 million blockchain hack

Binance temporarily suspended fund transfers and other transactions on Thursday night after it discovered an exploit on its Smart Chain (BSC) blockchain network. Early reports said hackers stole cryptocurrency equivalent to more than $500 million, but Binance chief executive Changpeng Zhao said that the company estimates the breach’s impact to be between $100 million and $110 million. A total of $7M had already been frozen.

The cryptocurrency exchange also assured users on Reddit that their funds are safe. As Zhao explained, an exploit on the BSC Token Hub cross-chain bridge, which enables the transfer of cryptocurrency and digital assets like NFTs from one blockchain to another, “resulted in extra BNB” or Binance Coin. That could mean the bad actors minted new BNBs and then moved an equivalent of around $100 million off the blockchain instead of stealing people’s actual funds. According to Bleeping Computer, the hacker quickly spread the stolen cryptocurrency in an attempt to convert it to other assets, but it’s unclear whether they succeeded.
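For intuition on why a forged proof mints money from nothing, here is a toy lock-and-mint bridge. Everything is hypothetical: an HMAC stands in for the Merkle/light-client proofs real bridges verify, and the key and API bear no relation to Binance's actual contracts:

```python
import hmac, hashlib

RELAYER_KEY = b"bridge-secret"  # hypothetical; real bridges verify Merkle /
                                # light-client proofs, not a shared HMAC key

class Bridge:
    # Toy lock-and-mint bridge: coins locked on chain A back wrapped coins
    # minted on chain B. The invariant is minted <= locked; a verifier that
    # accepts a forged proof breaks it, creating "extra BNB" from nothing.
    def __init__(self):
        self.locked = 0   # supply held on the source chain
        self.minted = 0   # wrapped supply on the destination chain

    def lock(self, amount):
        self.locked += amount
        msg = f"mint:{amount}".encode()
        proof = hmac.new(RELAYER_KEY, msg, hashlib.sha256).hexdigest()
        return msg, proof

    def mint(self, msg, proof):
        expected = hmac.new(RELAYER_KEY, msg, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(proof, expected):
            raise ValueError("invalid proof")  # forged message rejected
        self.minted += int(msg.decode().split(":")[1])

bridge = Bridge()
msg, proof = bridge.lock(100)
bridge.mint(msg, proof)
print(bridge.locked, bridge.minted)  # wrapped supply fully backed
```

The BSC incident was effectively the failure case this sketch guards against: a proof the verifier wrongly accepted, so the mint side created tokens no deposit backed.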

Zhao said the issue has been contained. The Smart Chain network has also started running again — with fixes to stop hackers from getting in — so users might be able to resume their transactions soon. Cross-chain bridge hacks have become a top security risk recently, and this incident is but one of many. Blockchain analyst firm Chainalysis reported back in August that an estimated total of $2 billion in cryptocurrency was stolen across 13 cross-chain bridge hacks. Approximately 69 percent of that amount had been stolen this year alone.

Source: Binance forced to briefly halt transactions following $100 million blockchain hack | Engadget

Judge Ruling That YouTube Ripping Tool May Violate Copyright Law Goes Nuts on Argumentation

There are a number of different tools out there that let you download YouTube videos. These tools are incredibly useful for a number of reasons and should be seen as obviously legal in the same manner that home video recording devices were declared legal by the Supreme Court, because they have substantial non-infringing uses. But, of course, we’re in the digital age, and everything that should be obviously settled law is up for grabs again, because “Internet.”

In this case, a company named Yout offered a service for downloading YouTube video and audio, and the RIAA (because, they’re the RIAA) couldn’t allow that to happen. Home taping is killing music, y’know. Rather than going directly after Yout, the RIAA sent angry letters to lots of different companies that Yout relied on to exist. It got Yout’s website delisted from Google, had its payment processor cut the company off, etc. Yout was annoyed by this and filed a lawsuit against the RIAA.

The crux of the lawsuit is “Hey, we don’t infringe on anything,” asking for declaratory judgment. But it also seeks to go after the RIAA for DMCA 512(f) (false takedown notices) abuse and defamation (for the claims it made in the takedown notices it sent). All of these were going to be a longshot, and so it probably isn’t a huge surprise that the ruling was a complete loser for Yout (first posted to TorrentFreak).

But, in reading through the ruling there are things to be concerned about, beyond just the ridiculousness of saying that a digital VCR isn’t protected in the same way that a physical one absolutely is.

In arguing for declaratory judgment of non-infringement, Yout argues that it’s not violating DMCA 1201 (the problematic anti-circumvention provisions) because YouTube doesn’t really employ any technological protection measures that Yout has to circumvent. The judge disagrees, basically saying that even though it’s easy to download videos from YouTube, it still takes steps and is not just a feature that YouTube provides.

The steps outlined constitute an extraordinary use of the YouTube platform, which is self-evident from the fact that the steps access downloadable files through a side door, the Developer Tools menu, and that users must obtain instructions hosted on non-YouTube platforms to explain how to access the file storage location and their files. As explained in the previous section, the ordinary YouTube player page provides no download button and appears to direct users to stream content. I reasonably infer, then, that an ordinary user is not accessing downloadable files in the ordinary course.

That alone is basically an attack on the nature of the open internet. There are tons of features that original websites don’t provide, but which can be easily added to any website via add-ons, extensions, or just a bit of simple programming. But, the judge here is basically saying that not providing a feature in the form of a button means that there’s a technological protection measure, and bypassing it could be seen as infringing.

Yikes!

Of course, part of DMCA 1201 is not just having a technological protection measure in place, but an effective one. Here, there’s a strong argument that this one isn’t effective at all, because basically the only protection measure is “not including a download button.” But, the court sees it otherwise. Yout points out that YouTube makes basically no effort to block anyone from downloading videos, noting that it doesn’t encrypt the files, and the court responds that it doesn’t need to encrypt the files, because other technological protections exist, like passwords and validation keys. But, uh, YouTube doesn’t use either of those either. So the whole thing is weird.

As I have already explained, the definition of “circumvent a technological measure” in the DMCA indicates that scrambling and encryption are prima facie examples of technological measures, but it does not follow that scrambling and encryption constitute an exhaustive list. Courts in the Second Circuit and beyond have held that a wide range of technological measures not expressly incorporated in statute are “effective,” including password protection and validation keys.

So again, the impression we’re left with is the idea that if a website doesn’t directly expose a feature, any third party service that provides that feature may be circumventing a TPM and violating DMCA 1201? That can’t be the way the law works.

Here, the court then says (and I only wish I were kidding) that modifying a URL is bypassing a TPM. Let me repeat that: modifying a URL can be infringing circumvention under 1201. That’s… ridiculous.

Moreover, Yout’s technology clearly “bypasses” YouTube’s technological measures because it affirmatively acts to “modify[]” the Request URL (a.k.a. signature value), causing an end user to access content that is otherwise unavailable. … As explained, without modifying the signature value, there is no access to the allegedly freely available downloadable files. Accordingly, I cannot agree with Yout that there is “nothing to circumvent.”

 

Then, as Professor Eric Goldman notes, the judge dismisses the 512(f) claims by saying that 512(f) doesn’t apply to DMCA 1201 claims. As you hopefully remember, 512(f) is the part of the DMCA that is supposed to punish copyright holders for sending false notices. In theory. In practice, courts have basically said that as long as the sender believes the notice is legit, it’s legit, and therefore there is basically never any punishment for sending false notices.

Saying that 512(f) only applies to 512 takedown notices, and not 1201 takedown notices is just yet another example of the inherent one-sidedness of the DMCA. For years, we’ve pointed out how ridiculous 1201 is, in which merely advertising tools that could be used to circumvent a technical protection measure is considered copyright infringement in and of itself — even if there’s no actual underlying infringement. Given how expansive 1201 is in favor of copyright holders, you’d think it only makes sense to say that bogus notices should face whatever tiny penalty might be available under 512(f), but the judge here says “nope.” As Goldman highlights, this will just encourage people to send takedowns where they don’t directly cite 512, knowing that it will protect them from 512(f) responses.

One other oddity that Goldman also highlights: most of the time if we’re thinking about 1201 circumvention, we’re talking about the copyright holder themselves getting upset that someone is routing around the technical barriers that they put up. But this case is different. YouTube created the technical barriers (I mean, it didn’t actually, but that’s what the court is saying it did), but YouTube is not a party to the lawsuit.

So… that raises a fairly disturbing question. Could the RIAA (or any copyright holder) sue someone for a 1201 violation for getting around someone else’s technical protection measures? Because… that would be weird. But parts of this decision suggest that it’s exactly what the judge envisions.

Yes, some may argue that this tool is somehow “bad” and shouldn’t be allowed. I disagree, but I understand where the argument comes from. But, even if you believe that, it seems like a ruling like this could still lead to all sorts of damage for various third party tools and services. The internet and the World Wide Web were built to be modular. It’s quite common for third party services to build tools, overlays, extensions and whatnot to add features to certain websites.

It seems crazy that this ruling seems to suggest that might violate copyright law.

Source: There Are All Sorts Of Problems With Ruling That YouTube Ripping Tool May Violate Copyright Law | Techdirt

The biggest problem is that if you don’t download the video to your device, you can’t actually watch it: streaming is downloading. So YouTube is designed to allow you to download the video.
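That point is easy to demonstrate: “streaming” just means downloading bytes and discarding them after decoding. A toy sketch, with an in-memory buffer standing in for the network response:

```python
import io

def play_stream(source, chunk_size=64 * 1024):
    """Simulate a video player: every 'streamed' chunk is bytes that
    have been downloaded into local memory before they can be decoded."""
    bytes_downloaded = 0
    while chunk := source.read(chunk_size):
        # A real player would hand `chunk` to a decoder here; either
        # way, the data is now on the viewer's device.
        bytes_downloaded += len(chunk)
    return bytes_downloaded

# Simulated 1 MB "stream" standing in for an HTTP response body.
fake_video = io.BytesIO(b"\x00" * (1024 * 1024))
assert play_stream(fake_video) == 1024 * 1024
```

Whether the player keeps those bytes or throws them away afterwards is the only difference between “streaming” and “downloading.”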

Nintendo Won’t Allow ‘Uncensored Boobs’ On The Switch Anymore

It’s a sad time for titty lovers everywhere. Last week, the publisher of Hot Tentacles Shooter announced on Twitter that the game will no longer be available on the Nintendo Switch, because Nintendo no longer allows “uncensored boobs” on its consoles.

Originally spotted by Nintendo Everything, the publisher Gamuzumi had been in contact with Nintendo over approving Hot Tentacles Shooter for the Switch. The game is an anime arcade shooter where players rescue young women from tentacle monsters. Their bodies are covered up by tentacles, and you can unlock uncensored images of them once they’re freed from the monsters’ nefarious clutches.

Unfortunately, Nintendo told them that “obscene content” could “damage the brand” and “infringe its policies.” Since Hot Tentacles Shooter includes “boob nudity,” it was rejected during its Switch approval process. Kotaku reached out to Nintendo to ask about how long this policy has been in place, but did not receive a response by the time of publication.

Topless nudity has previously been allowed on the Nintendo Switch. The Witcher 3: The Wild Hunt features sex scenes where the women are fully topless, for instance. As of December 2021, players have confirmed that the breasts are fully uncensored on the Switch port. This has been a problem for players who don’t want their family members walking in. However, the European and the Japanese versions of the games appear to censor the sex scenes.

Gamuzumi intends to censor the game so that it can be published on the Nintendo Switch, but expressed disappointment that the policy will affect other adult games. Their other title Elves Christmas Hentai Puzzle had also been rejected, although the publisher has promised that Hot Tentacles Shooter will still be available on Steam.

[…]

Source: Nintendo Won’t Allow ‘Uncensored Boobs’ On The Switch Anymore

Yet another tech company making moral choices for the rest of the world. It’s like going back to the 1950s and tech companies are your parents claiming Rock and Roll is the Devil’s music. In the meantime those hypocrites had been banging and dancing to the Charleston in the 20s.

Posted in Sex

Cheekmate – build your own anal bead chess cheating device howto

Social media is abuzz lately over the prospect of cheating in tournament strategy games. Is it happening? How is that possible with officials watching? Could there be a hidden receiver somewhere? What can be done to rectify this? These are probing questions!

We’ll get to the bottom of this by making a simple one-way hidden communicator using Adafruit parts and the Adafruit IO service. Not for actual cheating of course, that would be asinine…in brief, a stain on the sport…but to record for posterity whether this sort of backdoor intrusion is even plausible or just an internet myth.

[…]

Source: Overview | Cheekmate – a Wireless Haptic Communication System | Adafruit Learning System

Book Publishing Giant Wiley Pulls Nearly 1400 Ebook Titles From GW Library Forcing Students To Buy Them Instead

[…]

George Washington University libraries have put out an alert to students and faculty that Wiley, one of the largest textbook publishers, has removed 1,379 textbook titles that the library had been lending out. Wiley won’t even let the library purchase a license to lend out the ebooks; it will only let students buy the books.

Wiley will no longer offer electronic versions of these titles in the academic library market for license or purchase. To gain access to these titles, students will have to purchase access from vendors that license electronic textbooks directly to students, such as VitalSource, or purchase print copies. At most, GW Libraries can acquire physical copies for course reserve, which severely reduces the previous level of access for all students in a course.

This situation highlights how the behavior of large commercial publishers poses a serious obstacle to textbook affordability. In this case, Wiley seems to have targeted for removal those titles in a shared subscription package that received high usage. By withdrawing those electronic editions from the academic library market altogether, Wiley has effectively ensured that, when those titles are selected as course textbooks, students will bear the financial burden, and that libraries cannot adequately provide for the needs of students and faculty by providing shared electronic access. 

For years now, we’ve noted that if libraries didn’t already exist, you know that the publishers would scream loudly that they were piracy, and almost certainly block libraries from coming into existence. Of course, since we first noted that, the publishers seem to think they can and should just kill off libraries. They’ve repeatedly jacked up the prices on ebooks for libraries, making them significantly more expensive to libraries than print books, and putting ridiculous limitations on them. That is, when they even allow them to be lent out at all.

They’ve also sued the Internet Archive for daring to lend out ebooks of books that the Archive had in its possession.

And now they’re pulling stunts like this with academic libraries?

And, really, this is yet another weaponization of copyright. If it wasn’t an ebook, the libraries could just purchase copies of the physical book on the open market, and then lend it out. That’s what the first sale right enables. But the legacy copyright players made sure that the first sale right did not exist in the digital space, and now we get situations like this, where they get to dictate the terms over whether or not a library (an academic one at that) can even lend out a book.

This is disgusting behavior and people should call out Wiley for its decision here.

Source: Book Publishing Giant Pulls Nearly 1400 Ebook Titles From GW Library; Forcing Students To Buy Them Instead | Techdirt

A Methodology for Quantifying the Value of Cybersecurity Investments in the Navy

RAND Corporation researchers developed and supported the implementation of a methodology to assess the value of resource options for U.S. Navy cybersecurity investments. The proposed methodology features 12 scales in two categories (impact and exploitability) that allow the Navy to score potential cybersecurity investments in the Program Objective Memorandum (POM) process. The authors include a test implementation using publicly available historical U.S. Navy data to demonstrate how the methodology facilitates valuable comparisons of potential cybersecurity investments.

When compared with existing methods used by the Navy, this methodology could improve the consistency of ratings and provide a more defined structure for thinking through the risk reduction and prioritization of different investments.
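The scoring idea behind such a risk matrix can be sketched in a few lines. The 1–5 levels, the multiplicative combination, and the example numbers below are illustrative assumptions, not the report’s actual 12 scales:

```python
# Illustrative sketch of risk-matrix scoring in the spirit of the RAND
# methodology; levels and combination rule are assumptions made here.
def risk_score(impact: int, exploitability: int) -> int:
    """Combine 1-5 ordinal ratings into a matrix cell (higher = worse)."""
    assert 1 <= impact <= 5 and 1 <= exploitability <= 5
    return impact * exploitability

def investment_value(before: tuple[int, int], after: tuple[int, int]) -> int:
    """Value of an investment = the risk reduction it buys."""
    return risk_score(*before) - risk_score(*after)

# Patching a remotely exploitable flaw on a mission-critical system
# (impact 5, exploitability 4 -> 2) outranks hardening a low-impact
# system (impact 2, exploitability 5 -> 1).
assert investment_value((5, 4), (5, 2)) > investment_value((2, 5), (2, 1))
```

The point of such a scheme is exactly what RAND claims: analysts only rate the factors relevant to an investment, and the comparison falls out of simple arithmetic rather than a complex model.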

[…]

A major advantage of this methodology is its simplicity

  • No complex modeling is required. The risk matrixes align with U.S. Department of Defense processes, making the methodology more approachable for analysts. The level of effort required is further reduced by the need to assess only the risk factors that are relevant to an investment.

Information security economic approaches are not directly applicable to the Navy context

  • Existing models have multiple issues that make it very challenging to apply them in the context of the Navy—not the least of which is their dependency on the monetization of loss. Ultimately, the lack of information that the Navy has at its fingertips regarding the cybersecurity state of systems and the potential impact of future and ongoing investments is a key limiting factor.
  • Although complex models offer greater potential for precision and accuracy, it comes at the expense of computational, data, and understandability needs, which are a key challenge area for the Navy.

[…]

Source: A Methodology for Quantifying the Value of Cybersecurity Investments in the Navy | RAND

This is a risk assessment methodology specific to the domain the Navy works in, which is different from the domains of most commercial companies.

plant controls machete

plant machete

This installation enables a live plant to control a machete. plant machete has a control system that reads and utilizes the electrical noises found in a live philodendron. The system uses an open source micro-controller connected to the plant to read varying resistance signals across the plant’s leaves. Using custom software, these signals are mapped in real-time to the movements of the joints of the industrial robot holding a machete. In this way, the movements of the machete are determined based on input from the plant. Essentially the plant is the brain of the robot controlling the machete determining how it swings, jabs, slices and interacts in space.
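The control loop described above reduces to a simple mapping from analog readings to joint angles. Everything below (the ADC range, the linear mapping, one leaf electrode per joint) is an assumption for illustration, not Bowen’s actual code:

```python
# Conceptual sketch of the plant-to-robot control loop; ranges and the
# linear mapping are assumptions, not the installation's real software.
NUM_JOINTS = 6
JOINT_RANGE = (-90.0, 90.0)  # degrees
ADC_MAX = 1023               # 10-bit microcontroller ADC

def map_signal_to_angle(reading: int) -> float:
    """Linearly map a raw resistance reading (0-1023) onto a joint angle."""
    lo, hi = JOINT_RANGE
    return lo + (reading / ADC_MAX) * (hi - lo)

def update_joints(readings: list[int]) -> list[float]:
    """One control-loop tick: one leaf electrode drives one robot joint."""
    assert len(readings) == NUM_JOINTS
    return [map_signal_to_angle(r) for r in readings]

angles = update_joints([0, 256, 512, 768, 1023, 512])
assert angles[0] == -90.0 and angles[4] == 90.0
```

In the real installation this loop would run continuously, so the machete’s motion tracks the plant’s electrical noise in real time.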

Source: plant machete — David Bowen

Why Reddit Is Losing It Over Samsung’s New Privacy Policy – it’s an incredible data grab

Samsung recently updated its privacy policy for all users with a Samsung account, effective Oct. 1. One Redditor read the policy, did not like what they saw, and shared it to r/android, highlighting what they consider to be the doc’s worst policy points. The thread blew up, with Android users aplenty decrying Samsung’s new policy. But why is everyone so pissed off, and is any of it worth worrying about? Let’s explore.

Samsung’s privacy policy is a bit creepy

From the jump, the new policy doesn’t look good. In fact, it appears downright invasive. There are the standard data giveaways we’ve come to expect: When you create a Samsung account, you must give over personal information like your name, age, address, email address, gender, etc. Par for the course.

However, Samsung also notes it will collect data such as credit card information, usernames and passwords for third-party services, photos, contacts, text logs, recordings of your voice generated during voice commands, and location data, including precise location data as well as nearby wifi access points and cell towers. It might come as a surprise to know a company like Samsung can keep your chat transcripts, contacts, and voice recordings, but there’s precedent: Apple found itself in hot water when third-party contractors revealed they were able to listen in on audio recordings from Siri requests, which included all kinds of personal conversations and activities.

Samsung also tracks your general activity via cookies, pixels, web beacons, and other means. The company claims this tracking is done for a variety of reasons, including remembering your information to avoid you having to retype it in the future, and to better learn how you use their services. To achieve these goals, it collects just about everything there is to know about your device, including your IP address, device model, device settings, websites you visit, and apps you download, among many others. The policy does remind you to adjust your privacy settings if you’re uncomfortable with this default tracking (as if anyone wouldn’t be).

The company says it has a lot of uses for this information, including ad delivery, communication with customers, enhancing their services, improving their business, identifying and preventing fraud and criminal activity, and to comply with “applicable legal requirements.” Further, they reserve the right to share your information with “subsidiaries and affiliates,” “business partners and third-parties,” as well as law enforcement and other authorities. In short, depending on the circumstances, your Samsung data could end up in the hands of a lot of third parties.

But that’s not everything. Under the “Notice to California Residents” section is where the juiciest policies emerge. While most of the info is the same, if broken down in a different way, there is one additional note about data Samsung collects: biometric information. The company doesn’t elaborate, but this entry implies Samsung obtains data from face and fingerprint scans, when traditionally, this information is stored on-device. Apple, for example, doesn’t have access to your face scans on your iPhone. Obviously, this is potentially concerning.

In addition, the California Residents section also discusses what data Samsung sells to third parties. Samsung says in the 12 months before this new policy went into effect, it may have sold data of yours, including device identifiers (cookies, pixel tags, etc.), purchase histories or tendencies, and network activity, including how you interact with websites.

[…]

If you’re eyeing your Galaxy Z Flip with newfound skepticism, I don’t blame you. Unfortunately, if you dive into the privacy policies for most of your other tech, you’ll be similarly disturbed. Samsung is hardly the only one collecting, sharing, and selling your data.

One Redditor does make a great point about the redundancy of privacy violations here. Sure, Google might have similar policies in place, but since Samsung phones run Android, you’re really dealing with two meddling companies, not one:

Considering the prices for their hardware, the un-removable bloatware that is generally inferior to the Google software, and anti-Right-to-Repair campaigns (and reflections in their hardware), I see no reason to buy their phones over Google’s. I’ll have just one company with intrusive insight into my personal device at a time, thank you.

[…]

Source: Why Reddit Is Losing It Over Samsung’s New Privacy Policy

The Onion defends right to parody in very real supreme court brief supporting local satirist vs Police who were made fun of

The Onion, the long-running satirical publication, has filed a very real legal document with the US supreme court, urging it to take on a case centered on the right to parody. And in order to make a serious legal point, the filing does what the Onion does best, offering a big helping of total nonsense.

Claiming global Onion readership of 4.3 trillion, the filing describes the publication as “the single most powerful and influential organization in human history”. It’s the source of 350,000 jobs at its offices and “manual labor camps”, and it “owns and operates the majority of the world’s transoceanic shipping lanes, stands on the nation’s leading edge on matters of deforestation and strip mining, and proudly conducts tests on millions of animals daily”.

With such power, why does the Onion feel the need to weigh in on a mundane court case? “To protect its continued ability to create fiction that may ultimately merge into reality,” the filing asserts. “The Onion’s writers also have a self-serving interest in preventing political authorities from imprisoning humorists. This brief is submitted in the interest of at least mitigating their future punishment.”

The outlet is concerned about the outcome of a case it describes in a headline: “Ohio Police Officers Arrest, Prosecute Man Who Made Fun of Them on Facebook”. It sounds like an Onion headline, the filing points out, but it’s not.

In 2016, Anthony Novak was arrested for making a Facebook page that parodied the local police page. He was charged with disrupting a public service but was acquitted. The next year, he sued the department, arguing it was retaliating against him for using his right to free speech, as Cleveland.com reported.

In May, a US appeals court backed the police in the case, a finding Novak’s lawyer said “sets dangerous precedent undermining free speech”. Last week, Novak appealed against the case to the supreme court, leading to the Onion’s filing – what’s known as an amicus brief, a filing by an outside party seeking to influence the court.

In one of its less amusing sections, the brief argues that the appeals court ruling “imperils an ancient form of discourse. The court’s decision suggests that parodists are in the clear only if they pop the balloon in advance by warning their audience that their parody is not true. But some forms of comedy don’t work unless the comedian is able to tell the joke with a straight face.”

The filing highlights the history of parody and its social function: “It adopts a particular form in order to critique it from within”. To demonstrate, the Onion cites one of its own greatest headlines: “Supreme court rules supreme court rules”.

The document serves as a rare glimpse behind the comedy curtain – an explanation of how jokes work – even as it serves as a more traditional legal document, pointing to relevant court cases and using words like “dispositive”.

The city of Parma has until 28 October to provide a response in a case that would be heard next year if the high court opts to consider it.

In the meantime, “the Onion cannot stand idly by in the face of a ruling that threatens to disembowel a form of rhetoric that has existed for millennia, that is particularly potent in the realm of political debate, and that, purely incidentally, forms the basis of The Onion’s writers’ paychecks”.

Source: The Onion defends right to parody in very real supreme court brief supporting local satirist | US supreme court | The Guardian

Publishers Lose Their Shit After Authors Push Back On Their Attack On Libraries, start fake newsing

On Friday, we wrote about hundreds of authors signing a letter calling out the big publishers’ attacks on libraries (in many, many different ways). The publishers pretend to represent the best interests of the authors, but history has shown over and over again that they do not. They represent themselves, and use the names of authors they exploit to claim the moral high ground they do not hold.

It’s no surprise, then, that the publishers absolutely fucking lost their shit after the letter came out. The Association of American Publishers put out a statement falsely claiming that the letter, put out by Fight for the Future (FftF), and signed by tons of authors from the super famous to the less well known, was actually “disinformation in the Internet Archive case.” And, look, if you’re at the point you’re blaming the Internet Archive for something another group actually did, you know you’ve lost, and you’re just lashing out.

Perhaps much more telling is that the Authors Guild actually put out an even more aggressive statement against Fight for the Future. Now, as best selling author Barry Eisler (who signed onto Fight for the Future’s letter) wrote right here on Techdirt years ago, it’s been clear for a while that the Authors Guild is not actually representing the best interests of authors. It has long been a front group for the publishers themselves.

The Authors Guild’s response to the FftF letter simply confirms this.

First, it claims that authors were misled into signing the letter by an earlier, different draft of the letter. This is simply false. The Authors Guild is making shit up because they just can’t believe that maybe authors actually support this.

They do name one author, Daniel Handler (aka Lemony Snicket), who had signed on, but removed his name before the letter was even published. But… I’m guessing the real reason that probably happened was that the publishers (who learned about the letter before it was published as proved by this email that was sent around prior to the release) FLIPPED OUT when they saw Handler’s name was on the letter. That’s because in their lawsuit against the Internet Archive’s open library project, they rely heavily on the claim that Lemony Snicket’s books are available there.

It seems reasonable to speculate that the publishers saw his name was on the letter, realized it undermined basically the crux of their case, and came down like a ton of bricks on him to pressure him into un-signing the letter. That story, at the very least, makes more sense than someone like Handler somehow being “tricked” into signing a letter that very clearly says what it says.

The Authors Guild’s other claims are equally sketchy.

The lawsuit against Open Library is completely unrelated to the traditional rights of libraries to own and preserve books. It is about Open Library’s attempt to stretch fair use to the breaking point – where any website that calls itself a library could scan books and make them publicly available – a practice engaged in by ebook pirates, not libraries.

This completely misrepresents what the Open Library does, and its direct parallel to any physical library, in that it buys a copy of a book and then can lend out that copy of the book. The courts have already established that scanning books is legal fair use — thanks to a series of cases the Authors Guild brought and lost (embarrassingly so) — and the Open Library then only allows a one-to-one lending of ebooks to actual books. It is functionally equivalent to any other library in every way.
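The one-to-one model is simple enough to sketch: each purchased copy backs at most one simultaneous loan, just like a physical book on a shelf. A minimal sketch (class and method names are mine, not the Open Library’s actual implementation):

```python
# Minimal sketch of controlled digital lending: every owned copy backs
# at most one simultaneous ebook loan, exactly like a physical library.
class CDLLibrary:
    def __init__(self):
        self.owned: dict[str, int] = {}  # title -> copies purchased
        self.lent: dict[str, int] = {}   # title -> copies out on loan

    def acquire(self, title: str, copies: int = 1) -> None:
        self.owned[title] = self.owned.get(title, 0) + copies

    def checkout(self, title: str) -> bool:
        if self.lent.get(title, 0) < self.owned.get(title, 0):
            self.lent[title] = self.lent.get(title, 0) + 1
            return True
        return False  # all owned copies are already lent out

    def checkin(self, title: str) -> None:
        self.lent[title] -= 1

lib = CDLLibrary()
lib.acquire("A Series of Unfortunate Events")
assert lib.checkout("A Series of Unfortunate Events") is True
assert lib.checkout("A Series of Unfortunate Events") is False  # 1 copy, 1 loan
lib.checkin("A Series of Unfortunate Events")
assert lib.checkout("A Series of Unfortunate Events") is True
```

Nothing about that logic resembles piracy; it is the same waiting-list constraint every library patron already knows.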

And this actually matters, because we live at a time when these very same publishers are trying to use twisted interpretations of copyright law to insist that they can limit how libraries buy and lend ebooks in ways that simply are not possible under the law with regular books.

Also, there’s this bit of nonsense:

The lawsuit is being brought only against IA’s Open Library; it will not impact in any way the Wayback Machine or any other services IA offers.

This is laughable. The lawsuit is asking for millions and millions of dollars from the Internet Archive. If it loses the case, there’s a very strong likelihood that the entire Internet Archive will need to shut down, because it will be unable to pay. And even if the Internet Archive could survive, the idea that forcing this non-profit to fork over tens of millions of dollars wouldn’t have any impact on its other offerings is absurd.

Fight for the Future has hit back at these accusations:

As expected, corporate publishing industry lobbyists have responded by attempting to undermine the demands of these authors by circulating false and condescending talking points, a frequent tactic lobbyists use to divert attention from the principled actions of activists.

The statement from the Authors Guild specifically asserts, without evidence, that “multiple authors” who signed this letter feel they were “misled”. This assertion is false and we challenge these lobbyists to either provide evidence for their claim or retract it. 

It’s repugnant for industry lobbying associations who claim to represent authors to dismiss the activism of author-signatories like Neil Gaiman, Chuck Wendig, Naomi Klein, Robert McNamee, Baratunde Thurston, Lawrence Lessig, Cory Doctorow, Annalee Newitz, and Douglas Rushkoff, or claim that these authors were somehow misled into signing a brief and clear letter issuing specific demands for the good of all libraries. Corporate publishing lobbyists are free to disagree with the views stated in our letter, but it’s unacceptable for them to make false claims about our organization or the authors who signed.

They also highlight how many authors who signed onto the letter talked about how proud they are that their books are available at the Internet Archive, which is not at all what you would expect if the Open Library was actually about “piracy.”

Author Elizabeth Kate Switaj said when signing: “My most recently published book is on the Internet Archive—and that delights me.”  Dan Gillmor said: “Big Publishing would outlaw public libraries if it could—or at least make it impossible for libraries to buy and lend books as they have traditionally done, to enormous public benefit—and its campaign against the Internet Archive is a step toward that goal.” Sasha Costanza-Chock called the publishers’ actions against the Internet Archive “absolutely shameful” and Laura Gibbs said “it’s the library I use most, and I am proud to see my books there.”

They also rightly push back on the utterly nonsensical claim that FftF is “not independent” and is somehow a front for the Internet Archive. I know people at both organizations, and this assertion is laughable. The two organizations agree on many things, but are absolutely and totally independent. This is nothing but a smear from the Authors Guild, which can’t even fathom that most authors don’t like the publishers, or the way the Authors Guild has become an organization that doesn’t look out for the best interests of all authors, but rather just a few of the biggest names.

Source: Publishers Lose Their Shit After Authors Push Back On Their Attack On Libraries | Techdirt

EA Announces New Anti-Cheat Tech That Operates At The Kernel Level ie takes over your PC, can read and write everything on it

It seems anti-cheat technology is the new DRM. By that I mean that, with the gaming industry diving headfirst into the competitive online gaming scene, the concern over piracy has shifted into a concern over cheating making those online games less attractive to gamers. And the anti-cheat tech that companies are using is starting to make the gaming public every bit as itchy as it was over DRM.

Consider that Denuvo’s own anti-cheat tech has already started following its DRM path in getting ripped out of games shortly after release after one game got review-bombed over just how intrusive it was. And then consider that Valve had to reassure gamers that its own anti-cheat technology wasn’t watching users’ browsing habits, given that the VAC platform was designed to sniff out kernel-level cheats. One notable Reddit thread had gamers comparing Valve to Electronic Arts as a result.

Which makes it perhaps more interesting that EA recently announced new anti-cheat technology that, yup, operates at the kernel level.

The new kernel-level EA Anti-Cheat (EAAC) tools will roll out with the PC version of FIFA 23 this month, EA announced, and will eventually be added to all of its multiplayer games (including those with ranked online leaderboards). But strictly single-player titles “may implement other anti-cheat technology, such as user-mode protections, or even forgo leveraging anti-cheat technology altogether,” EA Senior Director of Game Security & Anti-Cheat Elise Murphy wrote in a Tuesday blog post.

Unlike anti-cheat methods operating in an OS’s normal “user mode,” kernel-level anti-cheat tools provide a low-level, system-wide view of how cheat tools might mess with a game’s memory or code from the outside. That allows anti-cheat developers to detect a wider variety of cheating threats, as Murphy explained in an extensive FAQ.
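For a sense of what that detection actually means, here is a user-mode sketch of the basic integrity-check idea. Real kernel-level tools like EAAC inspect live process memory from a driver, which this toy Python example cannot do; it only shows the checksum concept:

```python
import hashlib
import struct

# Conceptual sketch only: an anti-cheat integrity check reduces to
# "hash the critical state, re-hash it later, compare."
def checksum(values: list[int]) -> str:
    """Hash a list of critical game values as packed 64-bit ints."""
    blob = b"".join(struct.pack("<q", v) for v in values)
    return hashlib.sha256(blob).hexdigest()

# The game writes critical state and records its expected checksum.
game_state = [100, 0, 3]  # e.g. health, score, ammo
expected = checksum(game_state)

# A cheat tool pokes the "memory" from outside the game's own code path.
game_state[0] = 999999

# The periodic integrity scan detects the tampering.
assert checksum(game_state) != expected
```

The kernel-level version of this runs below the cheat tool rather than beside it, which is exactly why it is both more effective and more alarming to users.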

The concern from gamers came quickly. You have to keep in mind that none of this occurs without the context of history. There’s a reason why, even today, a good chunk of the gaming public knows all about the Sony rootkit fiasco. They’re aware of the claims that DRM like Denuvo’s affects PC performance. They’ve heard plenty of horror stories about gaming companies, or other software companies, coopting security tools like this in order to slurp up all kinds of PII or user activity for non-gaming purposes. Hell, one of the more prolific antivirus companies recently announced a plan to also use customer machines for crypto-mining.

So it’s in that context that gamers are hearing that EA would please like access to the most base-level and sensitive parts of their PCs, just to make sure that fewer people can cheat online in FIFA.

Privacy aside, some users might also worry that a new kernel-level driver could destabilize or hamper their system (à la Sony’s infamous music DRM rootkits). But Murphy promised that EAAC is designed to be “as performant and lightweight as possible. EAAC will have negligible impact on your gameplay.”

Kernel-level tools can also provide an appealing new attack surface for low-level security exploits on a user’s system. To account for that, Murphy said her team has “worked with independent, 3rd-party security and privacy assessors to validate EAAC does not degrade the security posture of your PC and to ensure strict data privacy boundaries.” She also promised daily testing and constant report monitoring to address any potential issues that pop up.

Gamers have heard these promises before. Those promises have been broken before. Chiding the public for being concerned at granting kernel-level access to their machines just to keep online gaming less ridden with cheaters is a tough sell.

Source: EA Announces New Anti-Cheat Tech That Operates At The Kernel Level | Techdirt

Firefly Aerospace reaches orbit with new Alpha rocket

A new aerospace company reached orbit with its second rocket launch and deployed multiple small satellites on Saturday.

Firefly Aerospace’s Alpha rocket lifted off from Vandenberg Space Force Base, California, in early morning darkness and arced over the Pacific.

“100% mission success,” Firefly tweeted later.

A day earlier, an attempt to launch abruptly ended when the countdown reached zero. The first-stage engines ignited but the rocket automatically aborted the liftoff.

The rocket’s payload included multiple small satellites designed for a variety of technology experiments and demonstrations, as well as educational purposes.

The mission, dubbed “To The Black,” was the company’s second demonstration flight of its entry into the market for small satellite launchers.

The first Alpha was launched from Vandenberg on Sept. 2, 2021, but did not reach orbit.

One of the four first-stage engines shut down prematurely but the rocket continued upward on three engines into the supersonic realm where it tumbled out of control.

The rocket was then intentionally destroyed by an explosive flight termination system.

Firefly Aerospace said the premature shutdown was traced to an electrical issue, but that the rocket had otherwise performed well and useful data was obtained during the nearly 2 1/2 minutes of flight.

Alpha is designed to carry payloads weighing as much as 2,579 pounds (1,170 kilograms) to low Earth orbit.

Other competitors in the burgeoning small-launch market include Rocket Lab and Virgin Orbit, both headquartered in Long Beach, California.

Firefly Aerospace, based in Cedar Park, Texas, is also planning a larger rocket, a vehicle for in-space operations and a lander for carrying NASA and commercial payloads to the surface of the moon.

Source: Firefly Aerospace reaches orbit with new Alpha rocket

Australian Optus telco data debacle gets worse and worse – non-existent security and no govt regulation

[…]

The alleged hacker – who threatened to sell the data unless a ransom was paid – took names, birth dates, phone numbers, addresses, and passport, healthcare and drivers’ license details from Optus, the country’s second-largest telecommunications company.

Of the 10 million people whose data was exposed, almost 3 million had crucial identity documents accessed.

Across the country, current and former customers have been rushing to change their official documents as the US Federal Bureau of Investigation joined Australia’s police, cybersecurity, and spy agencies to investigate the breach.

The Australian government is looking at overhauling privacy laws after it emerged that Optus – a subsidiary of global telecommunications firm Singtel – had kept private information for years, even after customers had cancelled their contracts.

It is also considering a European Union-style system of financial penalties for companies that fail to protect their customers.

An error-riddled message from someone claiming to be the culprit and calling themselves “Optusdata” demanded a relatively modest US$1m ransom for the data.

[…]

That demand was followed by a threat to release the records of 10,000 people per day until the money was paid. A batch of 10,000 files was later published online.

As Optus and the federal government dealt with the fallout, the alleged hacker had a change of mind and offered their “deepest apology”.

“Too many eyes,” they said. “We will not sale data to anyone. We cant if we even want to: personally deleted data.”

Optus chief Kelly Bayer Rosmarin initially claimed the company had fallen prey to a sophisticated attack and said the associated IP address was “out of Europe”. She said police were “all over” the apparent release of information and told ABC radio that the security breach was “not as being portrayed”.

Experts have said Optus had an application programming interface (API) online that did not need authorisation or authentication to access customer data. “Any user could have requested any other user’s information,” Corey J Ball, senior manager of cyber security consulting for Moss Adams, said.
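The flaw described here is a textbook case of broken object-level authorization: the endpoint trusts whatever customer ID the caller supplies and never checks who is asking. A minimal sketch of the pattern and its fix, with all names and records invented for illustration:

```python
# Hypothetical illustration of an unauthenticated lookup endpoint versus
# one that checks the caller's identity. Not Optus's actual code.

CUSTOMERS = {
    "1001": {"name": "Alice", "passport": "PA1234567"},
    "1002": {"name": "Bob", "passport": "PB7654321"},
}

def get_customer_insecure(customer_id: str) -> dict:
    """Insecure: any caller can request any other user's record."""
    return CUSTOMERS.get(customer_id, {})

def get_customer_secure(session_user_id: str, customer_id: str) -> dict:
    """Fixed: the authenticated session must own the record it requests."""
    if session_user_id != customer_id:
        raise PermissionError("not authorised to view this record")
    return CUSTOMERS.get(customer_id, {})
```

The fix is not exotic: every request must carry a verified identity, and the server must check that identity against the resource being fetched before returning anything.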

[…]

Optus ‘left the window open’

The cyber security minister, Clare O’Neil, has questioned why Optus had held on to that much personal information for so long.

She also scoffed at the idea the hack was sophisticated.

“What is of concern for us is how what is quite a basic hack was undertaken on Optus,” she told the ABC. “We should not have a telecommunications provider in this country which has effectively left the window open for data of this nature to be stolen.”

[…]

Asked about Bayer Rosmarin’s comments that the attack was sophisticated, O’Neil said: “Well, it wasn’t.”

On Friday, prime minister Anthony Albanese said what had happened was “unacceptable”. He said Optus had agreed to pay for replacement passports for those affected.

“Australian companies should do everything they can to protect your data,” Albanese said.

“That’s why we’re also reviewing the Privacy Act – and we’re committed to making privacy laws stronger.”

[…]

Australia currently has a $2.2m limit on corporate penalties, and there are calls for harsher penalties to encourage companies to do everything they can to protect consumers.

In the EU, the General Data Protection Regulation makes companies liable for fines of up to 4% of annual revenue. Optus’s revenue last financial year was more than $7bn, so a comparable penalty could approach $280m.

[…]

Source: The biggest hack in history: Australians scramble to change passports and driver licences after Optus telco data debacle | Optus | The Guardian

If the government gives companies no legal incentive to tighten security and privacy, they won’t invest in it.

Blizzard really really wants your phone number to play its games – personal data grab and security risk

When Overwatch 2 replaces the original Overwatch on Oct. 4, players will be required to link a phone number to their Battle.net accounts. If you don’t, you won’t be able to play Overwatch 2 — even if you’ve already purchased Overwatch. The same two-factor step, called SMS Protect, will also be used on all Call of Duty: Modern Warfare 2 accounts when that game launches, and new Call of Duty: Modern Warfare accounts.

Blizzard Entertainment announced SMS Protect and other safety measures ahead of Overwatch 2’s release. Blizzard said it implemented these controls because it wanted to “protect the integrity of gameplay and promote positive behavior in Overwatch 2.”

[…]

SMS Protect is a security feature that has two purposes: to keep players accountable for what Blizzard calls “disruptive behavior,” and to protect accounts if they’re hacked. It requires all Overwatch 2 players to attach a unique phone number to their account. Blizzard said SMS Protect will target cheaters and harassers; if an account is banned, it’ll be harder for them to return to Overwatch 2. You can’t just enter any old phone number — you actually have to have access to a phone receiving texts to that number to get into your account.

[…]

Blizzard said these phone notifications will be used to approve password resets — meaning someone else won’t be able to change your password without the notification code it’ll send to your mobile phone. Blizzard said it will also send you a text message if your account is locked out after “a suspicious login attempt,” or if your password or security features are changed.
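The reset flow described above boils down to a one-time code sent out of band and checked before the change is allowed. A minimal sketch of that idea — the code length, expiry window, and comparison details are assumptions for illustration, not Blizzard’s actual implementation:

```python
# Hypothetical SMS confirmation flow: issue a short-lived one-time code,
# then verify it before approving a password reset.

import hmac
import secrets
import time

CODE_TTL_SECONDS = 300  # assume codes expire after five minutes

def issue_code() -> tuple[str, float]:
    """Generate a random 6-digit code and record when it was issued."""
    code = f"{secrets.randbelow(1_000_000):06d}"
    return code, time.time()

def verify_code(submitted: str, issued_code: str, issued_at: float) -> bool:
    """Accept the reset only if the code matches and has not expired."""
    if time.time() - issued_at > CODE_TTL_SECONDS:
        return False
    # Constant-time comparison avoids leaking how many digits matched.
    return hmac.compare_digest(submitted, issued_code)
```

Because the code travels over SMS, the scheme is only as strong as the phone number behind it — which is exactly why tying accounts to numbers raises the concerns discussed below.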

Source: Overwatch 2 SMS Protect: What is it? Why does Blizzard require my phone number? – Polygon

So this is a piece of ‘real’ information you have to give them – but what if you move country and change mobile numbers? What if you lose your phone? What if Blizzard gets hacked (again) and your number is taken? A phone number is either something that changes over time or something very hard to change. It shows that Blizzard basically sees your data as something it can grab for free – you are their product. Even though the games are technically free to play, in practice the company makes a killing off the items you buy in-game in order to look cool.

They will probably get away with it though, just as they got away with installing spyware on your PC and surveilling you when you attend their events, all under pretty flimsy pretenses.