HP Can’t Fix Bricked Printers After Faulty Firmware Update; Earlier Update Blocked Non-HP Ink Cartridges

Last week the Telegraph reported that a recent firmware update to HP printers “prevents customers from using any cartridges other than those fitted with an HP chip, which are often more expensive. If the customer tries to use a non-HP ink cartridge, the printer will refuse to print.”

Some HP “OfficeJet” printers can have this “dynamic security” disabled through a firmware update, PC World reported earlier this week. But HP still defends the feature, arguing it’s “to protect HP’s innovations and intellectual property, maintain the integrity of our printing systems, ensure the best customer printing experience, and protect customers from counterfeit and third-party ink cartridges that do not contain an original HP security chip and infringe HP’s intellectual property.”

Meanwhile, Engadget now reports that “a software update Hewlett-Packard released earlier this month for its OfficeJet printers is causing some of those devices to become unusable.” After downloading the faulty software, the built-in touchscreen on an affected printer will display a blue screen with the error code 83C0000B. Unfortunately, there appears to be no way for someone to fix a printer broken in this way on their own, partly because factory resetting an HP OfficeJet requires interacting with the printer’s touchscreen display. For the moment, HP customers report the only solution to the problem is to send a broken printer back to the company for service.
BleepingComputer says the firmware update “has been bricking HP Office Jet printers worldwide since it was released earlier this month…” “Our teams are working diligently to address the blue screen error affecting a limited number of HP OfficeJet Pro 9020e printers,” HP told BleepingComputer… Since the issues surfaced, multiple threads have been started by people from the U.S., the U.K., Germany, the Netherlands, Australia, Poland, New Zealand, and France who had their printers bricked, some with more than a dozen pages of reports.

“HP has no solution at this time. Hidden service menu is not showing, and the printer is not booting anymore. Only a blue screen,” one customer said.

“I talked to HP Customer Service and they told me they don’t have a solution to fix this firmware issue, at the moment,” another added.

Source: HP Rushes to Fix Bricked Printers After Faulty Firmware Update – Slashdot

How a 35-year-old weed smoker behind 10 million scam calls made his fortune

Millions of people get phone calls from scammers and wonder who is at the other end.

Now we know: rather than someone in a call centre far away, a “bright young man” living in a lush flat in London has been unmasked as the mastermind behind so many of these calls.

Tejay Fletcher’s trial exposed how criminals with a simple website bypassed police, phone operators and banks to facilitate “fraud on an industrial scale”, scamming victims out of £100m of their hard-earned cash.

Fletcher, 35, who ran the website iSpoof.cc, was jailed for 13 years and four months earlier this week following his arrest in 2022, in what was the biggest anti-fraud operation mounted in the UK.

The website allowed criminals to disguise their phone numbers in a process known as “spoofing” and trick unsuspecting people into believing they were being called by their bank or other institutions.

[…]

In 2020, he co-founded iSpoof.cc, which he built into what he called “the most sophisticated client spoofing platform available”, allowing scammers to change the number or identity displayed when they made calls so they appeared to be calling from a trusted organisation, often a bank or a bank’s fraud department.
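Spoofing of this kind is possible because, in internet telephony protocols like SIP, the displayed caller ID is just metadata set by the caller’s own software; absent authentication schemes such as STIR/SHAKEN, downstream networks simply pass it along. A minimal illustrative sketch (all numbers and hostnames below are made up, and this is not iSpoof’s actual code):

```python
def build_invite(spoofed_number: str, target_number: str) -> str:
    """Build a minimal SIP INVITE message. The From header, which carriers
    and handsets use to display caller ID, is entirely caller-controlled,
    which is why a displayed number alone proves nothing."""
    return "\r\n".join([
        f"INVITE sip:{target_number}@carrier.example SIP/2.0",
        # Attacker-chosen display name and number go straight into From:
        f'From: "Your Bank" <sip:{spoofed_number}@carrier.example>',
        f"To: <sip:{target_number}@carrier.example>",
        "CSeq: 1 INVITE",
        "",
    ])

msg = build_invite("08001234567", "07700900123")
print(msg.splitlines()[1])
```

The point of the sketch is simply that nothing in the message authenticates the From header, so the recipient’s phone displays whatever the caller typed.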

[…]

His website was used for a large proportion of fraudulent activity in the UK – but copycats have since taken its place, and others are still falling victim to these types of scams, experts have warned.

How victims were scammed

The number of people using iSpoof swelled to 69,000 at its peak, with as many as 20 people per minute targeted by callers using the site.

More than 10 million fraudulent calls were made using iSpoof in the year to August 2022 – 3.5 million of them in the UK, the prosecution said.  More than 200,000 victims in the UK – many of them elderly – lost £43m, while global losses exceeded £100m.

For a basic subscription fee of £150 a month, users got a set number of minutes to make automated bot calls using the website or app version. They could then pay extra for additional features.

[…]

Often, victims would get an automated call prompting them to confirm a transaction on an account.

The website allowed them to intercept one-time passwords, which were “ironically” introduced by banks to increase their security measures, noted John Ojakovoh, prosecuting.

iSpoof offered scammers extra features that let them prompt victims, via an automated call, to type in a telephone PIN code.

Users could also pay for the ability to monitor calls live, or place calls pretending to be from an establishment that had old card details on file and wanted new ones.

Scammers could control what the automated call would say to recipients and access tools such as voice recognition.

[…]

iSpoof had a channel on Telegram, a social media platform, which it used to communicate with its customers and promote itself, the prosecution said.

The Telegram channel also displayed advertisements from companies selling bank details.

Fletcher would use it to conduct “market research”, running polls to find out which features users wanted most.

[…]

Fletcher was not particularly tech-savvy, but he used a website called freelancer.com to hire programmers to make the “building blocks” of the site.

[…]

His lawyer said he had initially set out to create a simple website, but his co-founder suggested ways the technology could be made more sophisticated, which spurred him on. In 2021, he and his co-founder “fell out” and Fletcher ousted him, replacing him with three other administrators that he appeared to be supervising.

[…]

When Fletcher assumed control of iSpoof, its profits saw a “meteoric rise” from 5 Bitcoin to 117, prosecutors said. Fletcher received 64.38 Bitcoin, worth just short of £2m.

How police cracked the case

Posing as iSpoof customers, police paid for a trial subscription in Bitcoin and tested the website. They traced the money they paid to iSpoof and eventually discovered that the “lion’s share” of the profits were going to Fletcher.

They obtained a copy of the website’s server, which revealed call logs that further incriminated Fletcher and the scammers using his website.

[…]

Others are also being investigated. Some 120 suspected phone scammers have been arrested, 103 of them in London.

[…]

Source: How a 35-year-old weed smoker behind 10 million scam calls made his fortune

Online age verification is coming, and privacy is on the chopping block

A spate of child safety rules might make going online in a few years very different, and not just for kids. In 2022 and 2023, numerous states and countries are exploring age verification requirements for the internet, either as an implicit demand or a formal rule. The laws are positioned as a way to protect children on a dangerous internet. But the price of that protection might be high: nothing less than the privacy of, well, everyone.

Government agencies, private companies, and academic researchers have spent years seeking a way to solve the thorny question of how to check internet users’ ages without the risk of revealing intimate information about their online lives. But after all that time, privacy and civil liberties advocates still aren’t convinced the government is ready for the challenge.

“When you have so many proposals floating around, it’s hard to ensure that everything is constitutionally sound and actually effective for kids,” Cody Venzke, a senior policy counsel at the American Civil Liberties Union (ACLU), tells The Verge. “Because it’s so difficult to identify who’s a kid online, it’s going to prevent adults from accessing content online as well.”

In the US and abroad, lawmakers want to limit children’s access to two things: social networks and porn sites. Louisiana, Arkansas, and Utah have all passed laws that set rules for underage users on social media. Meanwhile, multiple US federal bills are on the table, and so are laws in other countries, like the UK’s Online Safety Bill. Some of these laws demand specific features from age verification tools. Others simply punish sites for letting anyone underage use them — a more subtle request for verification.

Online age verification isn’t a new concept. In the US, laws like the Children’s Online Privacy Protection Act (COPPA) already apply special rules to people under 13. And almost everyone who has used the internet — including major platforms like YouTube and Facebook — has checked a box to access adult content or entered a birth date to create an account. But there’s also almost nothing to stop them from faking it.

As a result, lawmakers are calling for more stringent verification methods. “From bullying and sex trafficking to addiction and explicit content, social media companies subject children and teens to a wide variety of content that can hurt them, emotionally and physically,” Senator Tom Cotton (R-AR), the backer of the Protect Kids Online Act, said. “Just as parents safeguard their kids from threats in the real world, they need the opportunity to protect their children online.”

Age verification systems fall into a handful of categories. The most common option is to rely on a third party that knows your identity — by directly validating a credit card or government-issued ID, for instance, or by signing up for a digital intermediary like Allpasstrust, the service Louisianans must use for porn access.

More experimentally, there are solutions that estimate a user’s age without an ID. One potential option, already used by Facebook and Instagram, uses a camera and facial analysis to guess whether you’re over 18. Another, highlighted as a potential age verification solution by France’s National Commission on Informatics and Liberty (CNIL), would “guess” your age based on your online activity.

As pointed out by CNIL’s report on various online age verification options, all these methods have serious flaws. CNIL notes that identifying someone’s age with a credit card would be relatively easy since the security infrastructure is already there for online payments. But some adult users — especially those with lower incomes — may not have a card, which would seriously limit their ability to access online services. The same goes for verification methods using government-issued IDs. Children can also snap up a card that’s lying around the house to verify their age.

“As we think about kids’ online safety, we need to do so in a way that doesn’t enshrine and legitimize this very surveillance regime that we’re trying to push back on”

Similarly, the Congressional Research Service (CRS) has expressed concerns about online age verification. In a report it updated in March, the US legislature’s in-house research institute found that many kids aged 16 to 19 might not have a government-issued ID, such as a driver’s license, that they can use to verify their age online. While it says kids could use their student ID instead, it notes that they may be easier to fake than a government-issued ID. The CRS isn’t totally on board with relying on a national digital ID system for online age verification either, as it could “raise privacy and security concerns.”

Face-based age detection might seem like a quick fix to these concerns. And unlike a credit card — or full-fledged facial identification tools — it doesn’t necessarily tell a site who you are, just whether it thinks you’re over 18.

But these systems may not accurately identify the age of a person. Yoti, the facial analysis service used by Facebook and Instagram, claims it can estimate the age of people 13 to 17 years old as under 25 with 99.93 percent accuracy while identifying kids that are six to 11 years old as under 13 with 98.35 percent accuracy. This study doesn’t include any data on distinguishing between young teens and older ones, however — a crucial element for many young people.

Although Yoti claims its system has no “discernible bias across gender or skin tone,” previous research indicates that facial recognition services are less reliable for people of color, gender-nonconforming people, and people with facial differences or asymmetry. This would, again, unfairly block certain people from accessing the internet.

It also poses a host of privacy risks, as the companies that capture facial recognition data would need to ensure that this biometric data doesn’t get stolen by bad actors. UK civil liberties group Big Brother Watch argues that “‘face prints’ are as sensitive as fingerprints” and that “collecting biometric data of this scale inherently puts people’s privacy at risk.” CNIL points out that you could mitigate some risks by performing facial recognition locally on a user’s device — but that doesn’t solve the broader problems.

Inferring ages based on browsing history raises even more problems. This kind of inferential system has been implemented on platforms like Facebook and TikTok, both of which use AI to detect whether a user is under the age of 13 based on their activity on the platform. That includes scanning a user’s activity for “happy birthday” messages or comments that indicate they’re too young to have an account. But the system hasn’t been explored on a larger scale — where it could involve having an AI scan your entire browsing history and estimate your age based on your searches and the sites you interact with. That would amount to large-scale digital surveillance, and CNIL outright calls the system “intrusive.” It’s not even clear how well it would work.

In France, where lawmakers are working to restrict access to porn sites, CNIL worked with Ecole Polytechnique professor Olivier Blazy to develop a solution that attempts to minimize the amount of user information sent to a website. The proposed method involves using an ephemeral “token” that sends your browser or phone a “challenge” when accessing an age-restricted website. That challenge would then get relayed to a third party that can authenticate your age, like your bank, internet provider, or a digital ID service, which would issue its approval, allowing you to access the website.

The system’s goal is to make sure a user is old enough to access a service without revealing any personal details, either to the website they’re using or the companies and governments providing the ID check. The third party “only knows you are doing an age check but not for what,” Blazy explains to The Verge, and the website would not know which service verified your age nor any of the details from that transaction.
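The challenge/attestation step of such a scheme can be sketched in a few lines. This is a deliberately simplified illustration, not CNIL’s actual design: the names are invented, and the real proposal relies on asymmetric or blind signatures so the website cannot even tell which authority answered, whereas this sketch assumes a single shared key between the age-check authority and the site’s verification broker.

```python
import hashlib
import hmac
import secrets
from typing import Optional

# Hypothetical shared key between the age-check authority and the site's
# verification broker (an assumption for this sketch; the real scheme
# would use public-key or blind-signature cryptography instead).
BROKER_KEY = secrets.token_bytes(32)

def site_issue_challenge() -> bytes:
    # Random nonce binding the attestation to this one visit,
    # so a token can't be replayed for a different session.
    return secrets.token_bytes(16)

def authority_attest(challenge: bytes, user_is_over_18: bool) -> Optional[bytes]:
    # The authority (e.g. a bank) learns only that an age check happened,
    # not which website asked. It attests to the challenge iff the user
    # qualifies, revealing nothing else about them.
    if not user_is_over_18:
        return None
    return hmac.new(BROKER_KEY, challenge + b"over18", hashlib.sha256).digest()

def site_verify(challenge: bytes, token: Optional[bytes]) -> bool:
    # The site learns only that some approved authority vouched "over 18";
    # no identity, no account details, no verifier-specific data.
    if token is None:
        return False
    expected = hmac.new(BROKER_KEY, challenge + b"over18", hashlib.sha256).digest()
    return hmac.compare_digest(expected, token)

c = site_issue_challenge()
print(site_verify(c, authority_attest(c, True)))   # True
print(site_verify(c, authority_attest(c, False)))  # False
```

Because the attestation is bound to a fresh per-visit challenge and carries only the single bit “over 18”, neither side of the exchange accumulates a browsing profile, which is the property the CNIL design is after.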

Blazy hopes this system can prevent very young children from accessing explicit content. But even with this complex solution, he acknowledges that users in France will be able to get around the method by using a virtual private network (VPN) to conceal their location. This is a problem that plagues nearly any location-specific verification system: as long as another government lets people access a site more easily, users can route their traffic through it. The only surefire solution would be draconian crackdowns on privacy tools that would dramatically compromise freedom online.

Some governments are trying to offer a variety of options and let users pick between them. A report from the European Parliament Think Tank, an in-house department that helps shape legislation, highlights an EU “browser-based interoperable age verification method” called euCONSENT, which would allow users to verify their identity online by choosing from a network of approved third-party services. Since users pick the verification service themselves, one service might ask a user to upload an official government document, while another might rely on facial recognition.

To privacy and civil liberties advocates, none of these solutions are ideal. Venzke tells The Verge that implementing age verification systems encourages a system that collects our data and could pave the way for more surveillance in the future. “Bills that are trying to establish inferences about how old you are or who you are based on that already existing capitalistic surveillance, are just threatening to legitimize that surveillance,” Venzke says. “As we think about kids’ online safety, we need to do so in a way that doesn’t enshrine and legitimize this very surveillance regime that we’re trying to push back on.”

Age verification laws “are going to face a very tough battle in court”

The Electronic Frontier Foundation, a digital rights group, similarly argues that all age verification solutions are “surveillance systems” that will “lead us further towards an internet where our private data is collected and sold by default.”

Even some strong supporters of child safety bills have expressed concerns about making age verification part of them. Senator Richard Blumenthal (D-CT), one of the backers of the Kids Online Safety Act, objected to the idea in a call with reporters earlier this month. In a statement, he tells The Verge that “age verification would require either a national database or a goldmine of private information on millions of kids in Big Tech’s hands” and that “the potential for exploitation and misuse would be huge.” (Despite this, the EFF believes that KOSA’s requirements would inevitably result in age verification mandates anyway.)

In the US, it’s unclear whether online age verification would stand up under legal scrutiny at all. The US court system has already struck down efforts to implement online age verification several times in the past. As far back as 1997, the Supreme Court ruled parts of the 1996 Communications Decency Act unconstitutional, as it imposed restrictions on “knowing transmission of obscene or indecent messages” and required age verification online. More recently, a federal court found in 2016 that a Louisiana law, which required websites that publish “material harmful to minors” to verify users’ ages, “creates a chilling effect on free speech.”

Vera Eidelman, a staff attorney with the ACLU, tells The Verge that existing age verification laws “are going to face a very tough battle in court.” “For the most part, requiring content providers online to verify the ages of their users is almost certainly unconstitutional, given the likelihood that it will make people uncomfortable to exercise their rights to access certain information if they have to unmask or identify themselves,” Eidelman says.

But concerns over surveillance still haven’t stopped governments around the globe, including here in the US, from pushing ahead with online age verification mandates. There are currently several bills in the pipeline in Congress that are aimed at protecting children online, including the Protecting Kids on Social Media Act, which calls for the test of a national age verification system that would block users under the age of 13 from signing up for social media. In the UK, where the heavily delayed Online Safety Bill will likely become law, porn sites would be required to verify users’ ages, while other websites would be forced to give users the option to do so as well.

Some proponents of online safety laws say they’re no different than having to hand over an ID to purchase alcohol. “We have agreed as a society not to let a 15-year-old go to a bar or a strip club,” said Laurie Schlegel, the legislator behind Louisiana’s age restriction law, after its passage. “The same protections should be in place online.” But the comparison misses the vastly different implications for free speech and privacy. “When we think about bars or ordering alcohol at a restaurant, we just assume that you can hand an ID to a bouncer or a waiter, they’ll hand it back, and that’s the end of it,” Venzke adds. “Problem is, there’s no infrastructure on the internet right now to [implement age verification] in a safe, secure, private way that doesn’t chill people’s ability to get to constitutionally protected speech.”

Most people also spend a relatively small amount of their time in real-world adults-only spaces, while social media and online communications tools are ubiquitous ways of finding information and staying in touch with friends and family. Even sites with sexually explicit content — the target of Louisiana’s bill — could be construed to include sites offering information about sexual health and LGBTQ resources, despite claims by lawmakers that this won’t happen.

Even if many of these rules are shot down, the way we use the internet may never be the same again. With age checks awaiting us online, some people may find themselves locked out of increasingly large numbers of platforms — leaving the online world more closed-off than ever.

Source: Online age verification is coming, and privacy is on the chopping block – The Verge

The Supreme Court’s Warhol decision could have huge copyright implications for ‘fair use’, apparently made by blind judges

The Supreme Court has ruled that Andy Warhol has infringed on the copyright of Lynn Goldsmith, the photographer who took the image that he used for his famous silkscreen of the musician Prince. Goldsmith won the justices over 7-2, disagreeing with Warhol’s camp that his work was transformative enough to prevent any copyright claims. In the majority opinion written by Justice Sonia Sotomayor, she noted that “Goldsmith’s original works, like those of other photographers, are entitled to copyright protection, even against famous artists.”

Goldsmith’s story goes as far back as 1984, when Vanity Fair licensed her Prince photo for use as an artist reference. The photographer received $400 for a one-time use of her photograph, which Warhol then used as the basis for a silkscreen that the magazine published. Warhol then created 15 additional works based on her photo, one of which was sold to Condé Nast for another magazine story about Prince. The Andy Warhol Foundation (AWF) — the artist had passed away by then — got $10,000 for it, while Goldsmith didn’t get anything.

Typically, the use of copyrighted material for a limited and “transformative” purpose without the copyright holder’s permission falls under “fair use.” But what passes as “transformative” use can be vague, and that vagueness has led to numerous lawsuits. In this particular case, the court has decided that adding “some new expression, meaning or message” to the photograph does not constitute “transformative use.” Sotomayor said Goldsmith’s photo and Warhol’s silkscreen serve “substantially the same purpose.”

Indeed, the decision could have far-ranging implications for fair use and could influence future cases on what constitutes transformative work, especially now that we’re living in the era of content creators who take inspiration from existing music and art. As CNN reports, Justice Elena Kagan strongly disagreed with her fellow justices, arguing that the decision would stifle creativity. She said the justices mostly just cared about the commercial purpose of the work and did not consider that the photograph and the silkscreen have different “aesthetic characteristics” and did not “convey the same meaning.”

“Both Congress and the courts have long recognized that an overly stringent copyright regime actually stifles creativity by preventing artists from building on the works of others. [The decision will] impede new art and music and literature, [and it will] thwart the expression of new ideas and the attainment of new knowledge. It will make our world poorer,” she wrote.

The justices who wrote the majority opinion, however, believe that it “will not impoverish our world to require AWF to pay Goldsmith a fraction of the proceeds from its reuse of her copyrighted work. Recall, payments like these are incentives for artists to create original works in the first place.”

Source: The Supreme Court’s Warhol decision could have huge copyright implications for ‘fair use’

Well, the two pictures are above. How you can argue that they are the same thing is quite beyond me.