Online age verification is coming, and privacy is on the chopping block

A spate of child safety rules might make going online in a few years very different, and not just for kids. Across 2022 and 2023, numerous states and countries have explored age verification requirements for the internet, either as an implicit demand or a formal rule. The laws are positioned as a way to protect children on a dangerous internet. But the price of that protection might be high: nothing less than the privacy of, well, everyone.

Government agencies, private companies, and academic researchers have spent years seeking a way to solve the thorny question of how to check internet users’ ages without the risk of revealing intimate information about their online lives. But after all that time, privacy and civil liberties advocates still aren’t convinced the government is ready for the challenge.

“When you have so many proposals floating around, it’s hard to ensure that everything is constitutionally sound and actually effective for kids,” Cody Venzke, a senior policy counsel at the American Civil Liberties Union (ACLU), tells The Verge. “Because it’s so difficult to identify who’s a kid online, it’s going to prevent adults from accessing content online as well.”

In the US and abroad, lawmakers want to limit children’s access to two things: social networks and porn sites. Louisiana, Arkansas, and Utah have all passed laws that set rules for underage users on social media. Meanwhile, multiple US federal bills are on the table, and so are laws in other countries, like the UK’s Online Safety Bill. Some of these laws demand specific features from age verification tools. Others simply punish sites for letting anyone underage use them — a more subtle request for verification.

Online age verification isn’t a new concept. In the US, laws like the Children’s Online Privacy Protection Act (COPPA) already apply special rules to people under 13. And almost everyone who has used the internet — including major platforms like YouTube and Facebook — has checked a box to access adult content or entered a birth date to create an account. But there’s also almost nothing to stop them from faking it.

As a result, lawmakers are calling for more stringent verification methods. “From bullying and sex trafficking to addiction and explicit content, social media companies subject children and teens to a wide variety of content that can hurt them, emotionally and physically,” Senator Tom Cotton (R-AR), the backer of the Protect Kids Online Act, said. “Just as parents safeguard their kids from threats in the real world, they need the opportunity to protect their children online.”

Age verification systems fall into a handful of categories. The most common option is to rely on a third party that knows your identity — by directly validating a credit card or government-issued ID, for instance, or by signing up for a digital intermediary like Allpasstrust, the service Louisianans must use for porn access.

More experimentally, there are solutions that estimate a user’s age without an ID. One potential option, which is already used by Facebook and Instagram, would use a camera and facial recognition to guess whether you’re 18. Another, which is highlighted as a potential age verification solution by France’s National Commission on Informatics and Liberty (CNIL), would “guess” your age based on your online activity.

As pointed out by CNIL’s report on various online age verification options, all these methods have serious flaws. CNIL notes that identifying someone’s age with a credit card would be relatively easy since the security infrastructure is already there for online payments. But some adult users — especially those with lower incomes — may not have a card, which would seriously limit their ability to access online services. The same goes for verification methods using government-issued IDs. Children can also simply grab a card that’s lying around the house to verify their age.

“As we think about kids’ online safety, we need to do so in a way that doesn’t enshrine and legitimize this very surveillance regime that we’re trying to push back on”

Similarly, the Congressional Research Service (CRS) has expressed concerns about online age verification. In a report it updated in March, the US legislature’s in-house research institute found that many kids aged 16 to 19 might not have a government-issued ID, such as a driver’s license, that they can use to verify their age online. While it says kids could use their student ID instead, it notes that they may be easier to fake than a government-issued ID. The CRS isn’t totally on board with relying on a national digital ID system for online age verification either, as it could “raise privacy and security concerns.”

Face-based age detection might seem like a quick fix to these concerns. And unlike a credit card — or full-fledged facial identification tools — it doesn’t necessarily tell a site who you are, just whether it thinks you’re over 18.

But these systems may not accurately identify a person’s age. Yoti, the facial analysis service used by Facebook and Instagram, claims it can identify people aged 13 to 17 as under 25 with 99.93 percent accuracy and kids aged six to 11 as under 13 with 98.35 percent accuracy. Yoti’s published figures don’t include any data on distinguishing between young teens and older ones, however — a crucial element for many young people.

Although Yoti claims its system has no “discernible bias across gender or skin tone,” previous research indicates that facial recognition services are less reliable for people of color, gender-nonconforming people, and people with facial differences or asymmetry. This would, again, unfairly block certain people from accessing the internet.

Face-based verification also poses a host of privacy risks, as the companies that capture facial data would need to ensure that this biometric data doesn’t get stolen by bad actors. UK civil liberties group Big Brother Watch argues that “face prints are as sensitive as fingerprints” and that “collecting biometric data of this scale inherently puts people’s privacy at risk.” CNIL points out that you could mitigate some risks by performing facial recognition locally on a user’s device — but that doesn’t solve the broader problems.

Inferring ages based on browsing history raises even more problems. This kind of inferential system has been implemented on platforms like Facebook and TikTok, both of which use AI to detect whether a user is under the age of 13 based on their activity on the platform. That includes scanning a user’s activity for “happy birthday” messages or comments that indicate they’re too young to have an account. But the system hasn’t been explored on a larger scale — where it could involve having an AI scan your entire browsing history and estimate your age based on your searches and the sites you interact with. That would amount to large-scale digital surveillance, and CNIL outright calls the system “intrusive.” It’s not even clear how well it would work.

In France, where lawmakers are working to restrict access to porn sites, CNIL worked with Ecole Polytechnique professor Olivier Blazy to develop a solution that attempts to minimize the amount of user information sent to a website. The proposed method involves using an ephemeral “token” that sends your browser or phone a “challenge” when accessing an age-restricted website. That challenge would then get relayed to a third party that can authenticate your age, like your bank, internet provider, or a digital ID service, which would issue its approval, allowing you to access the website.

The system’s goal is to make sure a user is old enough to access a service without revealing any personal details, either to the website they’re using or the companies and governments providing the ID check. The third party “only knows you are doing an age check but not for what,” Blazy explains to The Verge, and the website would not know which service verified your age nor any of the details from that transaction.
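The challenge-and-token flow described above can be sketched in a few lines of Python. This is a simplified illustration with hypothetical function names, not the actual CNIL/Blazy protocol; it uses a shared HMAC key for brevity, whereas a real deployment would rely on public-key or blind signatures so that sites and verifiers never share secrets.

```python
import hashlib
import hmac
import secrets

# Hypothetical sketch of the ephemeral-token age-check flow.
# Assumption: a single approved verifier with a MAC key; real systems
# would use public-key signatures from a network of verifiers.

VERIFIER_KEY = secrets.token_bytes(32)  # held by the age-check service

def site_issue_challenge() -> str:
    """The age-restricted site hands the browser a one-time nonce."""
    return secrets.token_hex(16)

def verifier_attest(nonce: str, is_over_18: bool):
    """The third party (bank, ISP, digital ID service) sees only the
    nonce -- not which site issued it -- and attests to the age claim."""
    if not is_over_18:
        return None
    msg = f"over18:{nonce}".encode()
    return hmac.new(VERIFIER_KEY, msg, hashlib.sha256).hexdigest()

def site_check(nonce: str, token) -> bool:
    """The site learns only that some approved verifier said 'over 18'
    for its nonce -- no name, no birth date, no transaction details."""
    if token is None:
        return False
    expected = hmac.new(VERIFIER_KEY, f"over18:{nonce}".encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token)

# The browser relays the challenge to the verifier and the token back.
nonce = site_issue_challenge()
token = verifier_attest(nonce, is_over_18=True)
print(site_check(nonce, token))  # True
```

The privacy property lives in the separation of knowledge: the verifier signs only an opaque nonce, and the site receives only a yes-token, so neither side can reconstruct the full picture on its own.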

Blazy hopes this system can prevent very young children from accessing explicit content. But even with this complex solution, he acknowledges that users in France will be able to get around the method by using a virtual private network (VPN) to conceal their location. This is a problem that plagues nearly any location-specific verification system: as long as another government lets people access a site more easily, users can route their traffic through it. The only surefire solution would be draconian crackdowns on privacy tools that would dramatically compromise freedom online.

Some governments are trying to offer a variety of options and let users pick between them. A report from the European Parliament Think Tank, an in-house department that helps shape legislation, highlights an EU “browser-based interoperable age verification method” called euCONSENT, which will allow users to verify their identity online by choosing from a network of approved third-party services. Since users can pick the verification method they prefer, one service might ask a user to upload an official government document, while another might rely on facial recognition.

To privacy and civil liberties advocates, none of these solutions are ideal. Venzke tells The Verge that implementing age verification systems encourages a system that collects our data and could pave the way for more surveillance in the future. “Bills that are trying to establish inferences about how old you are or who you are based on that already existing capitalistic surveillance, are just threatening to legitimize that surveillance,” Venzke says. “As we think about kids’ online safety, we need to do so in a way that doesn’t enshrine and legitimize this very surveillance regime that we’re trying to push back on.”

Age verification laws “are going to face a very tough battle in court”

The Electronic Frontier Foundation, a digital rights group, similarly argues that all age verification solutions are “surveillance systems” that will “lead us further towards an internet where our private data is collected and sold by default.”

Even some strong supporters of child safety bills have expressed concerns about making age verification part of them. Senator Richard Blumenthal (D-CT), one of the backers of the Kids Online Safety Act, objected to the idea in a call with reporters earlier this month. In a statement, he tells The Verge that “age verification would require either a national database or a goldmine of private information on millions of kids in Big Tech’s hands” and that “the potential for exploitation and misuse would be huge.” (Despite this, the EFF believes that KOSA’s requirements would inevitably result in age verification mandates anyway.)

In the US, it’s unclear whether online age verification would stand up under legal scrutiny at all. The US court system has already struck down efforts to implement online age verification several times in the past. As far back as 1997, the Supreme Court ruled parts of the 1996 Communications Decency Act unconstitutional, as it imposed restrictions on “knowing transmission of obscene or indecent messages” and required age verification online. More recently, a federal court found in 2016 that a Louisiana law, which required websites that publish “material harmful to minors” to verify users’ ages, “creates a chilling effect on free speech.”

Vera Eidelman, a staff attorney with the ACLU, tells The Verge that existing age verification laws “are going to face a very tough battle in court.” “For the most part, requiring content providers online to verify the ages of their users is almost certainly unconstitutional, given the likelihood that it will make people uncomfortable to exercise their rights to access certain information if they have to unmask or identify themselves,” Eidelman says.

But concerns over surveillance still haven’t stopped governments around the globe, including here in the US, from pushing ahead with online age verification mandates. There are currently several bills in the pipeline in Congress that are aimed at protecting children online, including the Protecting Kids on Social Media Act, which calls for the test of a national age verification system that would block users under the age of 13 from signing up for social media. In the UK, where the heavily delayed Online Safety Bill will likely become law, porn sites would be required to verify users’ ages, while other websites would be forced to give users the option to do so as well.

Some proponents of online safety laws say they’re no different than having to hand over an ID to purchase alcohol. “We have agreed as a society not to let a 15-year-old go to a bar or a strip club,” said Laurie Schlegel, the legislator behind Louisiana’s age restriction law, after its passage. “The same protections should be in place online.” But the comparison misses the vastly different implications for free speech and privacy. “When we think about bars or ordering alcohol at a restaurant, we just assume that you can hand an ID to a bouncer or a waiter, they’ll hand it back, and that’s the end of it,” Venzke adds. “Problem is, there’s no infrastructure on the internet right now to [implement age verification] in a safe, secure, private way that doesn’t chill people’s ability to get to constitutionally protected speech.”

Most people also spend a relatively small amount of their time in real-world adults-only spaces, while social media and online communications tools are ubiquitous ways of finding information and staying in touch with friends and family. Even sites with sexually explicit content — the target of Louisiana’s bill — could be construed to include sites offering information about sexual health and LGBTQ resources, despite claims by lawmakers that this won’t happen.

Even if many of these rules are shot down, the way we use the internet may never be the same again. With age checks awaiting us online, some people may find themselves locked out of increasingly large numbers of platforms — leaving the online world more closed-off than ever.

Source: Online age verification is coming, and privacy is on the chopping block – The Verge

The Supreme Court’s Warhol decision could have huge copyright implications for ‘fair use’, apparently made by blind judges

The Supreme Court has ruled that Andy Warhol infringed on the copyright of Lynn Goldsmith, the photographer who took the image he used for his famous silkscreen of the musician Prince. Goldsmith won over the justices 7-2, with the majority rejecting the Warhol camp’s argument that his work was transformative enough to defeat any copyright claims. In the majority opinion, Justice Sonia Sotomayor noted that “Goldsmith’s original works, like those of other photographers, are entitled to copyright protection, even against famous artists.”

Goldsmith’s story goes as far back as 1984, when Vanity Fair licensed her Prince photo for use as an artist reference. The photographer received $400 for a one-time use of her photograph, which Warhol then used as the basis for a silkscreen that the magazine published. Warhol then created 15 additional works based on her photo, one of which was sold to Condé Nast for another magazine story about Prince. The Andy Warhol Foundation (AWF) — the artist had passed away by then — got $10,000 for it, while Goldsmith didn’t get anything.

Typically, the use of copyrighted material for a limited and “transformative” purpose without the copyright holder’s permission falls under “fair use.” But what passes as “transformative” use can be vague, and that vagueness has led to numerous lawsuits. In this particular case, the court has decided that adding “some new expression, meaning or message” to the photograph does not constitute “transformative use.” Sotomayor said Goldsmith’s photo and Warhol’s silkscreen serve “substantially the same purpose.”

Indeed, the decision could have far-ranging implications for fair use and could influence future cases on what constitutes transformative work, especially now that we’re living in the era of content creators who take inspiration from existing music and art. As CNN reports, Justice Elena Kagan strongly disagreed with her fellow justices, arguing that the decision would stifle creativity. She said the justices mostly just cared about the commercial purpose of the work and did not consider that the photograph and the silkscreen have different “aesthetic characteristics” and do not “convey the same meaning.”

“Both Congress and the courts have long recognized that an overly stringent copyright regime actually stifles creativity by preventing artists from building on the works of others. [The decision will] impede new art and music and literature, [and it will] thwart the expression of new ideas and the attainment of new knowledge. It will make our world poorer,” she wrote.

The justices who wrote the majority opinion, however, believe that it “will not impoverish our world to require AWF to pay Goldsmith a fraction of the proceeds from its reuse of her copyrighted work. Recall, payments like these are incentives for artists to create original works in the first place.”

Source: The Supreme Court’s Warhol decision could have huge copyright implications for ‘fair use’

Well, the two pictures are above. How you can argue that they are the same thing is quite beyond me.

Automakers Are Making Basic Car Functions A Costly Subscription Service… Whether You Like It Or Not

Automakers are increasingly obsessed with turning everything into a subscription service in a bid to boost quarterly returns. We’ve noted how BMW has embraced making heated seats and other features already in your car a subscription service, and Mercedes has been making better gas and EV engine performance something you have to pay extra for — even if your existing engine already technically supports it.

There are several problems here. One, most of the tech they want to charge a recurring fee to use is already embedded in the car you own, and its cost is already rolled into the retail price you paid. They’re effectively disabling technology you already own, then charging you a recurring additional monthly fee just to re-enable it. It’s a Cory Doctorow nightmare dressed up as innovation.

The other problem: absolutely nobody wants this shit. Surveys have already shown how consumers widely despise paying their car maker a subscription fee for pretty much anything, whether that’s an in-car 5G hotspot or movie rentals via your car’s screen. Now another new study indicates that consumers are unsurprisingly opposed to this new effort to expand subscription features:

A new study from Cox Automotive this week found that 75% of respondents agreed with the statement that “features on demand will allow automakers to make more money.” And 69% of respondents said that if certain features were available only via subscription for a particular brand, they would likely shop elsewhere.

[…]

if the industry does this persistently enough, over a long enough time frame, the window of what dictates “acceptable” automaker behavior shifts in their favor, resulting in opinions like this one:

“I don’t think [features on demand] is going away, and also as the cars get more and more sophisticated, get more and more functionality, then it just feels like a natural progression,” Edmund’s Weaver says, also noting he too has gotten used to these add-on features, and their costs, for his personal vehicle.

There’s a whole bunch of additional unintended consequences of this kind of shift. Right to repair folks will be keen on breaking down these phony barriers, and automakers will increasingly respond by doing things like making it a warranty violation to enable tech you already own and paid for.

[…]

Source: Automakers Are Making Basic Car Functions A Costly Subscription Service… Whether You Like It Or Not | Techdirt

It’s not just BMW; Mercedes and many other companies are getting into this game. The thing is, if it’s a service that requires ongoing work (e.g. collecting road data for navigation services or traffic cam data for speed warnings), then a subscription is fine. But if it’s something already built into your car that requires a subscription or extra money to enable, well, then you’ve already paid for it and are the owner of it. Having a carmaker disable it until you pony up again is ridiculous.

Logitech partners with iFixit for self-repairs

Hanging on to your favorite wireless mouse just got a little easier thanks to a new partnership between Logitech and DIY repair specialists iFixit. The two companies are working together to reduce unnecessary e-waste and help customers repair their own out-of-warranty Logitech hardware by supplying spare parts, batteries, and repair guides for “select products.”

Everything will eventually be housed in the iFixit Logitech Repair Hub, with parts available to purchase as needed or within “Fix Kits” that provide everything needed to complete the repair, such as tools and precision bit sets.

Starting “this summer,” Logitech’s MX Master and MX Anywhere mouse models will be the first products to receive spare parts. Pricing information has not been disclosed yet, and Logitech hasn’t mentioned any other devices that will receive the iFixit genuine replacement parts and repair guide treatment.

[…]

Source: Logitech partners with iFixit for self-repairs

This sounds like a good idea, and I hope it is, but who else can supply repair kits? If it’s only iFixit, then aren’t we swapping one monopoly for another? It’s a kind of symbolic fixability. I love iFixit, they are great and I really like what they have done in the past, but I really hope the intent is not to create a repair duopoly that big companies can point to and say: “see, we are not a monopoly” whilst keeping prices artificially high.

Human DNA can be pulled from the air: A Boon For Science, While Terrifying Others

Environmental DNA sampling is nothing new. Rather than having to spot or catch an animal, instead the DNA from the traces they leave can be sampled, giving clues about their genetic diversity, their lineage (e.g. via mitochondrial DNA) and the population’s health. What caught University of Florida (UoF) researchers by surprise while they were using environmental DNA sampling to study endangered sea turtles, was just how much human DNA they found in their samples. This led them to perform a study on the human DNA they sampled in this way, with intriguing implications.

Ever since genetic sequencing became possible there have been many breakthroughs that have made it more precise, cheaper, and more versatile. The argument these UoF researchers make in their paper in Nature Ecology & Evolution is that there is a lot of potential in sampling human environmental DNA (eDNA) to study populations, much as is already done with wastewater sampling, only more universally. This could have great benefits for studying human populations, much as we already monitor other animal species using their eDNA and similar materials that are discarded every day as part of normal biological function.

The researchers were able to detect various genetic issues in the human eDNA they collected, demonstrating the viability of using it as a population health monitoring tool. The less exciting fallout of their findings was just how hard it is to prevent contamination of samples with human DNA, which could possibly affect studies. Meanwhile the big DNA elephant in the room is that of individual-level tracking, which is something that’s incredibly exciting to researchers who are monitoring wild animal populations. Unlike those animals, however, Homo sapiens are unique in that they’d object to such individual-level eDNA-based monitoring.

What the full implications of such new tools will be is hard to say, but they’re just one of the inevitable results as our genetic sequencing methods improve and humans keep shedding their DNA everywhere.

Source: Human DNA Is Everywhere: A Boon For Science, While Terrifying Others | Hackaday

The ‘invisible’ cellulose coatings that mitigate surface transmission of pathogens (kills covid on door handles)

Research has shown that a thin cellulose film can inactivate the SARS-CoV-2 virus within minutes, inhibit the growth of bacteria including E. coli, and mitigate contact transfer of pathogens.

The coating consists of a thin film of cellulose fiber that is invisible to the naked eye and is abrasion-resistant under dry conditions, making it suitable for use on high-traffic objects such as door handles and handrails.

The coating was developed by scientific teams from the University of Birmingham, Cambridge University, and FiberLean Technologies, who worked on a project to formulate treatments for glass, metal or laminate surfaces that would deliver long-lasting protection against the COVID-19 virus.

[…]

a coating made from micro-fibrillated cellulose (MFC)

[…]

The COVID-19 virus is known to remain active for several days on surfaces such as plastic and stainless steel, but for only a few hours on newspaper.

[…]

The researchers found that the porous nature of the film plays a significant role: it accelerates the evaporation rate of liquid droplets and introduces an imbalanced osmotic pressure across the bacterial membrane.

They then tested whether the coating could inhibit surface transmission of SARS-CoV-2. Here they found a three-fold reduction in infectivity when droplets containing the virus were left on the coating for 5 minutes, and, after 10 minutes, the infectivity fell to zero.

[…]

Professor Zhang commented, “The risk of surface transmission, as opposed to aerosol transmission, comes from large droplets which remain infective if they land on hard surfaces, where they can be transferred by touch. This surface coating technology uses sustainable materials and could potentially be used in conjunction with other antimicrobial actives to deliver a long-lasting and slow-release antimicrobial effect.”

The researchers confirmed the stability of the coating by mechanical scraping tests, in which the coating showed no noticeable damage when dry but was easily removed from the surface when wetted, making it convenient and suitable for daily cleaning and disinfection practice.

The paper is published in the journal ACS Applied Materials & Interfaces.

More information: Shaojun Qi et al, Porous Cellulose Thin Films as Sustainable and Effective Antimicrobial Surface Coatings, ACS Applied Materials & Interfaces (2023). DOI: 10.1021/acsami.2c23251

Source: The ‘invisible’ cellulose coatings that mitigate surface transmission of pathogens

LLM emergent behavior written off as rubbish – small models work fine but are measured poorly

[…] As defined in academic studies, “emergent” abilities refers to “abilities that are not present in smaller-scale models, but which are present in large-scale models,” as one such paper puts it. In other words, immaculate injection: increasing the size of a model infuses it with some amazing ability not previously present.

[…]

those emergent abilities in AI models are a load of rubbish, say computer scientists at Stanford.

Flouting Betteridge’s Law of Headlines, Rylan Schaeffer, Brando Miranda, and Sanmi Koyejo answer the question posed by their paper, Are Emergent Abilities of Large Language Models a Mirage?, in the affirmative.

[…]

When industry types talk about emergent abilities, they’re referring to capabilities that seemingly come out of nowhere for these models, as if something was being awakened within them as they grow in size. The thinking is that when these LLMs reach a certain scale, the ability to summarize text, translate languages, or perform complex calculations, for example, can emerge unexpectedly.

[…]

Stanford’s Schaeffer, Miranda, and Koyejo propose that when researchers are putting models through their paces and see unpredictable responses, it’s really due to poorly chosen methods of measurement rather than a glimmer of actual intelligence.

Most (92 percent) of the unexpected behavior detected, the team observed, was found in tasks evaluated via BIG-Bench, a crowd-sourced set of more than 200 benchmarks for evaluating large language models.

One test within BIG-Bench highlighted by the university trio is Exact String Match. As the name suggests, this checks a model’s output to see if it exactly matches a specific string without giving any weight to nearly right answers. The documentation even warns:

The EXACT_STRING_MATCH metric can lead to apparent sudden breakthroughs because of its inherent all-or-nothing discontinuity. It only gives credit for a model output that exactly matches the target string. Examining other metrics, such as BLEU, BLEURT, or ROUGE, can reveal more gradual progress.

The issue with using such pass-or-fail tests to infer emergent behavior, the researchers say, is that nonlinear output and lack of data in smaller models creates the illusion of new skills emerging in larger ones. Simply put, a smaller model may be very nearly right in its answer to a question, but because it is evaluated using the binary Exact String Match, it will be marked wrong whereas a larger model will hit the target and get full credit.

It’s a nuanced situation. Yes, larger models can summarize text and translate languages. Yes, larger models will generally perform better and can do more than smaller ones, but their sudden breakthrough in abilities – an unexpected emergence of capabilities – is an illusion: the smaller models are potentially capable of the same sort of thing but the benchmarks are not in their favor. The tests favor larger models, leading people in the industry to assume the larger models enjoy a leap in capabilities once they get to a certain size.

In reality, the change in abilities is more gradual as you scale up or down. The upshot for you and me is that applications may not need a huge but super powerful language model; a smaller one that is cheaper and faster to customize, test, and run may do the trick.
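The arithmetic behind this illusion is easy to sketch. Here is a toy Python illustration (ours, not the researchers’ code) that assumes independent per-token errors: per-token accuracy improves smoothly with scale, yet the exact-match rate on a ten-token answer looks like a sudden breakthrough.

```python
# Toy illustration of how an all-or-nothing metric manufactures
# "emergence": per-token accuracy grows smoothly, but exact-string-match
# requires every token of a 10-token answer to be correct, so its rate
# is roughly p ** 10 and appears to jump near the top of the scale.

answer_len = 10
scales = [0.5, 0.6, 0.7, 0.8, 0.9, 0.95, 0.99]  # smooth per-token accuracy

results = {}
for p in scales:
    exact_match = p ** answer_len  # binary credit over the whole answer
    results[p] = exact_match
    print(f"per-token accuracy {p:.2f} -> exact-match rate {exact_match:.3f}")
```

At per-token accuracy 0.90 the exact-match rate is still only about 0.35; at 0.99 it leaps past 0.90, even though the underlying skill improved steadily the whole way.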

[…]

In short, the supposed emergent abilities of LLMs arise from the way the data is being analyzed and not from unforeseen changes to the model as it scales. The researchers emphasize they’re not precluding the possibility of emergent behavior in LLMs; they’re simply stating that previous claims of emergent behavior look like ill-considered metrics.

[…]

Source: LLM emergent behavior written off as ‘a mirage’ by study • The Register

Fallout continues from fake net neutrality comments

Three digital marketing firms have agreed to pay $615,000 to resolve allegations that they submitted at least 2.4 million fake public comments to influence American internet policy.

New York Attorney General Letitia James announced last week the agreement with LCX, Lead ID, and Ifficient, each of which was found to have fabricated public comments submitted in 2017 to convince the Federal Communications Commission (FCC) to repeal net neutrality.

Net neutrality refers to a policy requiring internet service providers to treat people’s internet traffic more or less equally, which some ISPs opposed because they would have preferred to act as gatekeepers in a pay-to-play regime. The neutrality rules were passed in 2015 at a time when it was feared large internet companies would eventually eradicate smaller rivals by bribing ISPs to prioritize their connections and downplay the competition.

[…]

in 2017 Ajit Pai, appointed chairman of the FCC by the Trump administration, successfully spearheaded an effort to tear up those rules and remake US net neutrality so the rules would be more amenable to broadband giants. And there was a public comment period on the initiative.

It was a massive sham. The Office of the Attorney General (OAG) investigation [PDF] found that 18 million of 22 million comments submitted to the FCC were fake, both for and against net neutrality.

The broadband industry’s attempt in 2017 to have the FCC repeal the net neutrality rules accounted for more than 8.5 million fake comments at a cost of $4.2 million.

“The effort was intended to create the appearance of widespread grassroots opposition to existing net neutrality rules, which — as described in an internal campaign planning document — would help provide ‘cover’ for the FCC’s proposed repeal,” the report explained.

The report also stated an unidentified 19-year-old was responsible for more than 7.7 million of 9.3 million fake comments opposing the repeal of net neutrality. These were generated using software that fabricated identities. The origin of the other 1.6 million fake comments is unknown.

LCX, Lead ID, and Ifficient were said to have taken a different approach, one that allegedly involved reuse of old consumer data from different marketing or advocacy campaigns, purchased or obtained through misrepresentation. LCX is said to have obtained some of its data from “a large data breach file found on the internet.”

[…]

This was the second such agreement for the state of New York, which two years ago got a different set of digital marketing firms – Fluent, Opt-Intelligence, and React2Media – to pay $4.4 million to disgorge funds earned for distributing about 5.4 million fake public comments related to the FCC’s net neutrality process.

[…]

astroturfing – corporate messaging masquerading as grassroots public opinion.

[…]

“no federal laws or regulations exist that limit a public relations firm’s ability to engage in astroturfing.”

[…]

Source: Fallout continues from ‘fake net neutrality comment’ claims • The Register

Ex-Ubiquiti engineer behind “breathtaking” data theft, attempts to frame co-workers, calls it a security drill, assaults stock price: 6-year prison term

An ex-Ubiquiti engineer, Nickolas Sharp, was sentenced to six years in prison yesterday after pleading guilty in a New York court to stealing tens of gigabytes of confidential data, demanding a $1.9 million ransom from his former employer, and then publishing the data publicly when his demands were refused.

[…]

In a court document, Sharp claimed that Ubiquiti CEO Robert Pera had prevented Sharp from “resolving outstanding security issues,” and Sharp told the judge that this led to an “idiotic hyperfixation” on fixing those security flaws.

However, even if that was Sharp’s true motivation, US District Judge Katherine Polk Failla did not accept his justification of his crimes, which include wire fraud, intentionally damaging protected computers, and lying to the FBI.

“It was not up to Mr. Sharp to play God in this circumstance,” Failla said.

US attorney for the Southern District of New York, Damian Williams, argued that Sharp was not a “cybersecurity vigilante” but an “inveterate liar and data thief” who was “presenting a contrived deception to the Court that this entire offense was somehow just a misguided security drill.” Williams said that Sharp made “dozens, if not hundreds, of criminal decisions” and even implicated innocent co-workers to “divert suspicion.” Sharp had also already admitted in pre-sentencing that the cyber attack was planned for “financial gain.” Williams said Sharp did it seemingly out of “pure greed” and ego because Sharp “felt mistreated”—overworked and underpaid—by the IT company.

Court documents show that Ubiquiti spent “well over $1.5 million dollars and hundreds of hours of employee and consultant time” trying to remediate what Williams described as Sharp’s “breathtaking” theft. But the company lost much more than that when Sharp attempted to conceal his crimes—posing as a whistleblower, planting false media reports, and contacting US and foreign regulators to investigate Ubiquiti’s alleged downplaying of the data breach. Within a single day after Sharp planted false reports, stocks plummeted, causing Ubiquiti to lose over $4 billion in market capitalization value, court documents show.

[…]

In his sentencing memo, Williams said that Sharp’s characterization of the cyberattack as a security drill does not align with the timeline of events leading up to his arrest in December 2021. The timeline instead appears to reveal a calculated plan to conceal the data theft and extort nearly $2 million from Ubiquiti.

Sharp began working as a Ubiquiti senior software engineer and “Cloud Lead” in 2018, where he was paid $250,000 annually and had tasks including software development and cloud infrastructure security. About two years into the gig, Sharp purchased a VPN subscription to Surfshark in July 2020 and then seemingly began hunting for another job. By December 9, 2020, he’d lined up another job. The next day, he used his Ubiquiti security credentials to test his plan to copy data repositories while masking his IP address by using Surfshark.

Less than two weeks later, Sharp executed his plan, and he might have gotten away with it if not for a “slip-up” he never could have foreseen. While copying approximately 155 data repositories, an Internet outage temporarily disabled his VPN. When Internet service was restored, unbeknownst to Sharp, Ubiquiti logged his home IP address before the VPN tool could turn back on.

Two days later, Sharp was so bold as to ask a senior cybersecurity employee if he could be paid for submitting vulnerabilities to the company’s HackerOne bug bounty program, which seemed suspicious, court documents show. Still unaware of his slip-up, through December 26, 2020, Sharp continued to access company data using Surfshark, actively covering his trails by deleting evidence of his activity within a day and modifying evidence to make it seem like other Ubiquiti employees were using the credentials he used during the attack.

Sharp only stopped accessing the data when other employees discovered evidence of the attack on December 28, 2020. Seemingly unfazed, Sharp joined the team investigating the attack before sending his ransom email on January 7, 2021.

Ubiquiti chose not to pay the ransom and instead got the FBI involved. Soon after, Sharp’s slip-up showing his home IP put the FBI on his trail. At work, Sharp suggested his home IP was logged in an attempt to frame him, telling coworkers, “I’d be pretty fucking incompetent if I left my IP in [the] thing I requested, downloaded, and uploaded” and saying that would be the “shittiest cover up ever lol.”

While the FBI analyzed all of Sharp’s work devices, Sharp wiped and reset the laptop he used in the attack but brazenly left the laptop at home, where it was seized during a warranted FBI search in March 2021.

After the FBI search, Sharp began posing as a whistleblower, contacting journalists and regulators to falsely warn that Ubiquiti’s public disclosure and response to the cyberattack were insufficient. He said the company had deceived customers and downplayed the severity of the breach, which was actually “catastrophic.” The whole time, Williams noted in his sentencing memo, Sharp knew that the attack had been accomplished using his own employee credentials.

This was “far from a hacker targeting a vulnerability open to third parties,” Williams said. “Sharp used credentials legitimately entrusted to him by the company, to steal data and cover his tracks.”

“At every turn, Sharp acted consistent with the unwavering belief that his sophistication and cunning were sufficient to deceive others and conceal his crime,” Williams said.

[…]

Source: Ex-Ubiquiti engineer behind “breathtaking” data theft gets 6-year prison term | Ars Technica

Fake scientific papers are alarmingly common and becoming more so

When neuropsychologist Bernhard Sabel put his new fake-paper detector to work, he was “shocked” by what it found. After screening some 5000 papers, he estimates up to 34% of neuroscience papers published in 2020 were likely made up or plagiarized; in medicine, the figure was 24%. Both numbers, which he and colleagues report in a medRxiv preprint posted on 8 May, are well above levels they calculated for 2010—and far larger than the 2% baseline estimated in a 2022 publishers’ group report.

[…]

Journals are awash in a rising tide of scientific manuscripts from paper mills—secretive businesses that allow researchers to pad their publication records by paying for fake papers or undeserved authorship. “Paper mills have made a fortune by basically attacking a system that has had no idea how to cope with this stuff,” says Dorothy Bishop, a University of Oxford psychologist who studies fraudulent publishing practices. A 2 May announcement from the publisher Hindawi underlined the threat: It shut down four of its journals it found were “heavily compromised” by articles from paper mills.

Sabel’s tool relies on just two indicators—authors who use private, noninstitutional email addresses, and those who list an affiliation with a hospital. It isn’t a perfect solution, because of a high false-positive rate. Other developers of fake-paper detectors, who often reveal little about how their tools work, contend with similar issues.
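A heuristic this simple can be sketched in a few lines. How Sabel’s detector actually weighs or combines the two signals is not described here, so this version just counts red flags; the function name, domain list, and combination logic are all assumptions for illustration:

```python
# Hypothetical sketch of a two-signal screen like the one described above.
PRIVATE_DOMAINS = ("gmail.com", "hotmail.com", "163.com", "qq.com")

def red_flags(author_email: str, affiliation: str) -> int:
    flags = 0
    if author_email.lower().endswith(PRIVATE_DOMAINS):
        flags += 1  # private, noninstitutional email address
    if "hospital" in affiliation.lower():
        flags += 1  # hospital affiliation
    return flags

print(red_flags("j.doe@gmail.com", "City Hospital"))            # both signals fire
print(red_flags("j.doe@uni-example.edu", "Example University"))  # neither fires
```

Signals this crude also explain the high false-positive rate: plenty of legitimate authors use private email addresses or genuinely work at hospitals.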

[…]

To fight back, the International Association of Scientific, Technical, and Medical Publishers (STM), representing 120 publishers, is leading an effort called the Integrity Hub to develop new tools. STM is not revealing much about the detection methods, to avoid tipping off paper mills. “There is a bit of an arms race,” says Joris van Rossum, the Integrity Hub’s product director. He did say one reliable sign of a fake is referencing many retracted papers; another involves manuscripts and reviews emailed from internet addresses crafted to look like those of legitimate institutions.

Twenty publishers—including the largest, such as Elsevier, Springer Nature, and Wiley—are helping develop the Integrity Hub tools, and 10 of the publishers are expected to use a paper mill detector the group unveiled in April. STM also expects to pilot a separate tool this year that detects manuscripts simultaneously sent to more than one journal, a practice considered unethical and a sign they may have come from paper mills.

[…]

STM hasn’t yet generated figures on accuracy or false-positive rates because the project is too new. But catching as many fakes as possible typically produces more false positives. Sabel’s tool correctly flagged nearly 90% of fraudulent or retracted papers in a test sample. However, it marked up to 44% of genuine papers as fake, so results still need to be confirmed by skilled reviewers.
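Those figures imply a serious precision problem. A back-of-the-envelope calculation (recall and false-positive rate from the test sample above, base rates from the estimates quoted earlier; the arithmetic itself is ours):

```python
# With ~90% recall and ~44% false-positive rate, the share of flagged
# papers that are actually fake depends heavily on how common fakes are.
def precision(fake_rate: float, recall: float = 0.90, fp_rate: float = 0.44) -> float:
    true_pos = fake_rate * recall
    false_pos = (1 - fake_rate) * fp_rate
    return true_pos / (true_pos + false_pos)

for rate in (0.02, 0.24, 0.34):  # publishers' 2022 baseline; medicine; neuroscience
    print(f"fake base rate {rate:.0%}: {precision(rate):.0%} of flagged papers are fake")
```

At the 2% baseline, only about 4% of flagged papers would actually be fake – which is why results "still need to be confirmed by skilled reviewers."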

[…]

Publishers embracing gold open access—under which journals collect a fee from authors to make their papers immediately free to read when published—have a financial incentive to publish more, not fewer, papers. They have “a huge conflict of interest” regarding paper mills, says Jennifer Byrne of the University of Sydney, who has studied how paper mills have doctored cancer genetics data.

The “publish or perish” pressure that institutions put on scientists is also an obstacle. “We want to think about engaging with institutions on how to take away perhaps some of the [professional] incentives which can have these detrimental effects,” van Rossum says. Such pressures can push clinicians without research experience to turn to paper mills, Sabel adds, which is why hospital affiliations can be a red flag.

[…]

Source: Fake scientific papers are alarmingly common | Science | AAAS

A closed approach to building a detection tool is an incredibly bad idea – no one can really know what it is doing, and certain types of research will be flagged every time, for example. This type of tool especially needs to be accountable and changeable by the peers who have to review the papers it spits out as suspect. Only by keeping this type of tool open can it be improved by third parties who also have a vested interest in improving fake-detection rates (e.g. universities, which you would think have quite some smart people). Keeping it closed also lends a false sense of security – especially if the detection methods have already been leaked and paper mills from certain sources are circumventing them already. Security by obscurity is never, ever a good idea.

Millions of mobile phones come pre-infected with malware

Miscreants have infected millions of Androids worldwide with malicious firmware before the devices even shipped from their factories, according to Trend Micro researchers at Black Hat Asia.

This hardware is mainly cheapo Android mobile devices, though smartwatches, TVs, and other things are caught up in it.

The gadgets have their manufacturing outsourced to an original equipment manufacturer (OEM). That outsourcing makes it possible for someone in the manufacturing pipeline – such as a firmware supplier – to infect products with malicious code as they ship out, the researchers said.

This has been going on for a while, we think; for example, we wrote about a similar headache in 2017. The Trend Micro folks characterized the threat today as “a growing problem for regular users and enterprises.” So, consider this a reminder and a heads-up all in one.

[…]

This insertion of malware began as the price of mobile phone firmware dropped, we’re told. Competition between firmware distributors became so furious that eventually the providers could not charge money for their product.

“But of course there’s no free stuff,” said Trend Micro researcher Fedor Yarochkin, who explained that, as a result of this cut-throat situation, firmware started to come with an undesirable feature – silent plugins. The team analyzed dozens of firmware images looking for malicious software. They found over 80 different plugins, although many of those were not widely distributed.

The plugins that were the most impactful were those that had a business model built around them, were sold on the underground, and marketed in the open on places like Facebook, blogs, and YouTube.

The objective of the malware is to steal info or make money from information collected or delivered.

The malware turns the devices into proxies which are used to steal and sell SMS messages, take over social media and online messaging accounts, and monetize via adverts and click fraud.

One type of plugin, the proxy plugin, allows criminals to rent out devices for up to around five minutes at a time. For example, those renting control of a device could acquire data on keystrokes, geographical location, IP address, and more.

[…]

Through telemetry data, the researchers estimated that millions of infected devices exist globally, concentrated in Southeast Asia and Eastern Europe. A statistic self-reported by the criminals themselves, said the researchers, was around 8.9 million.

As for where the threats are coming from, the duo wouldn’t say specifically, although the word “China” showed up multiple times in the presentation, including in an origin story related to the development of the dodgy firmware. Yarochkin said the audience should consider where most of the world’s OEMs are located and make their own deductions.

“Even though we possibly might know the people who build the infrastructure for this business, it’s difficult to pinpoint how exactly this infection gets put into this mobile phone because we don’t know for sure at what moment it got into the supply chain,” said Yarochkin.

The team confirmed the malware was found in the phones of at least 10 vendors, with possibly around 40 more affected. Those seeking to avoid infected mobile phones can go some way toward protecting themselves by going high end.

[…]

“Big brands like Samsung, like Google took care of their supply chain security relatively well, but for threat actors, this is still a very lucrative market,” said Yarochkin.

Source: Millions of mobile phones come pre-infected with malware • The Register

Black hat presentation: Behind the Scenes: How Criminal Enterprises Pre-infect Millions of Mobile Devices

HP disables customers’ printers if they use ink cartridges from cheaper rivals

Hewlett-Packard, or HP, has sparked fury after issuing a recent “firmware” update which blocks customers from using cheaper, non-HP ink cartridges in its printers.

Customers’ devices were remotely updated in line with new terms which mean their printers will not work unless they are fitted with approved ink cartridges.

It prevents customers from using any cartridges other than those fitted with an HP chip, which are often more expensive. If the customer tries to use a non-HP ink cartridge, the printer will refuse to print.

HP printers used to display a warning when a “third-party” ink cartridge was inserted, but now printers will simply refuse to print altogether.

[…]

This is not the first time HP has angered its customers by blocking the use of other ink cartridges.

The firm has been forced to pay out millions in compensation to customers in America, Australia and across Europe since it first introduced dynamic security measures back in 2016.

Just last year the company paid $1.35m (£1m) to consumers in Belgium, Italy, Spain and Portugal who had bought printers not knowing they were equipped with the cartridge-blocking feature.

Last year consumer advocates called on the Competition and Markets Authority to investigate whether branded ink costs and “dynamic security” measures were fair to consumers, after finding that lesser-known brands of ink cartridges offered better value for money than major names.

The consumer group Which? said manufacturers were “actively blocking customers from exerting their right to choose the cheapest ink and therefore get a better deal”.

[…]

Source: HP disables customers’ printers if they use ink cartridges from cheaper rivals

That’s because the printer is not what they are selling you, it’s the stupidly overpriced ink. So no, you don’t own what you bought, they are saying.

European Media Freedom Act is a free pass to spread fake news, directly goes against DSA

“Disinformation is a threat to our democracies” is a statement with which virtually every political group in the European Parliament agrees. Many political statements have been made on the subject calling for more to be done to counter disinformation, especially since the Russian attack on Ukraine.

As part of that effort, the EU recently adopted the Digital Services Act (DSA), the legislation that many hope will provide the necessary regulatory framework to – at least partially – tackle the disinformation challenge. Unfortunately, there is a danger we might end up not seeing the positive results that the DSA promises to bring.

There are attempts to undermine the DSA with exemptions for media content in the European Media Freedom Act (EMFA), currently on the EU legislators’ table. This contains a measure which would effectively reverse the DSA provisions and prevent platforms like Twitter and Facebook from moderating content coming from anyone claiming to be a ‘media’. A very bad idea that was already, after much debate, rejected in the DSA.

Let’s see how this would work in practice. If any self-declared media outlet writes that “The European Parliament partners with Bill Gates and George Soros to insert 5G surveillance chips into vaccines”, and this article is published on Twitter or Facebook, for instance, the platforms will first have to contact the media outlet. They would then wait for 24 or 48 hours before possibly adding a fact-check, or would not be able to do it at all if some of the most recent amendments go through.

Those who have encountered such disinformation know that the first 24 hours are critical. As the old adage goes, “A lie gets halfway around the world before the truth puts on its boots”. Enabling such a back-and-forth exchange will only benefit the spread of disinformation, which can be further amplified by algorithms and become almost unstoppable.

Many journalists and fact-checkers have complained in the past that platforms were not doing enough to reduce the visibility of such viral disinformation. The Commission itself mentions that “Global online platforms act as gateways to media content, with business models that tend to disintermediate access to media services and amplify polarising content and disinformation.” Why on Earth would the EU then encourage further polarisation and disinformation by preventing content moderation?

This is not only a question of how such a carveout would benefit bogus media outlets. Some mainstream news sources with solid reputations and visibility can make mistakes, or are often the prime targets of those running disinformation campaigns. And quite successfully, as the recent example from the acclaimed investigations by Forbidden Stories has shown. In Hungary and Poland, state media that disseminate propaganda, in some cases even pro-Russian narratives, would be exempted from content moderation as well.

It might be counterintuitive, but the role of the media in disinformation and influence operations is huge. EU DisInfoLab sees it in virtually every single investigation that we do.

This loophole in the EMFA will make it hard if not impossible for the Commission to enforce the DSA against the biggest platforms. Potentially we would have to wait for the Court of Justice to solve the conflict between the two laws: the DSA mandating platforms to do content moderation and the EMFA legally preventing them from doing it. This would not be a good look for the EU legislature and until a decision of the Court comes, what will platforms do? They will likely stop moderating anything that comes close to being a ‘media’ just to avoid difficulties and costs.

We really don’t need any media exemption. There is no evidence to suggest that media content over-moderation is a systemic issue, and the impact assessment by the Commission does not suggest that either. With the DSA, Europe has just adopted horizontal content moderation rules where media freedom and plurality are at the core. Surely we should rather give a chance for the DSA to work, instead of saying it already failed before it is even applicable.

A media exemption will not help media freedom and plurality – on the contrary, it will enable industrial-scale disinformation production, reduce the visibility of reputable media, and erode society’s trust in it even further. Last year, Maria Ressa and Dmitry Muratov, journalists and 2021 Nobel Peace Prize laureates, called on the EU to ensure that no media exemption be included in any tech or media legislation, in their 10-point plan to address our information crisis. It was supported by more than 100 civil society organisations.

MEPs and member states working on the EMFA must see the risks of disinformation and other harmful content that any carveout for media would create. The decision they are facing is clear: either flood Europe with harmful content or prioritise the safety of online users by strongly enforcing horizontal content moderation rules in the DSA.

Source: European Media Freedom Act: No to any media exemption

Google introduces PaLM 2 large language model

[…]

Building on this work, today we’re introducing PaLM 2, our next generation language model. PaLM 2 is a state-of-the-art language model with improved multilingual, reasoning and coding capabilities.

  • Multilinguality: PaLM 2 is more heavily trained on multilingual text, spanning more than 100 languages. This has significantly improved its ability to understand, generate and translate nuanced text — including idioms, poems and riddles — across a wide variety of languages, a hard problem to solve. PaLM 2 also passes advanced language proficiency exams at the “mastery” level.
  • Reasoning: PaLM 2’s wide-ranging dataset includes scientific papers and web pages that contain mathematical expressions. As a result, it demonstrates improved capabilities in logic, common sense reasoning, and mathematics.
  • Coding: PaLM 2 was pre-trained on a large quantity of publicly available source code datasets. This means that it excels at popular programming languages like Python and JavaScript, but can also generate specialized code in languages like Prolog, Fortran and Verilog.

A versatile family of models

Even as PaLM 2 is more capable, it’s also faster and more efficient than previous models — and it comes in a variety of sizes, which makes it easy to deploy for a wide range of use cases.

[…]

PaLM 2 shows us the impact of highly capable models of various sizes and speeds — and that versatile AI models reap real benefits for everyone

[…]

We’re already at work on Gemini — our next model created from the ground up to be multimodal, highly efficient at tool and API integrations, and built to enable future innovations, like memory and planning.

[…]

Source: Google AI: What to know about the PaLM 2 large language model

YouTube begins warning: ‘Ad blockers are not allowed’

YouTube has begun showing a pop-up to some viewers warning them that “ad blockers are not allowed” on the video-sharing site.

The banner, which you can see below, appears if the Google subsidiary reckons you’re using some kind of content blocker that prevents videos from being interrupted by or book-ended with adverts.

According to YouTube, this is an experiment and only a small number of watchers will see the pop-up when browsing YouTube.com. The box tells users, “it looks like you may be using an ad blocker,” and reminds them that “ads allow YouTube to stay free for billions of users worldwide.”

It also urges you to “go ad-free with YouTube Premium, and creators can still get paid from your subscription.”

There are two options presented: a button to “allow YouTube ads,” and a button to sign up for YouTube Premium, an ad-free subscription that costs $11.99 a month, at least here in the United States.

Those who have seen the pop-up say they can ignore those options, close the pop-up, and continue blocking ads as usual – though for how long, who’s to say? There is a link to click if you’re not using an ad blocker and want to report a false detection.

[Screenshot: what the YouTube ad block warning looks like … Hat tip: Reddit]

“One ad before each video was fine, but they got greedy and started playing multiple unskippable 30-second ads, that’s when I went for ad block,” as one viewer put it. “There is zero chance I am ever deactivating it or paying for Premium now, that ship has sailed.”

[…]

Source: YouTube begins warning: ‘Ad blockers are not allowed’ • The Register

Scientists discover microbes in the Alps and Arctic that can digest plastic at low temperatures

Finding, cultivating, and bioengineering organisms that can digest plastic not only aids in the removal of pollution, but is now also big business. Several microorganisms that can do this have already been found, but when their enzymes that make this possible are applied at an industrial scale, they typically only work at temperatures above 30°C.

 

The heating required means that industrial applications remain costly to date, and aren’t carbon-neutral. But there is a possible solution to this problem: finding specialist cold-adapted microbes whose enzymes work at lower temperatures.

Scientists from the Swiss Federal Institute WSL knew where to look for such microorganisms: at high altitudes in the Alps of their country, or in the polar regions. Their findings are published in Frontiers in Microbiology.

“Here we show that novel microbial taxa obtained from the ‘plastisphere’ of alpine and arctic soils were able to break down plastics at 15°C,” said first author Dr. Joel Rüthi, currently a guest scientist at WSL. “These organisms could help to reduce the costs and environmental burden of an enzymatic recycling process for plastics.”

[…]

None of the strains were able to digest PE (polyethylene), even after 126 days of incubation on these plastics. But 19 (56%) of the strains, including 11 fungi and eight bacteria, were able to digest PUR (polyurethane) at 15°C, while 14 fungi and three bacteria were able to digest the plastic mixtures of PBAT (polybutylene adipate terephthalate) and PLA (polylactic acid). Nuclear magnetic resonance (NMR) and a fluorescence-based assay confirmed that these strains were able to chop up the PBAT and PLA polymers into smaller molecules.

[…]

The best performers were two uncharacterized fungal species in the genera Neodevriesia and Lachnellula: these were able to digest all of the tested plastics except PE.

[…]

Source: Scientists discover microbes in the Alps and Arctic that can digest plastic at low temperatures

OpenAI uses GPT-4 to explain neurons in language models, open-sources the tools and data

[…]

One simple approach to interpretability research is to first understand what the individual components (neurons and attention heads) are doing. This has traditionally required humans to manually inspect neurons to figure out what features of the data they represent. This process doesn’t scale well: it’s hard to apply it to neural networks with tens or hundreds of billions of parameters. We propose an automated process that uses GPT-4 to produce and score natural language explanations of neuron behavior and apply it to neurons in another language model.

This work is part of the third pillar of our approach to alignment research: we want to automate the alignment research work itself. A promising aspect of this approach is that it scales with the pace of AI development. As future models become increasingly intelligent and helpful as assistants, we will find better explanations.

How it works

Our methodology consists of running 3 steps on every neuron.

[…]

Step 1: Generate explanation using GPT-4

Given a GPT-2 neuron, generate an explanation of its behavior by showing relevant text sequences and activations to GPT-4.

[…]

Step 2: Simulate using GPT-4

Simulate what a neuron that fired for the explanation would do, again using GPT-4

[…]

Step 3: Compare

Score the explanation based on how well the simulated activations match the real activations
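The three steps can be sketched in miniature. This is not OpenAI’s released code; it just illustrates the scoring idea in step 3, using a plain Pearson correlation between real and simulated activations (the toy activation values below are invented):

```python
from math import sqrt

def pearson(xs, ys):
    # Correlation between two equal-length activation sequences.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

real      = [0.0, 0.1, 0.9, 0.0, 0.8, 0.1]  # neuron's true activations per token
simulated = [0.1, 0.0, 0.7, 0.2, 0.9, 0.0]  # activations GPT-4 simulates from the explanation
print(f"explanation score ~ {pearson(real, simulated):.2f}")
```

A perfect explanation would yield simulated activations identical to the real ones (score 1.0); the 0.8 threshold mentioned below corresponds to an explanation accounting for most of the neuron’s top-activating behavior.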

[…]

What we found

Using our scoring methodology, we can start to measure how well our techniques work for different parts of the network and try to improve the technique for parts that are currently poorly explained. For example, our technique works poorly for larger models, possibly because later layers are harder to explain.

[Chart: explanation scores by size of the model being interpreted, for models from about 1e5 to 1e9 parameters; scores span roughly 0.02 to 0.12]

Although the vast majority of our explanations score poorly, we believe we can now use ML techniques to further improve our ability to produce explanations. For example, we found we were able to improve scores by:

  • Iterating on explanations. We can increase scores by asking GPT-4 to come up with possible counterexamples, then revising explanations in light of their activations.
  • Using larger models to give explanations. The average score goes up as the explainer model’s capabilities increase. However, even GPT-4 gives worse explanations than humans, suggesting room for improvement.
  • Changing the architecture of the explained model. Training models with different activation functions improved explanation scores.

We are open-sourcing our datasets and visualization tools for GPT-4-written explanations of all 307,200 neurons in GPT-2, as well as code for explanation and scoring using publicly available models on the OpenAI API. We hope the research community will develop new techniques for generating higher-scoring explanations and better tools for exploring GPT-2 using explanations.

We found over 1,000 neurons with explanations that scored at least 0.8, meaning that according to GPT-4 they account for most of the neuron’s top-activating behavior. Most of these well-explained neurons are not very interesting. However, we also found many interesting neurons that GPT-4 didn’t understand. We hope as explanations improve we may be able to rapidly uncover interesting qualitative understanding of model computations.

Source: Language models can explain neurons in language models

Coqui.ai Text to Speech library – create your own voice

🐸TTS is a library for advanced Text-to-Speech generation. It’s built on the latest research and designed to achieve the best trade-off among ease of training, speed, and quality. 🐸TTS comes with pretrained models and tools for measuring dataset quality, and is already used in 20+ languages for products and research projects.

GitHub page: https://github.com/coqui-ai/TTS

Ed Sheeran, Once Again, Demonstrates How Modern Copyright Is Destroying, Rather Than Helping Musicians

To hear the recording industry tell the story, copyright is the only thing protecting musicians from poverty and despair. Of course, that’s always been a myth. Copyright was designed to benefit the middlemen and gatekeepers, such as the record labels, over the artists themselves. That’s why the labels have a long history of never paying artists.

But over the last few years, Ed Sheeran has been highlighting the ways in which (beyond the “who gets paid” aspect of all of this) modern copyright is stifling rather than incentivizing music creation — directly in contrast to what we’re told it’s supposed to be doing.

We’ve talked about Sheeran before, as he’s been sued repeatedly by people claiming that his songs sound too much like other songs. Sheeran has always taken a much more open approach to copyright and music, noting that kids pirating his music is how he became famous in the first place. He’s also stood up for kids who had accounts shut down via copyright claims for playing his music.

But the lawsuits have been where he’s really highlighted the absurdity of modern copyright law. After winning one of the lawsuits a year ago, he put out a heartfelt statement on how ridiculous the whole thing was. A key part:

There’s only so many notes and very few chords used in pop music. Coincidence is bound to happen if 60,000 songs are being released every day on Spotify—that’s 22 million songs a year—and there’s only 12 notes that are available.

In the aftermath of this, Sheeran has said that he’s now filming all of his recent songwriting sessions, just in case he needs to provide evidence that he and his songwriting partners came up with a song on their own, which is depressing in its own right.

[…]

With this latest lawsuit, it wasn’t actually a songwriter suing. It was a private equity firm that had purchased the rights from one of the songwriters (not Marvin Gaye) of Marvin Gaye’s hit song “Let’s Get it On.”

The claim over Thinking Out Loud was originally lodged in 2018, not by Gaye’s family but by investment banker David Pullman and a company called Structured Asset Sales, which has acquired a portion of the estate of Let’s Get It On co-writer Ed Townsend.

Thankfully, Sheeran won the case as the jury sided with him over Structured Asset Sales. Sheeran, once again, used the attention to highlight just how broken copyright is if these lawsuits are what’s coming out of it:

“I’m obviously very happy with the outcome of the case, and it looks like I’m not having to retire from my day job after all. But at the same time I’m unbelievably frustrated that baseless claims like this are able to go to court.

“We’ve spent the last eight years talking about two songs with dramatically different lyrics, melodies, and four chords which are also different, and used by songwriters every day all over the world. These chords are common building blocks used long before Let’s Get it On was written, and will be used to make music long after we’re all gone.

“They are in a songwriters’ alphabet, our toolkit, and should be there for all of us to use. No one owns them or the way that they are played, in the same way that no one owns the color blue.”

[…]

Source: Ed Sheeran, Once Again, Demonstrates How Modern Copyright Is Destroying, Rather Than Helping Musicians | Techdirt

Microsoft Tests Sticking Ads in Windows 11 Settings Menu as Well as Start Menu

[…]

In addition to ads in the Start menu, the latest test build for Windows 11 includes notices for a Microsoft 365 trial and more in the Settings menu.

On Friday, Windows beta user and routine leaker Albacore shared several screenshots of the latest Insider Preview build 23451. These shots come from the ultra-early Canary test build and show a new “Home” tab in Settings that includes a notice to “Try Microsoft 365,” which appears to link to a free trial of the company’s office apps suite. There’s also a notice for OneDrive, and another asking users to finish setting up a Microsoft account so they can use the 365 apps and Microsoft’s cloud storage on the desktop. Another notice in the Accounts tab also blasts users with a request to sign in to their Microsoft account.

These ads are very similar to other preview builds with so-called “badging” that shows up when users click on the Start menu. In that menu, the ads are more subtle and ask users to “Sign in to your Microsoft account” or advertise to users that they can “Use Microsoft 365 for free,” of course ignoring that users have to input their credit card information to access their free month of office apps.

[…]

Source: Microsoft Tests Sticking Ads in Windows 11 Settings Menu

Mercedes Locks Better EV Engine Performance (Which Your Car Already Has and You Paid For) Behind a Subscription

Last year BMW took ample heat for its plans to turn heated seats into a costly $18 per month subscription in numerous countries. As we noted at the time, BMW is already including the hardware in new cars and adjusting the sale price accordingly. So it’s effectively charging users a new, recurring fee to enable technology that already exists in the car and consumers already paid for.

The move portends a rather idiotic and expensive future for consumers that’s arriving faster than you’d think. Consumers unsurprisingly aren’t too keen on paying an added subscription for tech that already exists in the car and was already factored into the retail price, but the lure of consistent additional revenue they can nudge ever skyward pleases automakers and Wall Street alike.

Mercedes had already been toying with this idea in its traditional gas vehicles, but now says it’s considering making better EV engine performance an added subscription surcharge:

Mercedes-Benz electric vehicle owners in North America who want a little more power and speed can now buy 60 horsepower for just $60 a month or, on other models, 80 horsepower for $90 a month.

They won’t have to visit a Mercedes dealer to get the upgrade either, or even leave their own driveway. The added power, which will provide a nearly one second decrease in zero-to-60 acceleration, will be available through an over-the-air software patch.

Again, this is simply creating artificial restrictions and then charging consumers extra to bypass them. But this being America, there will indisputably be no shortage of dumb people with disposable income willing to burn money as part of a misguided craving for status.

If you don’t want to pay monthly, Mercedes will also let you pay a one time flat fee (usually several thousand dollars) to remove the artificial restrictions they’ve imposed on your engine. That’s, of course, creating additional upward pricing funnel efforts on top of the industry’s existing efforts to upsell you on a rotating crop of trims, tiers, and options you probably didn’t want.

It’s not really clear that regulators have any interest in cracking down on charging dumb people extra for something they already owned and paid for. After all, ripping off gullible consumers is effectively now considered little more than creative marketing by a notable segment of government “leaders” (see: regulatory apathy over misleading hidden fees in everything from hotels to cable TV).

[…]

Source: Mercedes Locks Better EV Engine Performance Behind Annoying Subscription Paywalls | Techdirt

So you pay for something which is in YOUR car but you can’t use it until you pay… more!

Yet another problem with recycling: It spews microplastics

[…]

an alarming new study has found that even when plastic makes it to a recycling center, it can still end up splintering into smaller bits that contaminate the air and water. This pilot study focused on a single new facility where plastics are sorted, shredded, and melted down into pellets. Along the way, the plastic is washed several times, sloughing off microplastic particles—fragments smaller than 5 millimeters—into the plant’s wastewater.

Because there were multiple washes, the researchers could sample the water at four separate points along the production line. (They are not disclosing the identity of the facility’s operator, who cooperated with their project.) This plant was actually in the process of installing filters that could snag particles larger than 50 microns (a micron is a millionth of a meter), so the team was able to calculate the microplastic concentrations in raw versus filtered discharge water—basically a before-and-after snapshot of how effective filtration is.

Their microplastics tally was astronomical. Even with filtering, they calculate that the total discharge from the different washes could produce up to 75 billion particles per cubic meter of wastewater. Depending on the recycling facility, that liquid would ultimately get flushed into city water systems or the environment. In other words, recyclers trying to solve the plastics crisis may in fact be accidentally exacerbating the microplastics crisis, which is coating every corner of the environment with synthetic particles.

[…]

The good news here is that filtration makes a difference: Without it, the researchers calculated that this single recycling facility could emit up to 6.5 million pounds of microplastic per year. Filtration got it down to an estimated 3 million pounds. “So it definitely was making a big impact when they installed the filtration,” says Brown. “We found particularly high removal efficiency of particles over 40 microns.”
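A quick back-of-the-envelope check of the filtration figures quoted above:

```python
# Figures quoted in the article, in millions of pounds of microplastic
# emitted per year by this single recycling facility.
unfiltered = 6.5
filtered = 3.0

# Fraction of emitted mass that filtration removes.
removed_fraction = (unfiltered - filtered) / unfiltered
print(f"Filtration removes roughly {removed_fraction:.0%} of the emitted mass")
```

So even state-of-the-art filtration, by these estimates, removes only a little over half of the microplastic mass leaving the plant.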

[…]

Depending on the recycling facility, that wastewater might next flow to a sewer system and eventually to a treatment plant that is not equipped to filter out such small particles before pumping the water into the environment. But, says Enck, “some of these facilities might be discharging directly into groundwater. They’re not always connected to the public sewer system.” That means the plastics could end up in the water people use for drinking or irrigating crops.

The full extent of the problem isn’t yet clear, as this pilot study observed just one facility. But because it was brand-new, it was probably a best-case scenario, says Steve Allen, a microplastics researcher at the Ocean Frontiers Institute and coauthor of the new paper. “It is a state-of-the-art plant, so it doesn’t get any better,” he says. “If this is this bad, what are the others like?”

[…]

Still, researchers like Brown don’t think that we should abandon recycling. This new research shows that while filters can’t stop all the microplastics from leaving a recycling facility, they at least help substantially. “I really don’t want it to suggest to people that we shouldn’t recycle, and to give it a completely negative reputation,” she says. “What it really highlights is that we just really need to consider the impacts of the solutions.”

Scientists and anti-pollution groups agree that the ultimate solution isn’t relying on recycling or trying to pull trash out of the ocean, but massively cutting plastic production. “​​I just think this illustrates that plastics recycling in its traditional form has some pretty serious problems,” says Enck. “This is yet another reason to do everything humanly possible to avoid purchasing plastics.”

Source: Yet another problem with recycling: It spews microplastics | Ars Technica

Finnish newspaper hides Ukraine news reports for Russians in online game

A Finnish newspaper is circumventing Russian media restrictions by hiding news reports about the war in Ukraine in an online game popular among Russian gamers.

“While Helsingin Sanomat and other foreign independent media are blocked in Russia, online games have not been banned so far,” said Antero Mukka, the editor-in-chief of Helsingin Sanomat.

The newspaper was bypassing Russia’s censorship through the first-person shooter game Counter-Strike, where gamers battle against each other as terrorists and counter-terrorists in timed matches.

While the majority of matches are played on about a dozen official levels or maps released by the publisher Valve, players can also create custom maps that anyone can download and use.

The newspaper’s initiative was unveiled on World Press Freedom Day on Wednesday.

“To underline press freedom, [in the game] we have now built a Slavic city, called Voyna, meaning war in Russian,” Mukka said.

In the basement of one of the apartment buildings that make up the Soviet-inspired cityscape, Helsingin Sanomat hid a room where players can find Russian-language reporting by the newspaper’s war correspondents in Ukraine.

“In the room, you will find our documentation of what the reality of the war in Ukraine is,” Mukka said.

The walls of the digital room, lit up by red lights, are plastered with news articles and pictures reporting on events such as the massacres in the Ukrainian towns of Bucha and Irpin.

On one of the walls, players can find a map of Ukraine that details reported attacks on the civilian population, while a Russian-language recording reading Helsingin Sanomat articles aloud plays in the background.

This was “information that is not available from Russian state propaganda sources”, Mukka said.

Since its release on Monday, the map has been downloaded more than 2,000 times, although the paper cannot currently track downloads geographically.

“This definitely underlines the fact that every attempt to obstruct the flow of information and blind the eyes of the public is doomed to fail in today’s world,” Mukka said.

He said an estimated 4 million Russians played the game. “These people may often be in the mobilisation or drafting age.”

“I think Russians also have the right to know independent and fact-based information, so that they can also make their own life decisions,” he added.

Source: Finnish newspaper hides Ukraine news reports for Russians in online game | Censorship | The Guardian

Microsoft is forcing Outlook and Teams to open links in Edge, ignore OS default browser settings

Microsoft Edge is a good browser but for some reason Microsoft keeps trying to shove it down everyone’s throat and make it more difficult to use rivals like Chrome or Firefox. Microsoft has now started notifying IT admins that it will force Outlook and Teams to ignore the default web browser on Windows and open links in Microsoft Edge instead.

Reddit users have posted messages from the Microsoft 365 admin center that reveal how Microsoft is going to roll out this change. “Web links from Azure Active Directory (AAD) accounts and Microsoft (MSA) accounts in the Outlook for Windows app will open in Microsoft Edge in a single view showing the opened link side-by-side with the email it came from,” reads a message to IT admins from Microsoft.

While this won’t affect the default browser setting in Windows, it’s yet another part of Microsoft 365 and Windows that totally ignores your default browser choice for links. Microsoft already does this with the Widgets system in Windows 11 and even the search experience, where you’ll be forced into Edge if you click a link even if you have another browser set as default.

IT admins aren’t happy with many complaining in various threads on Reddit, spotted by Neowin. If Outlook wasn’t enough, Microsoft says “a similar experience will arrive in Teams” soon with web links from chats opening in Microsoft Edge side-by-side with Teams chats.

[…]

The notifications to IT admins come just weeks after Microsoft promised significant changes to the way Windows manages which apps open certain files or links by default. At the time Microsoft said it believed “we have a responsibility to ensure user choices are respected” and that it’s “important that we lead by example with our own first party Microsoft products.” Forcing people into Microsoft Edge and ignoring default browsers is anything but respecting user choice, and it’s gross that Microsoft continues to abuse this.

Microsoft tested a similar change to the default Windows 10 Mail app in 2018, in an attempt to force people into Edge for email links. That never came to pass, thanks to a backlash from Windows 10 testers. A similar change in 2020 saw Microsoft try and force Chrome’s default search engine to Bing using the Office 365 installer, and IT admins weren’t happy then either.

[…]

Source: Microsoft is forcing Outlook and Teams to open links in Edge, and IT admins are angry – The Verge

Researchers See Through a Mouse’s Eyes by Decoding Brain Signals

[…] a team of researchers from the École Polytechnique Fédérale de Lausanne (EPFL) successfully developed a machine-learning algorithm that can decode a mouse’s brain signals and reproduce images of what it’s seeing.

[…]

The mice were shown a black and white movie clip from the 1960s of a man running to a car and then opening its trunk. While the mice were watching the clip, scientists measured and recorded their brain activity using two approaches: electrode probes inserted into their brains’ visual cortex region, as well as optical probes for mice that had been genetically engineered so that the neurons in their brains glow green when firing and transmitting information. That data was then used to train a new machine learning algorithm called CEBRA.


When applied to the brain signals captured from a new mouse watching the black-and-white movie clip for the first time, the CEBRA algorithm was able to correctly identify the specific frames the mouse was seeing as it watched. Because CEBRA was also trained on that clip, it could also generate matching frames that were a near-perfect match, albeit with the occasional telltale distortions of AI-generated imagery.

[…]

This research involved a very specific (and short) piece of footage that the machine learning algorithm was also familiar with. In its current form, CEBRA also only takes into account the activity of about 1% of the neurons in a mouse’s brain, so there’s definitely room for its accuracy and capabilities to improve. The research also isn’t just about decoding what a brain sees: a study published in the journal Nature shows that CEBRA can also be used to “predict the movements of the arm in primates” and “reconstruct the positions of rats as they freely run around an arena.” It’s a potentially far more accurate way to peer into the brain and understand how neural activity correlates with what is being processed.
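The decoding step described here, matching a new viewing's neural embedding to the frames seen during training, can be illustrated with a toy nearest-neighbour sketch on synthetic data. This is not the actual CEBRA pipeline; every name and number below is made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for learned embeddings: each movie frame maps to a point in
# a low-dimensional latent space; repeated viewings add observation noise.
n_frames, dim = 30, 8
frame_latents = rng.normal(size=(n_frames, dim))  # "true" embedding per frame

# "Training" trials: five noisy observations of each frame's embedding.
train_X = np.repeat(frame_latents, 5, axis=0)
train_X = train_X + 0.1 * rng.normal(size=train_X.shape)
train_y = np.repeat(np.arange(n_frames), 5)  # frame label per observation

# "Test" trial: a new, previously unseen viewing of the same movie.
test_X = frame_latents + 0.1 * rng.normal(size=(n_frames, dim))

# Nearest-neighbour decoding: for each test embedding, predict the frame
# whose training observation lies closest in latent space.
dists = np.linalg.norm(test_X[:, None, :] - train_X[None, :, :], axis=2)
pred = train_y[np.argmin(dists, axis=1)]

accuracy = float(np.mean(pred == np.arange(n_frames)))
```

With well-separated latents and small noise the decoder recovers nearly every frame; the hard part in the real work is learning embeddings from raw neural activity in which frames separate this cleanly.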

Source: Researchers See Through a Mouse’s Eyes by Decoding Brain Signals