Supreme Court Limits EPA’s Authority Under the Clean Water Act – yay, trash the USA!

The U.S. Supreme Court on Thursday significantly curtailed the power of the Environmental Protection Agency to regulate the nation’s wetlands and waterways. It was the court’s second decision in a year limiting the ability of the agency to enact anti-pollution regulations and combat climate change. The challenge to the regulations was brought by Michael and Chantell Sackett, who bought property to build their dream house about 500 feet away from Idaho’s scenic Priest Lake, a 19-mile stretch of clear water that is fed by mountain streams and bordered by state and national parkland. Three days after the Sacketts started excavating their property, the EPA stopped work on the project because the couple had failed to get a permit for disturbing the wetlands on their land. Now a conservative Supreme Court majority has used the Sacketts’ case to roll back longstanding rules adopted to carry out the 51-year-old Clean Water Act. While the nine justices agreed that the Sacketts should prevail, they divided 5-to-4 as to how far to go in limiting the EPA’s authority.

Writing for the court majority (PDF), Justice Samuel Alito said that the navigable waters of the United States regulated by the EPA under the statute do not include many previously regulated wetlands. Rather, he said, the CWA extends only to streams, oceans, rivers and lakes, and those wetlands with a “continuous surface connection to those bodies.” Justice Brett Kavanaugh, joined by the court’s three liberal members, disputed Alito’s reading of the statute, noting that since 1977, when the CWA was amended to include adjacent wetlands, eight consecutive presidential administrations, Republican and Democratic, have interpreted the law to cover wetlands that the court has now excluded. Kavanaugh said that by narrowing the act to cover only adjoining wetlands, the court’s new test will have “significant repercussions for water quality and flood control throughout the United States.” In addition to joining Kavanaugh’s opinion, the court’s liberals signed on to a separate opinion by Justice Elena Kagan. Pointing to the air and water pollution cases, she accused the majority of appointing itself, instead of Congress, as the national policymaker on the environment. President Biden, in a statement, called the decision “disappointing.” It “upends the legal framework that has protected America’s waters for decades,” he said. “It also defies the science that confirms the critical role of wetlands in safeguarding our nation’s streams, rivers, and lakes from chemicals and pollutants that harm the health and wellbeing of children, families, and communities.”

“I don’t think it’s an overstatement to say it’s catastrophic for the Clean Water Act,” said Jim Murphy of the National Wildlife Federation. Wetlands play an “enormous role in protecting the nation’s water,” he said. “They’re really the kidneys of water systems and they’re also the sponges. They absorb a lot of water on the landscape. So they’re very important water features and they’re very important to the quality of the water that we drink, swim, fish, boat and recreate in.”

Source: Supreme Court Limits EPA’s Authority Under the Clean Water Act – Slashdot

Virgin Galactic flies final test before opening for business

At 0915 Mountain Time (1515 UTC), the VMS Eve mothership took off from New Mexico’s Spaceport America, carrying its spacecraft to an altitude of 44,500 feet (over 13.5km). Pilots on VSS Unity, which rides along with VMS Eve, then fired its rockets to take its six passengers even higher – to 54.2 miles (over 87.2km) at nearly three times the speed of sound.

After a few minutes of weightlessness, during which the crew could gawp at Earth’s totally not flat surface from suborbital space, the craft descended and landed back safely at 1037 MT (1647 UTC).

The entire crew consisted of Virgin Galactic employees. Pilot Nicola Pecile and commander Jameel Janjua flew VMS Eve, whilst Unity’s crew was another pilot and commander pair – CJ Sturckow and Mike Masucci – plus astronaut instructors Beth Moses and Luke Mays, and mission specialists Christopher Huie and Jamila Gilbert.

CEO Michael Colglazier said the latest flight – the 25th test conducted by Richard Branson’s space tourism venture – was the last before Virgin Galactic opens for business next month.

[…]

Tickets for a seat on the VSS Unity spacecraft aren’t cheap. Space fans hoping to experience brief weightlessness and a taste of space will have to fill out an application form, and fork over $10,000 upfront just to get Virgin Galactic to consider them for a ticket. The lucky few should expect to pay a total of $450,000 for a ride aboard the VSS Unity.

[…]

Source: Virgin Galactic flies final test before opening for business • The Register

New superbug-killing antibiotic discovered using AI

Scientists have used artificial intelligence (AI) to discover a new antibiotic that can kill a deadly species of superbug.

The AI helped narrow down thousands of potential chemicals to a handful that could be tested in the laboratory.

The result was a potent, experimental antibiotic called abaucin, which will need further tests before being used.

The researchers in Canada and the US say AI has the power to massively accelerate the discovery of new drugs.

It is the latest example of how the tools of artificial intelligence can be a revolutionary force in science and medicine.

[…]

To find a new antibiotic, the researchers first had to train the AI. They took thousands of drugs where the precise chemical structure was known, and manually tested them on Acinetobacter baumannii to see which could slow it down or kill it.

This information was fed into the AI so it could learn the chemical features of drugs that could attack the problematic bacterium.

The AI was then unleashed on a list of 6,680 compounds whose effectiveness was unknown. The results – published in Nature Chemical Biology – showed it took the AI an hour and a half to produce a shortlist.

The researchers tested 240 in the laboratory, and found nine potential antibiotics. One of them was the incredibly potent antibiotic abaucin.
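For the curious, here is a minimal sketch of that screen-then-shortlist workflow, assuming Morgan fingerprints and a random-forest classifier as stand-ins; the study’s own model and training data are different, and the molecules and activity labels below are placeholders rather than real assay results.

```python
# Illustrative sketch only: train on compounds with measured activity against
# the bacterium, then score an untested library and keep the top candidates.
# SMILES strings and labels are placeholders, not data from the study.
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier

def fingerprint(smiles, n_bits=2048):
    """Turn a SMILES string into a Morgan fingerprint bit vector."""
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=n_bits)
    return np.array(list(fp))

# 1. Training set: molecules manually tested against A. baumannii (placeholders).
train_smiles = ["CCO", "c1ccccc1O", "CC(=O)Oc1ccccc1C(=O)O"]
train_active = [0, 1, 0]  # 1 = slowed or killed the bacterium in the assay

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(np.array([fingerprint(s) for s in train_smiles]), train_active)

# 2. Virtual screen: score a library of untested compounds and shortlist hits.
library = ["CCN(CC)CC", "c1ccc2[nH]ccc2c1", "CC(C)Cc1ccc(cc1)C(C)C(=O)O"]
scores = model.predict_proba(np.array([fingerprint(s) for s in library]))[:, 1]
shortlist = sorted(zip(library, scores), key=lambda t: -t[1])[:2]
print(shortlist)  # candidates to send to the wet lab
```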

Laboratory experiments showed it could treat infected wounds in mice and was able to kill A. baumannii samples from patients.

However, Dr Stokes told me: “This is when the work starts.”

The next step is to perfect the drug in the laboratory and then perform clinical trials. He expects it could be 2030 before the first AI antibiotics are available to be prescribed.

Curiously, this experimental antibiotic had no effect on other species of bacteria, and works only on A. baumannii.

Many antibiotics kill bacteria indiscriminately. The researchers believe the precision of abaucin will make it harder for drug-resistance to emerge, and could lead to fewer side-effects.

[…]

Source: New superbug-killing antibiotic discovered using AI – BBC News

Google bans Downloader app after TV firms complain it can load a pirate website – Firefox, Opera, IE, Chrome, Safari: look out!

The Google Play Store suspended an app that combines a web browser with a file manager after a Digital Millennium Copyright Act (DMCA) complaint pointed out that the app is capable of loading a piracy website—even though that same pirate website can be loaded on any standard browser, including Google Chrome.

The free app, which is designed for Android TV devices and is called Downloader, had been installed from Google Play over 5 million times before its suspension on Friday, an Internet Archive capture shows. The suspension notice that Google sent to Downloader app developer Elias Saba cites a complaint from several Israeli TV companies that said the app “allows users to view the infamous copyright infringing website known as SDAROT.”

Saba provided us with a copy of the suspension notice.

“You can see in the DMCA description portion that the only reason given is the app being able to load a website,” Saba told Ars. “My app is a utility app that combines a basic file manager and a basic web browser. There is no way to view content in the app other than to use the web browser to navigate to a website. The app also doesn’t present or direct users to any website, other than my blog at www.aftvnews.com, which loads as the default homepage in the web browser.”

Saba also detailed his frustrations with the takedown in a blog post and a series of tweets. “Any rational person would agree that you can’t possibly blame a web browser for the pirated content that exists on the Internet, but that is exactly what has happened to my app,” he wrote on his blog.

Downloader is still available on the Amazon app store for devices such as Fire TVs, or from the Downloader app’s website as an APK file.

It’s a “standard web browser,” developer says

Before being pulled from Google Play, the app’s description said that Downloader “allows Android TV owners to easily download files from the Internet onto their device. You can enter a URL which directly points to a file, or you can sideload the web browser plugin to download files from websites.”

“If loading a website with infringing content in a standard web browser is enough to violate DMCA, then every browser in the Google Play Store including @googlechrome should also be removed. It’s a ridiculous claim and an abuse of the DMCA,” Saba wrote on Twitter.

[…]

Source: Google bans Downloader app after TV firms complain it can load a pirate website | Ars Technica

Brute-force attack bypasses Android biometric fingerprint defense

Chinese researchers say they successfully bypassed fingerprint authentication safeguards on smartphones by staging a brute force attack.

Researchers at Zhejiang University and Tencent Labs capitalized on vulnerabilities of modern smartphone fingerprint scanners to stage their break-in operation, which they named BrutePrint. Their findings are published on the arXiv preprint server.

A flaw in the Match-After-Lock feature, which is supposed to bar authentication activity once a device is in lockout mode, was overridden to allow a researcher to continue submitting an unlimited number of fingerprint samples.

Inadequate protection of biometric data stored on the Serial Peripheral Interface of fingerprint sensors enables attackers to steal fingerprint images. Samples also can be easily obtained from academic datasets or from biometric data leaks.
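A quick back-of-the-envelope calculation shows why removing the attempt limit matters so much: with no lockout, the chance of at least one false accept grows with every submitted sample. The false-acceptance rate used below is an assumed illustrative figure, not a number from the paper.

```python
# Why unlimited attempts defeat fingerprint matching: the probability of at
# least one false accept is 1 - (1 - FAR)^attempts.
far = 1 / 50_000          # assumed false-acceptance rate per attempt (illustrative)
for attempts in (1_000, 10_000, 100_000):
    p_success = 1 - (1 - far) ** attempts
    print(f"{attempts:>7} attempts -> {p_success:.1%} chance of a false accept")
```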

[…]

All Android devices and one HarmonyOS (Huawei) device tested by researchers had at least one flaw allowing for break-ins. Because of tougher defense mechanisms in iOS devices, specifically the Apple iPhone SE and iPhone 7, those devices were able to withstand brute-force entry attempts. Researchers noted that iPhone devices were susceptible to CAMF vulnerabilities, but not to the extent that successful entry could be achieved.

To launch a successful break-in, an attacker requires physical access to a targeted phone for several hours, hardware that is easily obtainable for $15, and access to fingerprint images.

Fingerprint databases are available online through academic resources, but hackers more likely will access massive volumes of images obtained through data breaches.

[…]

More information: Yu Chen et al, BrutePrint: Expose Smartphone Fingerprint Authentication to Brute-force Attack, arXiv (2023). DOI: 10.48550/arxiv.2305.10791

Source: Brute-force test attack bypasses Android biometric defense

A Paralyzed Man Can Walk Naturally Again With ML Brain and Spine Implants

Gert-Jan Oskam was living in China in 2011 when he was in a motorcycle accident that left him paralyzed from the hips down. Now, with a combination of devices, scientists have given him control over his lower body again. “For 12 years I’ve been trying to get back my feet,” Mr. Oskam said in a press briefing on Tuesday. “Now I have learned how to walk normal, natural.” In a study published on Wednesday in the journal Nature, researchers in Switzerland described implants that provided a “digital bridge” between Mr. Oskam’s brain and his spinal cord, bypassing injured sections. The discovery allowed Mr. Oskam, 40, to stand, walk and ascend a steep ramp with only the assistance of a walker. More than a year after the implant was inserted, he has retained these abilities and has actually shown signs of neurological recovery, walking with crutches even when the implant was switched off. “We’ve captured the thoughts of Gert-Jan, and translated these thoughts into a stimulation of the spinal cord to re-establish voluntary movement,” Gregoire Courtine, a spinal cord specialist at the Swiss Federal Institute of Technology, Lausanne, who helped lead the research, said at the press briefing.

In the new study, the brain-spine interface, as the researchers called it, took advantage of an artificial intelligence thought decoder to read Mr. Oskam’s intentions — detectable as electrical signals in his brain — and match them to muscle movements. The etiology of natural movement, from thought to intention to action, was preserved. The only addition, as Dr. Courtine described it, was the digital bridge spanning the injured parts of the spine. […] To achieve this result, the researchers first implanted electrodes in Mr. Oskam’s skull and spine. The team then used a machine-learning program to observe which parts of the brain lit up as he tried to move different parts of his body. This thought decoder was able to match the activity of certain electrodes with particular intentions: One configuration lit up whenever Mr. Oskam tried to move his ankles, another when he tried to move his hips.

Then the researchers used another algorithm to connect the brain implant to the spinal implant, which was set to send electrical signals to different parts of his body, sparking movement. The algorithm was able to account for slight variations in the direction and speed of each muscle contraction and relaxation. And, because the signals between the brain and spine were sent every 300 milliseconds, Mr. Oskam could quickly adjust his strategy based on what was working and what wasn’t. Within the first treatment session he could twist his hip muscles. Over the next few months, the researchers fine-tuned the brain-spine interface to better fit basic actions like walking and standing. Mr. Oskam gained a somewhat healthy-looking gait and was able to traverse steps and ramps with relative ease, even after months without treatment. Moreover, after a year in treatment, he began noticing clear improvements in his movement without the aid of the brain-spine interface. The researchers documented these improvements in weight-bearing, balancing and walking tests. Now, Mr. Oskam can walk in a limited way around his house, get in and out of a car and stand at a bar for a drink. For the first time, he said, he feels like he is the one in control.
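To make the closed-loop idea concrete, here is a toy sketch of a decode-and-stimulate cycle running on the 300-millisecond rhythm mentioned above. Everything in it, the intent classes, the features, and the decoder itself, is an invented stand-in for illustration, not the study’s actual system.

```python
# Toy sketch of a "digital bridge" loop: decode an intended movement from
# brain-signal features and forward a stimulation command to the spinal
# implant roughly every 300 ms. All names and logic here are illustrative.
import time
import numpy as np

INTENTS = ["rest", "hip_flex", "ankle_flex"]

def read_brain_features() -> np.ndarray:
    """Placeholder for reading features from the skull electrodes."""
    return np.random.rand(8)

def decode_intent(features: np.ndarray) -> str:
    """Placeholder decoder: a trained classifier would go here."""
    return INTENTS[int(features.sum() * 10) % len(INTENTS)]

def stimulate(intent: str) -> None:
    """Placeholder for sending a stimulation pattern to the spinal implant."""
    print(f"stimulating pattern for: {intent}")

if __name__ == "__main__":
    for _ in range(5):            # five cycles of the closed loop
        stimulate(decode_intent(read_brain_features()))
        time.sleep(0.3)           # article: signals exchanged every 300 ms
```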

Source: A Paralyzed Man Can Walk Naturally Again With Brain and Spine Implants – Slashdot

SkyFi lets you order up fresh satellite imagery in real time with a click

Commercial Earth-observation companies collect an unprecedented volume of images and data every single day, but purchasing even a single satellite image can be cumbersome and time-intensive. SkyFi, a two-year-old startup, is looking to change that with an app and API that makes ordering a satellite image as easy as a click of a few buttons on a smartphone or computer.

SkyFi doesn’t build or operate satellites; instead, it partners with over a dozen companies to deliver various kinds of satellite images — including optical, synthetic aperture radar (SAR), and hyperspectral — directly to the customer via a web and mobile app. A SkyFi user can task a satellite to capture a specific image or choose from a library of previously captured images. Some of SkyFi’s partners include public companies like Satellogic, as well as newer startups like Umbra and Pixxel.

[…]

SkyFi’s mission has resonated with investors. The company closed a $7 million seed round led by Balerion Space Ventures, with contributions from existing investors J2 Ventures and the Uber alumni VC firm Moving Capital. Bill Perkins also participated. SkyFi has now raised over $17 million to date.

The startup is targeting three types of customers: individual consumers; large enterprise customers, from verticals spanning agriculture, mining, finance, insurance and more; and U.S. government and defense customers. SkyFi’s solution is appealing even to these latter customers, who may have plenty of experience working with satellite companies already and could afford the high costs in the traditional marketplace.

[…]

Looking ahead, the Austin, Texas–based startup is planning on integrating insight and analytics capabilities into the SkyFi app. This feature will be especially useful for customers interested in hyperspectral or SAR images. The company also plans to do more feature updates as it integrates more providers — from satellites, to stratospheric balloons, to drones — to the platform.

“I think of SkyFi as the Netflix of the geospatial world, where I think of Umbra, Satellogic and Maxar as the movie studios of the world,” Fischer said. “I just want them to produce great content and put it on the platform.”

Source: SkyFi lets you order up fresh satellite imagery in real time with a click | TechCrunch

Samsung Display demos long rollable and a health-sensing OLED

The Rollable Flex is an interesting new flexible screen from Samsung Display that can be unrolled from just 49mm to 254.4mm, over five times its length. The display is being shown off at the annual Display Week trade show in Los Angeles alongside another Samsung panel that the company says offers fingerprint and blood pressure sensing in the OLED panel without the need for a separate module.

Aside from its maximum and minimum lengths, details on the Rollable Flex in Samsung Display’s press release are relatively slim, and it’s unclear what its overall size or resolution might be. The company says the panel unrolls on an “O-shaped axis like a scroll,” allowing it to “turn a difficult-to-carry large-sized display into a portable form factor.”

[…]

Source: Samsung Display demos long rollable and a health-sensing OLED – The Verge

Samsung’s new Sensor OLED display can read fingerprints anywhere on the screen

Samsung has unveiled a new display technology that could lead to new biometric and health-related capabilities in future phones and tablets. At this year’s SID Display Week in LA, the tech giant debuted what it calls the Sensor OLED Display, which can read your fingerprints regardless of what part of the screen you touch. While most smartphones now have fingerprint readers on the screen, their sensors are attached under the panel as a separate module that only works within a small designated area. For Sensor OLED, Samsung said it embedded the fingerprint sensor into the panel itself.

Since the display technology can read fingerprints anywhere on the screen, it can also be used to monitor your heart rate and blood pressure. The company said it can even return more accurate readings than available wearables can. To measure your blood pressure, you’d need to place two fingers on the screen. OLED light is apparently reflected differently depending on your blood vessels’ contraction and relaxation. After that information is returned to the panel, the sensor converts it into health metrics.

Samsung explained in its press release: “To accurately measure a person’s blood pressure, it is necessary to measure the blood pressure of both arms. The Sensor OLED display can simultaneously sense the fingers of both hands, providing more accurate health information than existing wearable devices.” The company has yet to announce if it’s planning to use this new technology on devices it’s releasing in the future, but the exhibit at SID Display already shows it being able to read blood pressure and heart rate.
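As a rough illustration of how a reflected-light signal like this can be turned into a heart rate, here is a sketch using simple peak detection on a synthetic photoplethysmography-style waveform. Samsung has not published how its panel actually processes the signal, so treat the sample rate, thresholds and waveform below as assumptions.

```python
# Sketch: estimate heart rate by counting pulse peaks in a reflected-light
# (PPG-style) signal. The waveform here is synthetic; Samsung's actual
# on-panel processing is not public.
import numpy as np
from scipy.signal import find_peaks

fs = 100                               # assumed sample rate, Hz
t = np.arange(0, 10, 1 / fs)           # 10 seconds of signal
beats_per_second = 72 / 60             # simulate a 72 bpm pulse
signal = np.sin(2 * np.pi * beats_per_second * t) + 0.1 * np.random.randn(t.size)

# Peaks must be at least 0.4 s apart (caps the estimate at ~150 bpm).
peaks, _ = find_peaks(signal, distance=fs * 0.4, height=0.5)
bpm = len(peaks) / (t[-1] - t[0]) * 60
print(f"estimated heart rate: {bpm:.0f} bpm")
```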

[…]

Source: Samsung’s new Sensor OLED display can read fingerprints anywhere on the screen

Meta’s open-source speech AI recognizes over 4,000 spoken languages | Engadget

Meta has created an AI language model that (in a refreshing change of pace) isn’t a ChatGPT clone. The company’s Massively Multilingual Speech (MMS) project can recognize over 4,000 spoken languages and produce speech (text-to-speech) in over 1,100. Like most of its other publicly announced AI projects, Meta is open-sourcing MMS today to help preserve language diversity and encourage researchers to build on its foundation. “Today, we are publicly sharing our models and code so that others in the research community can build upon our work,” the company wrote.

[…]

Speech recognition and text-to-speech models typically require training on thousands of hours of audio with accompanying transcription labels. (Labels are crucial to machine learning, allowing the algorithms to correctly categorize and “understand” the data.) But for languages that aren’t widely used in industrialized nations — many of which are in danger of disappearing in the coming decades — “this data simply does not exist,” as Meta puts it.

Meta used an unconventional approach to collecting audio data: tapping into audio recordings of translated religious texts. “We turned to religious texts, such as the Bible, that have been translated in many different languages and whose translations have been widely studied for text-based language translation research,” the company said. “These translations have publicly available audio recordings of people reading these texts in different languages.” Incorporating the unlabeled recordings of the Bible and similar texts, Meta’s researchers increased the model’s available languages to over 4,000.

[…]

“While the content of the audio recordings is religious, our analysis shows that this does not bias the model to produce more religious language,” Meta wrote. “We believe this is because we use a connectionist temporal classification (CTC) approach, which is far more constrained compared with large language models (LLMs) or sequence-to-sequence models for speech recognition.” Furthermore, despite most of the religious recordings being read by male speakers, that didn’t introduce a male bias either — the model performs equally well on female and male voices.
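For readers unfamiliar with CTC, here is a minimal, generic illustration of how a greedy CTC decode turns frame-level predictions into text: merge repeated labels, then drop the blank symbol. This is not Meta’s code, and the labels below are made up.

```python
# Minimal greedy CTC decode: collapse repeated frame labels, then drop blanks.
# Generic illustration of the CTC idea, not Meta's MMS implementation.
import itertools

BLANK = "_"

# Per-frame argmax predictions from an acoustic model (illustrative).
frame_labels = ["h", "h", BLANK, "e", "e", "l", BLANK, "l", "l", "o", BLANK]

def ctc_greedy_decode(labels):
    collapsed = [k for k, _ in itertools.groupby(labels)]  # merge repeats
    return "".join(c for c in collapsed if c != BLANK)     # remove blanks

print(ctc_greedy_decode(frame_labels))  # -> "hello"
```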

[…]

After training an alignment model to make the data more usable, Meta used wav2vec 2.0, the company’s “self-supervised speech representation learning” model, which can train on unlabeled data. Combining unconventional data sources and a self-supervised speech model led to impressive outcomes. “Our results show that the Massively Multilingual Speech models perform well compared with existing models and cover 10 times as many languages.” Specifically, Meta compared MMS to OpenAI’s Whisper, and it exceeded expectations. “We found that models trained on the Massively Multilingual Speech data achieve half the word error rate, but Massively Multilingual Speech covers 11 times more languages.”

Meta cautions that its new models aren’t perfect. “For example, there is some risk that the speech-to-text model may mistranscribe select words or phrases,” the company wrote. “Depending on the output, this could result in offensive and/or inaccurate language. We continue to believe that collaboration across the AI community is critical to the responsible development of AI technologies.”

[…]

Source: Meta’s open-source speech AI recognizes over 4,000 spoken languages | Engadget

Establishing a wildflower meadow bolstered biodiversity and reduced greenhouse gas emissions, study finds

A new study examining the effects of planting a wildflower meadow in the historic grounds of King’s College, Cambridge, has demonstrated its benefits to local biodiversity and climate change mitigation.

 

The study, led by King’s Research Fellow Dr. Cicely Marshall, found that establishing the meadow had made a considerable impact to the wildlife value of the land, while reducing the greenhouse gas emissions associated with its upkeep.

Marshall and her colleagues, among them three King’s undergraduate students, conducted biodiversity surveys over three years to compare the species richness, abundance and composition supported by the meadow and the adjacent lawn.

They found that, in spite of its small size, the wildflower meadow supported three times as many species of plants, spiders and bugs, including 14 species with conservation designations.

Terrestrial invertebrate biomass was found to be 25 times higher in the meadow, with bat activity over the meadow also being three times higher than over the remaining lawn.

The study is published May 23 in the journal Ecological Solutions and Evidence.

As well as looking at the benefits to biodiversity, Marshall and her colleagues modeled the impact of the meadow on climate change mitigation efforts, by assessing the changes in reflectivity, soil carbon sequestration, and emissions associated with its maintenance.

The reduced maintenance and fertilization associated with the meadow was found to save an estimated 1.36 tons CO2-e per hectare per year when compared with the grass lawn.

Surface reflectance increased by more than 25%, contributing to a reduced urban heat island effect, with the meadow more likely to tolerate an intensified drought regime.

[…]

Source: Establishing a wildflower meadow bolstered biodiversity and reduced greenhouse gas emissions, study finds

Brain waves can tell us how much pain someone is in

Brain signals can be used to detect how much pain a person is experiencing, which could overhaul how we treat certain chronic pain conditions, a new study has suggested.

The research, published in Nature Neuroscience today, is the first time a human’s chronic-pain-related brain signals have been recorded. It could aid the development of personalized therapies for the most severe forms of pain.

[…]

Researchers from the University of California, San Francisco, implanted electrodes in the brains of four people with chronic pain. The patients then answered surveys about the severity of their pain multiple times a day over a period of three to six months. After they finished filling out each survey, they sat quietly for 30 seconds so the electrodes could record their brain activity. This helped the researchers identify biomarkers of chronic pain in the brain signal patterns, which were as unique to the individual as a fingerprint.

Next, the researchers used machine learning to model the results of the surveys. They found they could successfully predict how the patients would score the severity of their pain by examining their brain activity, says Prasad Shirvalkar, one of the study’s authors.
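As a very rough sketch of that kind of modelling, the snippet below fits a simple regression from brain-activity features to self-reported pain scores; the features and scores are synthetic placeholders, and the study’s actual per-patient pipeline is considerably more involved.

```python
# Illustrative sketch: predict self-reported pain severity from recorded
# brain-activity features, echoing the survey-plus-recording design above.
# All data here is synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Each row: spectral-power features from a 30-second recording window.
features = rng.normal(size=(60, 5))
# Matching self-reported pain scores (0-10) from the survey taken just before.
pain_scores = np.clip(
    5 + features @ np.array([1.5, -0.8, 0.3, 0.0, 0.6])
    + rng.normal(scale=0.5, size=60),
    0, 10)

model = LinearRegression().fit(features[:45], pain_scores[:45])   # train
predicted = model.predict(features[45:])                          # held-out test
print("correlation with reported pain:",
      np.corrcoef(predicted, pain_scores[45:])[0, 1].round(2))
```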

“The hope is that now that we know where these signals live, and now that we know what type of signals to look for, we could actually try to track them noninvasively,” he says. “As we recruit more patients, or better characterize how these signals vary between people, maybe we can use it for diagnosis.”

The researchers also found they were able to distinguish a patient’s chronic pain from acute pain deliberately inflicted using a thermal probe. The chronic-pain signals came from a different part of the brain, suggesting that it’s not just a prolonged version of acute pain, but something else entirely.

Source: Brain waves can tell us how much pain someone is in | MIT Technology Review

Meta ordered to suspend Facebook EU data flows as it’s hit with record €1.2BN privacy fine under GDPR – 10 years and 3 court cases later

[…]

Today the European Data Protection Board (EDPB) announced that Meta has been fined €1.2 billion (close to $1.3 billion) — which the Board confirmed is the largest fine ever issued under the bloc’s General Data Protection Regulation (GDPR). (The prior record goes to Amazon, which was stung for $887 million for misusing customers’ data for ad targeting back in 2021.)

Meta’s sanction is for breaching conditions set out in the pan-EU regulation governing transfers of personal data to so-called third countries (in this case the US) without ensuring adequate protections for people’s information.

European judges have previously found U.S. surveillance practices to conflict with EU privacy rights.

[…]

The decision emerging out of the Irish DPC flows from a complaint made against Facebook’s Irish subsidiary almost a decade ago, by privacy campaigner Max Schrems — who has been a vocal critic of Meta’s lead data protection regulator in the EU, accusing the Irish privacy regulator of taking an intentionally long and winding path in order to frustrate effective enforcement of the bloc’s rulebook.

On the substance of his complaint, Schrems argues that the only sure-fire way to fix the EU-U.S. data flows doom loop is for the U.S. to grasp the nettle and reform its surveillance practices.

Responding to today’s order in a statement (via his privacy rights not-for-profit, noyb), he said: “We are happy to see this decision after ten years of litigation. The fine could have been much higher, given that the maximum fine is more than 4 billion and Meta has knowingly broken the law to make a profit for ten years. Unless US surveillance laws get fixed, Meta will have to fundamentally restructure its systems.”

[… ]

This suggests the Irish regulator is routinely under-enforcing the GDPR on the most powerful digital platforms and doing so in a way that creates additional problems for efficient functioning of the regulation since it strings out the enforcement process. (In the Facebook data flows case, for example, objections were raised to the DPC’s draft decision last August — so it’s taken some nine months to get from that draft to a final decision and suspension order now.) And, well, if you string enforcement out for long enough you may allow enough time for the goalposts to be moved politically that enforcement never actually needs to happen. Which, while demonstrably convenient for data-mining tech giants like Meta, does make a mockery of citizens’ fundamental rights.

As noted above, with today’s decision, the DPC is actually implementing a binding decision taken by the EDPB last month in order to settle ongoing disagreement over Ireland’s draft decision — so much of the substance of what’s being ordered on Meta today comes, not from Dublin, but from the bloc’s supervisor body for privacy regulators.

[…]

In further public remarks today, Schrems once again hit out at the DPC’s approach — accusing the regulator of essentially working to thwart enforcement of the GDPR. “It took us ten years of litigation against the Irish DPC to get to this result. We had to bring three procedures against the DPC and risked millions of procedural costs. The Irish regulator has done everything to avoid this decision but was consistently overturned by the European Courts and institutions. It is kind of absurd that the record fine will go to Ireland — the EU Member State that did everything to ensure that this fine is not issued,” he said.

[…]

Earlier reports have suggested the European Commission could adopt the new EU-U.S. data deal in July, although it has declined to provide a date for this since it says multiple stakeholders are involved in the process.

Such a timeline would mean Meta gets a new escape hatch to avoid having to suspend Facebook’s service in the EU; and can keep relying on this high level mechanism so long as it stands.

If that’s how the next section of this torturous complaint saga plays out, it will mean that a case against Facebook’s illegal data transfers which dates back almost ten years at this point will, once again, be left twisting in the wind — raising questions about whether it’s really possible for Europeans to exercise legal rights set out in the GDPR? (And, indeed, whether deep-pocketed tech giants, whose ranks are packed with well-paid lawyers and lobbyists, can be regulated at all?)

[…]

Analysis on five years of the GDPR, put out earlier this month by the Irish Council for Civil Liberties (ICCL), dubs the enforcement situation a “crisis,” warning that “Europe’s failure to enforce the GDPR exposes everyone to acute hazard in the digital age” and identifying Ireland’s DPA as a leading cause of enforcement failure against Big Tech.

And the ICCL points the finger of blame squarely at Ireland’s DPC.

“Ireland continues to be the bottleneck of enforcement: It delivers few draft decisions on major cross-border cases, and when it does eventually do so other European enforcers routinely vote by majority to force it to take tougher enforcement action,” the report argues — before pointing out that: “Uniquely, 75% of Ireland’s GDPR investigation decisions in major EU cases were overruled by majority vote of its European counterparts at the EDPB, who demand tougher enforcement action.”

The ICCL also highlights that nearly all (87%) of cross-border GDPR complaints to Ireland repeatedly involve the same handful of Big Tech companies: Google, Meta (Facebook, Instagram, WhatsApp), Apple, TikTok, and Microsoft. But it says many complaints against these tech giants never even get a full investigation — thereby depriving complainants of the ability to exercise their rights.

The analysis points out that the Irish DPC chooses “amicable resolution” to conclude the vast majority (83%) of cross-border complaints it receives (citing the oversight body’s own statistics) — further noting: “Using amicable resolution for repeat offenders, or for matters likely to impact many people, contravenes European Data Protection Board guidelines.”

[…]

The reality is that a patchwork of problems frustrates effective enforcement across the bloc, as you might expect with a decentralized oversight structure that must accommodate linguistic and cultural differences across 27 Member States, along with varying opinions on how best to approach oversight of big (and very personal) concepts like privacy, which may mean very different things to different people.

Schrems’ privacy rights not-for-profit, noyb, has been collating information on this patchwork of GDPR enforcement issues — which include things like under-resourcing of smaller agencies and a general lack of in-house expertise to deal with digital issues; transparency problems and information blackholes for complainants; cooperation issues and legal barriers frustrating cross-border complaints; and all sorts of ‘creative’ interpretations of complaints “handling” — meaning nothing being done about a complaint still remains a common outcome — to name just a few of the issues it’s encountered.

[…]

Source: Meta ordered to suspend Facebook EU data flows as it’s hit with record €1.2BN privacy fine under GDPR | TechCrunch

The article contains the history of the court cases Schrems had to bring to get Ireland and the EU to do anything about data sharing problems – it’s an interesting read.

HP Can’t Fix Bricked Printers After Faulty Firmware Update that also blocks non-HP ink cartridges

Last week the Telegraph reported that a recent firmware update to HP printers “prevents customers from using any cartridges other than those fitted with an HP chip, which are often more expensive. If the customer tries to use a non-HP ink cartridge, the printer will refuse to print.”

Some HP “Officejet” printers can disable this “dynamic security” through a firmware update, PC World reported earlier this week. But HP still defends the feature, arguing it’s “to protect HP’s innovations and intellectual property, maintain the integrity of our printing systems, ensure the best customer printing experience, and protect customers from counterfeit and third-party ink cartridges that do not contain an original HP security chip and infringe HP’s intellectual property.”

Meanwhile, Engadget now reports that “a software update Hewlett-Packard released earlier this month for its OfficeJet printers is causing some of those devices to become unusable.” After downloading the faulty software, the built-in touchscreen on an affected printer will display a blue screen with the error code 83C0000B. Unfortunately, there appears to be no way for someone to fix a printer broken in this way on their own, partly because factory resetting an HP OfficeJet requires interacting with the printer’s touchscreen display. For the moment, HP customers report the only solution to the problem is to send a broken printer back to the company for service.
BleepingComputer says the firmware update “has been bricking HP Office Jet printers worldwide since it was released earlier this month…” “Our teams are working diligently to address the blue screen error affecting a limited number of HP OfficeJet Pro 9020e printers,” HP told BleepingComputer… Since the issues surfaced, multiple threads have been started by people from the U.S., the U.K., Germany, the Netherlands, Australia, Poland, New Zealand, and France who had their printers bricked, some with more than a dozen pages of reports.

“HP has no solution at this time. Hidden service menu is not showing, and the printer is not booting anymore. Only a blue screen,” one customer said.

“I talked to HP Customer Service and they told me they don’t have a solution to fix this firmware issue, at the moment,” another added.

Source: HP Rushes to Fix Bricked Printers After Faulty Firmware Update – Slashdot

How a 35-year-old weed smoker behind 10 million scam calls made his fortune

Millions of people get phone calls from scammers and wonder who is at the other end.

Now we know: rather than someone in a call centre far away, a “bright young man” living in a lush flat in London has been unmasked as the mastermind behind so many of these calls.

Tejay Fletcher’s trial exposed how criminals with a simple website bypassed police, phone operators and banks to facilitate “fraud on an industrial scale”, scamming victims out of £100m of their hard-earned cash.

Fletcher, 35, who ran the website iSpoof.cc, was jailed for 13 years and four months earlier this week following his arrest in 2022 in what is the biggest anti-fraud operation mounted in the UK.

The website allowed criminals to disguise their phone numbers in a process known as “spoofing” and trick unsuspecting people to believe they were being called by their bank or other institutions.

[…]

In 2020, he co-founded iSpoof.cc, which he built into what he called “the most sophisticated client spoofing platform available”, allowing scammers to change the number or identity displayed when they made calls so they appeared to be calling from a trusted organisation, often a bank or a bank’s fraud department.

[…]

His website was used for a large proportion of fraudulent activity in the UK – but copycats have since taken its place, and others are still falling victim to these types of scams, experts have warned.

How victims were scammed

The number of people using iSpoof swelled to 69,000 at its peak, with as many as 20 people per minute targeted by callers using the site.

More than 10 million fraudulent calls were made using iSpoof in the year to August 2022 – 3.5 million of them in the UK, the prosecution said.  More than 200,000 victims in the UK – many of them elderly – lost £43m, while global losses exceeded £100m.

For a basic subscription fee of £150 a month, users got a set number of minutes to make automated bot calls using the website or app version. They could then pay extra for additional features.

[…]

Often, victims would get an automated call prompting them to confirm a transaction on an account.

The website allowed them to intercept one-time passwords, which were “ironically” introduced by banks to increase their security measures, noted John Ojakovoh, prosecuting.

iSpoof offered scammers extra features that allowed victims to type in a telephone pincode after being prompted to do so by an automated call.

Users could also pay for the ability to monitor calls live, or place calls pretending to be from an establishment that had old card details on file and wanted new ones.

Scammers could control what the automated call would say to recipients and access tools such as voice recognition.

[…]

iSpoof had a channel on Telegram, a social media platform, which it used to communicate with its customers and promote itself, the prosecution said.

The Telegram channel also displayed advertisements from companies selling bank details.

Fletcher would use it to conduct “market research”, running polls to find out which features users wanted most.

[…]

Fletcher was not particularly tech-savvy, but he used a website called freelancer.com to hire programmers to make the “building blocks” of the site.

[…]

His lawyer said he had initially set out to create a simple website, but his co-founder suggested ways the technology could be made more sophisticated, which spurred him on. In 2021, he and his co-founder “fell out” and Fletcher ousted him, replacing him with three other administrators that he appeared to be supervising.

[…]

When Fletcher assumed control of iSpoof, the profits received had a “meteoric rise” from 5 Bitcoin to 117, prosecutors said. Fletcher received 64.38 Bitcoin, worth just short of £2m.

How police cracked the case

Posing as iSpoof customers, police paid for a trial subscription in Bitcoin and tested the website. They traced the money they paid to iSpoof and eventually discovered that the “lion’s share” of the profits were going to Fletcher.

They obtained a copy of the website’s server, which revealed call logs that further incriminated Fletcher and the scammers using his website.

[…]

Others are also being investigated. Some 120 suspected phone scammers have been arrested, 103 of them in London.

[…]

 

Source: How a 35-year-old weed smoker behind 10 million scam calls made his fortune

Online age verification is coming, and privacy is on the chopping block

A spate of child safety rules might make going online in a few years very different, and not just for kids. In 2022 and 2023, numerous states and countries are exploring age verification requirements for the internet, either as an implicit demand or a formal rule. The laws are positioned as a way to protect children on a dangerous internet. But the price of that protection might be high: nothing less than the privacy of, well, everyone.

Government agencies, private companies, and academic researchers have spent years seeking a way to solve the thorny question of how to check internet users’ ages without the risk of revealing intimate information about their online lives. But after all that time, privacy and civil liberties advocates still aren’t convinced the government is ready for the challenge.

“When you have so many proposals floating around, it’s hard to ensure that everything is constitutionally sound and actually effective for kids,” Cody Venzke, a senior policy counsel at the American Civil Liberties Union (ACLU), tells The Verge. “Because it’s so difficult to identify who’s a kid online, it’s going to prevent adults from accessing content online as well.”

In the US and abroad, lawmakers want to limit children’s access to two things: social networks and porn sites. Louisiana, Arkansas, and Utah have all passed laws that set rules for underage users on social media. Meanwhile, multiple US federal bills are on the table, and so are laws in other countries, like the UK’s Online Safety Bill. Some of these laws demand specific features from age verification tools. Others simply punish sites for letting anyone underage use them — a more subtle request for verification.

Online age verification isn’t a new concept. In the US, laws like the Children’s Online Privacy Protection Act (COPPA) already apply special rules to people under 13. And almost everyone who has used the internet — including major platforms like YouTube and Facebook — has checked a box to access adult content or entered a birth date to create an account. But there’s also almost nothing to stop them from faking it.

As a result, lawmakers are calling for more stringent verification methods. “From bullying and sex trafficking to addiction and explicit content, social media companies subject children and teens to a wide variety of content that can hurt them, emotionally and physically,” Senator Tom Cotton (R-AR), the backer of the Protect Kids Online Act, said. “Just as parents safeguard their kids from threats in the real world, they need the opportunity to protect their children online.”

Age verification systems fall into a handful of categories. The most common option is to rely on a third party that knows your identity — by directly validating a credit card or government-issued ID, for instance, or by signing up for a digital intermediary like Allpasstrust, the service Louisianans must use for porn access.

More experimentally, there are solutions that estimate a user’s age without an ID. One potential option, which is already used by Facebook and Instagram, would use a camera and facial recognition to guess whether you’re 18. Another, which is highlighted as a potential age verification solution by France’s National Commission on Informatics and Liberty (CNIL), would “guess” your age based on your online activity.

As pointed out by CNIL’s report on various online age verification options, all these methods have serious flaws. CNIL notes that identifying someone’s age with a credit card would be relatively easy since the security infrastructure is already there for online payments. But some adult users — especially those with lower incomes — may not have a card, which would seriously limit their ability to access online services. The same goes for verification methods using government-issued IDs. Children can also snap up a card that’s lying around the house to verify their age.

“As we think about kids’ online safety, we need to do so in a way that doesn’t enshrine and legitimize this very surveillance regime that we’re trying to push back on”

Similarly, the Congressional Research Service (CRS) has expressed concerns about online age verification. In a report it updated in March, the US legislature’s in-house research institute found that many kids aged 16 to 19 might not have a government-issued ID, such as a driver’s license, that they can use to verify their age online. While it says kids could use their student ID instead, it notes that they may be easier to fake than a government-issued ID. The CRS isn’t totally on board with relying on a national digital ID system for online age verification either, as it could “raise privacy and security concerns.”

Face-based age detection might seem like a quick fix to these concerns. And unlike a credit card — or full-fledged facial identification tools — it doesn’t necessarily tell a site who you are, just whether it thinks you’re over 18.

But these systems may not accurately identify the age of a person. Yoti, the facial analysis service used by Facebook and Instagram, claims it can estimate the age of people 13 to 17 years old as under 25 with 99.93 percent accuracy while identifying kids that are six to 11 years old as under 13 with 98.35 percent accuracy. This study doesn’t include any data on distinguishing between young teens and older ones, however — a crucial element for many young people.

Although Yoti claims its system has no “discernible bias across gender or skin tone,” previous research indicates that facial recognition services are less reliable for people of color, gender-nonconforming people, and people with facial differences or asymmetry. This would, again, unfairly block certain people from accessing the internet.

It also poses a host of privacy risks, as the companies that capture facial recognition data would need to ensure that this biometric data doesn’t get stolen by bad actors. UK civil liberties group Big Brother Watch argues that “‘face prints’ are as sensitive as fingerprints” and that “collecting biometric data of this scale inherently puts people’s privacy at risk.” CNIL points out that you could mitigate some risks by performing facial recognition locally on a user’s device — but that doesn’t solve the broader problems.

Inferring ages based on browsing history raises even more problems. This kind of inferential system has been implemented on platforms like Facebook and TikTok, both of which use AI to detect whether a user is under the age of 13 based on their activity on the platform. That includes scanning a user’s activity for “happy birthday” messages or comments that indicate they’re too young to have an account. But the system hasn’t been explored on a larger scale — where it could involve having an AI scan your entire browsing history and estimate your age based on your searches and the sites you interact with. That would amount to large-scale digital surveillance, and CNIL outright calls the system “intrusive.” It’s not even clear how well it would work.

In France, where lawmakers are working to restrict access to porn sites, CNIL worked with Ecole Polytechnique professor Olivier Blazy to develop a solution that attempts to minimize the amount of user information sent to a website. The proposed method involves using an ephemeral “token” that sends your browser or phone a “challenge” when accessing an age-restricted website. That challenge would then get relayed to a third party that can authenticate your age, like your bank, internet provider, or a digital ID service, which would issue its approval, allowing you to access the website.

The system’s goal is to make sure a user is old enough to access a service without revealing any personal details, either to the website they’re using or the companies and governments providing the ID check. The third party “only knows you are doing an age check but not for what,” Blazy explains to The Verge, and the website would not know which service verified your age nor any of the details from that transaction.
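A minimal sketch of that token idea follows, assuming an Ed25519-signing verifier and glossing over expiry, revocation and unlinkability: the provider signs an anonymous, single-use “over 18” assertion, and the site checks only the signature, learning nothing about who performed the check or for whom.

```python
# Minimal sketch of the "ephemeral token" idea: a trusted third party signs an
# anonymous, single-use "over 18" assertion; the website verifies only the
# signature. Real deployments need expiry, revocation and unlinkability
# guarantees that are deliberately glossed over here.
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# --- age-verification provider (e.g. a bank or digital ID service) ---
provider_key = Ed25519PrivateKey.generate()
provider_pub = provider_key.public_key()

def issue_token(user_is_adult: bool) -> bytes | None:
    """Issue a signed, single-use token that asserts only 'over 18'."""
    if not user_is_adult:
        return None
    nonce = os.urandom(16)                    # single-use, carries no identity
    return nonce + provider_key.sign(b"over-18:" + nonce)

# --- website being visited ---
def site_accepts(token: bytes) -> bool:
    nonce, signature = token[:16], token[16:]
    try:
        provider_pub.verify(signature, b"over-18:" + nonce)
        return True                            # age attested; identity unknown
    except InvalidSignature:
        return False

token = issue_token(user_is_adult=True)
print(site_accepts(token))  # True, without the site learning who the user is
```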

Blazy hopes this system can prevent very young children from accessing explicit content. But even with this complex solution, he acknowledges that users in France will be able to get around the method by using a virtual private network (VPN) to conceal their location. This is a problem that plagues nearly any location-specific verification system: as long as another government lets people access a site more easily, users can route their traffic through it. The only surefire solution would be draconian crackdowns on privacy tools that would dramatically compromise freedom online.

Some governments are trying to offer a variety of options and let users pick between them. A report from the European Parliament Think Tank, an in-house department that helps shape legislation, highlights an EU “browser-based interoperable age verification method” called euCONSENT, which will allow users to verify their identity online by choosing from a network of approved third-party services. Since this would give users the ability to choose the verification they want to use, this means one service might ask a user to upload an official government document, while another might rely on facial recognition.

To privacy and civil liberties advocates, none of these solutions are ideal. Venzke tells The Verge that implementing age verification systems encourages a system that collects our data and could pave the way for more surveillance in the future. “Bills that are trying to establish inferences about how old you are or who you are based on that already existing capitalistic surveillance, are just threatening to legitimize that surveillance,” Venzke says. “As we think about kids’ online safety, we need to do so in a way that doesn’t enshrine and legitimize this very surveillance regime that we’re trying to push back on.”

Age verification laws “are going to face a very tough battle in court”

The Electronic Frontier Foundation, a digital rights group, similarly argues that all age verification solutions are “surveillance systems” that will “lead us further towards an internet where our private data is collected and sold by default.”

Even some strong supporters of child safety bills have expressed concerns about making age verification part of them. Senator Richard Blumenthal (D-CT), one of the backers of the Kids Online Safety Act, objected to the idea in a call with reporters earlier this month. In a statement, he tells The Verge that “age verification would require either a national database or a goldmine of private information on millions of kids in Big Tech’s hands” and that “the potential for exploitation and misuse would be huge.” (Despite this, the EFF believes that KOSA’s requirements would inevitably result in age verification mandates anyway.)

In the US, it’s unclear whether online age verification would stand up under legal scrutiny at all. The US court system has already struck down efforts to implement online age verification several times in the past. As far back as 1997, the Supreme Court ruled parts of the 1996 Communications Decency Act unconstitutional, as it imposed restrictions on “knowing transmission of obscene or indecent messages” and required age verification online. More recently, a federal court found in 2016 that a Louisiana law, which required websites that publish “material harmful to minors” verify users’ ages, “creates a chilling effect on free speech.”

Vera Eidelman, a staff attorney with the ACLU, tells The Verge that existing age verification laws “are going to face a very tough battle in court.” “For the most part, requiring content providers online to verify the ages of their users is almost certainly unconstitutional, given the likelihood that it will make people uncomfortable to exercise their rights to access certain information if they have to unmask or identify themselves,” Eidelman says.

But concerns over surveillance still haven’t stopped governments around the globe, including here in the US, from pushing ahead with online age verification mandates. There are currently several bills in the pipeline in Congress that are aimed at protecting children online, including the Protecting Kids on Social Media Act, which calls for the test of a national age verification system that would block users under the age of 13 from signing up for social media. In the UK, where the heavily delayed Online Safety Bill will likely become law, porn sites would be required to verify users’ ages, while other websites would be forced to give users the option to do so as well.

Some proponents of online safety laws say they’re no different than having to hand over an ID to purchase alcohol. “We have agreed as a society not to let a 15-year-old go to a bar or a strip club,” said Laurie Schlegel, the legislator behind Louisiana’s age restriction law, after its passage. “The same protections should be in place online.” But the comparison misses vastly different implications for free speech and privacy. “When we think about bars or ordering alcohol at a restaurant, we just assume that you can hand an ID to a bouncer or a waiter, they’ll hand it back, and that’s the end of it,” Venzke adds. “Problem is, there’s no infrastructure on the internet right now to [implement age verification] in a safe, secure, private way that doesn’t chill people’s ability to get to constitutionally protected speech.”

Most people also spend a relatively small amount of their time in real-world adults-only spaces, while social media and online communications tools are ubiquitous ways of finding information and staying in touch with friends and family. Even sites with sexually explicit content — the target of Louisiana’s bill — could be construed to include sites offering information about sexual health and LGBTQ resources, despite claims by lawmakers that this won’t happen.

Even if many of these rules are shot down, the way we use the internet may never be the same again. With age checks awaiting us online, some people may find themselves locked out of increasingly large numbers of platforms — leaving the online world more closed-off than ever.

Source: Online age verification is coming, and privacy is on the chopping block – The Verge

The Supreme Court’s Warhol decision could have huge copyright implications for ‘fair use’, apparently made by blind judges

The Supreme Court has ruled that Andy Warhol infringed on the copyright of Lynn Goldsmith, the photographer who took the image that he used for his famous silkscreen of the musician Prince. The justices sided with Goldsmith 7-2, disagreeing with Warhol’s camp that his work was transformative enough to defeat any copyright claims. In the majority opinion written by Justice Sonia Sotomayor, she noted that “Goldsmith’s original works, like those of other photographers, are entitled to copyright protection, even against famous artists.”

Goldsmith’s story goes as far back as 1984, when Vanity Fair licensed her Prince photo for use as an artist reference. The photographer received $400 for a one-time use of her photograph, which Warhol then used as the basis for a silkscreen that the magazine published. Warhol then created 15 additional works based on her photo, one of which was sold to Condé Nast for another magazine story about Prince. The Andy Warhol Foundation (AWF) — the artist had passed away by then — got $10,000 for it, while Goldsmith didn’t get anything.

Typically, the use of copyrighted material for a limited and “transformative” purpose without the copyright holder’s permission falls under “fair use.” But what passes as “transformative” use can be vague, and that vagueness has led to numerous lawsuits. In this particular case, the court has decided that adding “some new expression, meaning or message” to the photograph does not constitute “transformative use.” Sotomayor said Goldsmith’s photo and Warhol’s silkscreen serve “substantially the same purpose.”

Indeed, the decision could have far-ranging implications for fair use and could influence future cases on what constitutes transformative work, especially now that we’re living in the era of content creators who take inspiration from existing music and art. As CNN reports, Justice Elena Kagan strongly disagreed with her fellow justices, arguing that the decision would stifle creativity. She said the justices mostly just cared about the commercial purpose of the work and did not consider that the photograph and the silkscreen have different “aesthetic characteristics” and did not “convey the same meaning.”

“Both Congress and the courts have long recognized that an overly stringent copyright regime actually stifles creativity by preventing artists from building on the works of others. [The decision will] impede new art and music and literature, [and it will] thwart the expression of new ideas and the attainment of new knowledge. It will make our world poorer,” she wrote.

The justices in the majority, however, believe that it “will not impoverish our world to require AWF to pay Goldsmith a fraction of the proceeds from its reuse of her copyrighted work. Recall, payments like these are incentives for artists to create original works in the first place.”

Source: The Supreme Court’s Warhol decision could have huge copyright implications for ‘fair use’

Well, the two pictures are above. How you can argue that they are the same thing is quite beyond me.

Automakers Are Making Basic Car Functions A Costly Subscription Service… Whether You Like It Or Not

Automakers are increasingly obsessed with turning everything into a subscription service in a bid to boost quarterly returns. We’ve noted how BMW has embraced making heated seats and other features already in your car a subscription service, and Mercedes has been making better gas and EV engine performance something you have to pay extra for — even if your existing engine already technically supports it.

There are several problems here. One, most of the tech they want to charge a recurring fee to use is already embedded in the car you own, and its cost is already rolled into the retail price you’ve paid. They’re effectively disabling technology you already own, then charging you a recurring additional monthly fee just to re-enable it. It’s a Cory Doctorow nightmare dressed up as innovation.

The other problem: absolutely nobody wants this shit. Surveys have already shown how consumers widely despise paying their car maker a subscription fee for pretty much anything, whether that’s an in-car 5G hotspot or movie rentals via your car’s screen. Now another new study indicates that consumers are unsurprisingly opposed to this new effort to expand subscription features:

A new study from Cox Automotive this week found that 75% of respondents agreed with the statement that “features on demand will allow automakers to make more money.” And 69% of respondents said that if certain features were available only via subscription for a particular brand, they would likely shop elsewhere.

[…]

if the industry does this persistently enough, over a long enough time frame, the window of what dictates “acceptable” automaker behavior shifts in their favor, resulting in opinions like this one:

“I don’t think [features on demand] is going away, and also as the cars get more and more sophisticated, get more and more functionality, then it just feels like a natural progression,” Edmunds’ Weaver says, also noting he too has gotten used to these add-on features, and their costs, for his personal vehicle.

There’s a whole bunch of additional unintended consequences to this kind of shift. Right to repair folks will be keen on breaking down these phony barriers, and automakers will increasingly respond by doing things like making it a warranty violation to enable tech you already own and paid for.

[…]

Source: Automakers Are Making Basic Car Functions A Costly Subscription Service… Whether You Like It Or Not | Techdirt

It’s not just BMW; Mercedes and many other companies are getting into this game. The thing is, if it’s a service that requires ongoing work (eg collecting road data for navigation services or traffic cam data for speed warnings etc) then a subscription is fine. But if it’s something already built into your car that requires a subscription or extra money to enable, well, then you’ve already paid for it and are the owner of it. Having a carmaker disable it until you pony up again is ridiculous.

Logitech partners with iFixit for self-repairs

Hanging on to your favorite wireless mouse just got a little easier thanks to a new partnership between Logitech and DIY repair specialists iFixit. The two companies are working together to reduce unnecessary e-waste and help customers repair their own out-of-warranty Logitech hardware by supplying spare parts, batteries, and repair guides for “select products.”

Everything will eventually be housed in the iFixit Logitech Repair Hub, with parts available to purchase as needed or within “Fix Kits” that provide everything needed to complete the repair, such as tools and precision bit sets.

Starting “this summer,” Logitech’s MX Master and MX Anywhere mouse models will be the first products to receive spare parts. Pricing information has not been disclosed yet, and Logitech hasn’t mentioned any other devices that will receive the iFixit genuine replacement parts and repair guide treatment.

[…]

Source: Logitech partners with iFixit for self-repairs

This sounds like a good idea, and I hope it is, but who else can supply repair kits? If it’s only iFixit, then aren’t we swapping one monopoly for another? It’s a kind of symbolic fixability. I love iFixit; they are great and I really like what they have done in the past, but I really hope the intent is not to create a repair duopoly that big companies can point to and say: “see, we are not a monopoly” whilst keeping prices artificially high.

Human DNA can be pulled from the air: A Boon For Science, While Terrifying Others

Environmental DNA sampling is nothing new. Rather than having to spot or catch an animal, instead the DNA from the traces they leave can be sampled, giving clues about their genetic diversity, their lineage (e.g. via mitochondrial DNA) and the population’s health. What caught University of Florida (UoF) researchers by surprise while they were using environmental DNA sampling to study endangered sea turtles, was just how much human DNA they found in their samples. This led them to perform a study on the human DNA they sampled in this way, with intriguing implications.

Ever since genetic sequencing became possible there have been many breakthroughs that have made it more precise, cheaper and more versatile. The argument by these UoF researchers in their paper in Nature Ecology & Evolution is that there is a lot of potential in sampling human environmental DNA (eDNA) to study populations, much like is already done today with wastewater sampling, only more universally. This could have great benefits in studying human populations, much as we already monitor other animal species using their eDNA and similar materials that are discarded every day as a part of normal biological function.

The researchers were able to detect various genetic issues in the human eDNA they collected, demonstrating its viability as a population health monitoring tool. The less exciting fallout of their findings was just how hard it is to prevent contamination of samples with human DNA, which could possibly affect studies. Meanwhile the big DNA elephant in the room is individual-level tracking, which is something that’s incredibly exciting to researchers who are monitoring wild animal populations. Unlike those animals, however, Homo sapiens are unique in that they’d object to such individual-level eDNA-based monitoring.

What the full implications of such new tools will be is hard to say, but they’re just one of the inevitable results as our genetic sequencing methods improve and humans keep shedding their DNA everywhere.

Source: Human DNA Is Everywhere: A Boon For Science, While Terrifying Others | Hackaday

The ‘invisible’ cellulose coatings that mitigate surface transmission of pathogens (kills covid on door handles)

Research has shown that a thin cellulose film can inactivate the SARS-CoV-2 virus within minutes, inhibit the growth of bacteria including E. coli, and mitigate contact transfer of pathogens.

The coating consists of a thin film of cellulose fiber that is invisible to the naked eye, and is abrasion-resistant under dry conditions, making it suitable for use on high-traffic objects such as door handles and handrails.

The coating was developed by scientific teams from the University of Birmingham, Cambridge University, and FiberLean Technologies, who worked on a project to formulate treatments for glass, metal or laminate surfaces that would deliver long-lasting protection against the COVID-19 virus.

[…]

a coating made from micro-fibrillated cellulose (MFC)

[…]

The COVID-19 virus is known to remain active for several days on surfaces such as plastic and stainless steel, but for only a few hours on newspaper.

[…]

The researchers found that the porous nature of the film plays a significant role: it accelerates the evaporation rate of liquid droplets, and introduces an imbalanced osmotic pressure across the bacterial membrane.

They then tested whether the coating could inhibit surface transmission of SARS-CoV-2. Here they found a three-fold reduction of infectivity when droplets containing the virus were left on the coating for 5 minutes, and, after 10 minutes, the infectivity fell to zero.

[…]

Professor Zhang commented, “The risk of surface transmission, as opposed to aerosol transmission, comes from large droplets which remain infective if they land on hard surfaces, where they can be transferred by touch. This surface coating technology uses sustainable materials and could potentially be used in conjunction with other antimicrobial actives to deliver a long-lasting and slow-release antimicrobial effect.”

The researchers confirmed the stability of the coating by mechanical scraping tests, where the film showed no noticeable damage when dry, but was easily removed from the surface when wetted, making it convenient and suitable for daily cleaning and disinfection practice.

The paper is published in the journal ACS Applied Materials & Interfaces.

More information: Shaojun Qi et al, Porous Cellulose Thin Films as Sustainable and Effective Antimicrobial Surface Coatings, ACS Applied Materials & Interfaces (2023). DOI: 10.1021/acsami.2c23251

Source: The ‘invisible’ cellulose coatings that mitigate surface transmission of pathogens

LLM emergent behavior written off as rubbish – small models work fine but are measured poorly

[…] As defined in academic studies, “emergent” abilities refer to “abilities that are not present in smaller-scale models, but which are present in large-scale models,” as one such paper puts it. In other words, immaculate injection: increasing the size of a model infuses it with some amazing ability not previously present.

[…]

those emergent abilities in AI models are a load of rubbish, say computer scientists at Stanford.

Flouting Betteridge’s Law of Headlines, Rylan Schaeffer, Brando Miranda, and Sanmi Koyejo answer the question posed by their paper, Are Emergent Abilities of Large Language Models a Mirage?, in the affirmative.

[…]

When industry types talk about emergent abilities, they’re referring to capabilities that seemingly come out of nowhere for these models, as if something was being awakened within them as they grow in size. The thinking is that when these LLMs reach a certain scale, the ability to summarize text, translate languages, or perform complex calculations, for example, can emerge unexpectedly.

[…]

Stanford’s Schaeffer, Miranda, and Koyejo propose that when researchers are putting models through their paces and see unpredictable responses, it’s really due to poorly chosen methods of measurement rather than a glimmer of actual intelligence.

Most (92 percent) of the unexpected behavior detected, the team observed, was found in tasks evaluated via BIG-Bench, a crowd-sourced set of more than 200 benchmarks for evaluating large language models.

One test within BIG-Bench highlighted by the university trio is Exact String Match. As the name suggests, this checks a model’s output to see if it exactly matches a specific string without giving any weight to nearly right answers. The documentation even warns:

The EXACT_STRING_MATCH metric can lead to apparent sudden breakthroughs because of its inherent all-or-nothing discontinuity. It only gives credit for a model output that exactly matches the target string. Examining other metrics, such as BLEU, BLEURT, or ROUGE, can reveal more gradual progress.

The issue with using such pass-or-fail tests to infer emergent behavior, the researchers say, is that nonlinear output and lack of data in smaller models creates the illusion of new skills emerging in larger ones. Simply put, a smaller model may be very nearly right in its answer to a question, but because it is evaluated using the binary Exact String Match, it will be marked wrong whereas a larger model will hit the target and get full credit.
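
To make the mechanism concrete, here is a small, self-contained sketch; the task, model outputs, and scale labels are invented for illustration and are not taken from the paper. The same gradual improvement registers as a sudden jump under an all-or-nothing exact-match metric, but as steady progress under a graded similarity score.

```python
# Hypothetical illustration: the same gradual improvement across model sizes
# looks like a sudden "emergent" jump under an all-or-nothing metric, but like
# steady progress under a graded one. Task, outputs, and scales are invented.
from difflib import SequenceMatcher

TARGET = "3612"  # e.g. the answer to a multi-digit multiplication prompt

# Simulated answers from models of increasing size: each larger model
# gets closer to the target, but only the largest matches it exactly.
outputs_by_scale = {
    "10^8 params":  "4000",
    "10^9 params":  "3600",
    "10^10 params": "3610",
    "10^11 params": "3612",
}

def exact_string_match(output: str, target: str) -> float:
    """All-or-nothing metric: credit only for an exact match."""
    return 1.0 if output.strip() == target else 0.0

def graded_similarity(output: str, target: str) -> float:
    """Continuous metric: character-level similarity in [0, 1]."""
    return SequenceMatcher(None, output.strip(), target).ratio()

for scale, out in outputs_by_scale.items():
    print(f"{scale:>12}: answer={out!r:8} "
          f"exact={exact_string_match(out, TARGET):.1f} "
          f"graded={graded_similarity(out, TARGET):.2f}")

# exact stays at 0.0 until the largest model, then jumps to 1.0 ("emergence"),
# while graded rises smoothly (0.00 -> 0.50 -> 0.75 -> 1.00): the underlying
# capability was improving all along; only the metric was discontinuous.
```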

It’s a nuanced situation. Yes, larger models can summarize text and translate languages. Yes, larger models will generally perform better and can do more than smaller ones, but their sudden breakthrough in abilities – an unexpected emergence of capabilities – is an illusion: the smaller models are potentially capable of the same sort of thing but the benchmarks are not in their favor. The tests favor larger models, leading people in the industry to assume the larger models enjoy a leap in capabilities once they get to a certain size.

In reality, the change in abilities is more gradual as you scale up or down. The upshot for you and me is that applications may not need a huge but super powerful language model; a smaller one that is cheaper and faster to customize, test, and run may do the trick.

[…]

In short, the supposed emergent abilities of LLMs arise from the way the data is being analyzed and not from unforeseen changes to the model as it scales. The researchers emphasize they’re not precluding the possibility of emergent behavior in LLMs; they’re simply stating that previous claims of emergent behavior look like ill-considered metrics.

[…]

Source: LLM emergent behavior written off as ‘a mirage’ by study • The Register

Fallout continues from fake net neutrality comments

Three digital marketing firms have agreed to pay $615,000 to resolve allegations that they submitted at least 2.4 million fake public comments to influence American internet policy.

New York Attorney General Letitia James announced last week the agreement with LCX, Lead ID, and Ifficient, each of which was found to have fabricated public comments submitted in 2017 to convince the Federal Communications Commission (FCC) to repeal net neutrality.

Net neutrality refers to a policy requiring internet service providers to treat people’s internet traffic more or less equally, which some ISPs opposed because they would have preferred to act as gatekeepers in a pay-to-play regime. The neutrality rules were passed in 2015 at a time when it was feared large internet companies would eventually eradicate smaller rivals by bribing ISPs to prioritize their connections and downplay the competition.

[…]

in 2017 Ajit Pai, appointed chairman of the FCC by the Trump administration, successfully spearheaded an effort to tear up those rules and remake US net neutrality in a way more amenable to broadband giants. And there was a public comment period on the initiative.

It was a massive sham. The Office of the Attorney General (OAG) investigation [PDF] found that 18 million of 22 million comments submitted to the FCC were fake, both for and against net neutrality.

The broadband industry’s attempt in 2017 to have the FCC repeal the net neutrality rules accounted for more than 8.5 million fake comments at a cost of $4.2 million.

“The effort was intended to create the appearance of widespread grassroots opposition to existing net neutrality rules, which — as described in an internal campaign planning document — would help provide ‘cover’ for the FCC’s proposed repeal,” the report explained.

The report also stated an unidentified 19-year-old was responsible for more than 7.7 million of 9.3 million fake comments opposing the repeal of net neutrality. These were generated using software that fabricated identities. The origin of the other 1.6 million fake comments is unknown.

LCX, Lead ID, and Ifficient were said to have taken a different approach, one that allegedly involved reuse of old consumer data from different marketing or advocacy campaigns, purchased or obtained through misrepresentation. LCX is said to have obtained some of its data from “a large data breach file found on the internet.”

[…]

This was the second such agreement for the state of New York, which two years ago got a different set of digital marketing firms – Fluent, Opt-Intelligence, and React2Media – to pay $4.4 million to disgorge funds earned for distributing about 5.4 million fake public comments related to the FCC’s net neutrality process.

[…]

astroturfing – corporate messaging masquerading as grassroots public opinion.

[…]

“no federal laws or regulations exist that limit a public relations firm’s ability to engage in astroturfing.”

[…]

Source: Fallout continues from ‘fake net neutrality comment’ claims • The Register

Ex-Ubiquiti engineer behind “breathtaking” data theft, attempts to frame co-workers, calls it a security drill, assaults stock price: 6-year prison term

An ex-Ubiquiti engineer, Nickolas Sharp, was sentenced to six years in prison yesterday after pleading guilty in a New York court to stealing tens of gigabytes of confidential data, demanding a $1.9 million ransom from his former employer, and then publishing the data publicly when his demands were refused.

[…]

In a court document, Sharp claimed that Ubiquiti CEO Robert Pera had prevented Sharp from “resolving outstanding security issues,” and Sharp told the judge that this led to an “idiotic hyperfixation” on fixing those security flaws.

However, even if that was Sharp’s true motivation, the judge, Katherine Polk Failla, did not accept his justification of his crimes, which include wire fraud, intentionally damaging protected computers, and lying to the FBI.

“It was not up to Mr. Sharp to play God in this circumstance,” Failla said.

US attorney for the Southern District of New York, Damian Williams, argued that Sharp was not a “cybersecurity vigilante” but an “inveterate liar and data thief” who was “presenting a contrived deception to the Court that this entire offense was somehow just a misguided security drill.” Williams said that Sharp made “dozens, if not hundreds, of criminal decisions” and even implicated innocent co-workers to “divert suspicion.” Sharp also had already admitted in pre-sentencing that the cyber attack was planned for “financial gain.” Williams said Sharp did it seemingly out of “pure greed” and ego because Sharp “felt mistreated”—overworked and underpaid—by the IT company.

Court documents show that Ubiquiti spent “well over $1.5 million dollars and hundreds of hours of employee and consultant time” trying to remediate what Williams described as Sharp’s “breathtaking” theft. But the company lost much more than that when Sharp attempted to conceal his crimes—posing as a whistleblower, planting false media reports, and contacting US and foreign regulators to investigate Ubiquiti’s alleged downplaying of the data breach. Within a single day after Sharp planted false reports, stocks plummeted, causing Ubiquiti to lose over $4 billion in market capitalization value, court documents show.

[…]

In his sentencing memo, Williams said that Sharp’s characterization of the cyberattack as a security drill does not align with the timeline of events leading up to his arrest in December 2021. The timeline instead appears to reveal a calculated plan to conceal the data theft and extort nearly $2 million from Ubiquiti.

Sharp began working as a Ubiquiti senior software engineer and “Cloud Lead” in 2018, where he was paid $250,000 annually and had tasks including software development and cloud infrastructure security. About two years into the gig, Sharp purchased a VPN subscription to Surfshark in July 2020 and then seemingly began hunting for another job. By December 9, 2020, he’d lined up another job. The next day, he used his Ubiquiti security credentials to test his plan to copy data repositories while masking his IP address by using Surfshark.

Less than two weeks later, Sharp executed his plan, and he might have gotten away with it if not for a “slip-up” he never could have foreseen. While copying approximately 155 data repositories, an Internet outage temporarily disabled his VPN. When Internet service was restored, unbeknownst to Sharp, Ubiquiti logged his home IP address before the VPN tool could turn back on.

Two days later, Sharp was so bold as to ask a senior cybersecurity employee if he could be paid for submitting vulnerabilities to the company’s HackerOne bug bounty program, which seemed suspicious, court documents show. Still unaware of his slip-up, through December 26, 2020, Sharp continued to access company data using Surfshark, actively covering his trails by deleting evidence of his activity within a day and modifying evidence to make it seem like other Ubiquiti employees were using the credentials he used during the attack.

Sharp only stopped accessing the data when other employees discovered evidence of the attack on December 28, 2020. Seemingly unfazed, Sharp joined the team investigating the attack before sending his ransom email on January 7, 2021.

Ubiquiti chose not to pay the ransom and instead got the FBI involved. Soon after, Sharp’s slip-up showing his home IP put the FBI on his trail. At work, Sharp suggested his home IP was logged in an attempt to frame him, telling coworkers, “I’d be pretty fucking incompetent if I left my IP in [the] thing I requested, downloaded, and uploaded” and saying that would be the “shittiest cover up ever lol.”

While the FBI analyzed all of Sharp’s work devices, Sharp wiped and reset the laptop he used in the attack but brazenly left the laptop at home, where it was seized during a warranted FBI search in March 2021.

After the FBI search, Sharp began posing as a whistleblower, contacting journalists and regulators to falsely warn that Ubiquiti’s public disclosure and response to the cyberattack were insufficient. He said the company had deceived customers and downplayed the severity of the breach, which was actually “catastrophic.” The whole time, Williams noted in his sentencing memo, Sharp knew that the attack had been accomplished using his own employee credentials.

This was “far from a hacker targeting a vulnerability open to third parties,” Williams said. “Sharp used credentials legitimately entrusted to him by the company, to steal data and cover his tracks.”

“At every turn, Sharp acted consistent with the unwavering belief that his sophistication and cunning were sufficient to deceive others and conceal his crime,” Williams said.

[…]

Source: Ex-Ubiquiti engineer behind “breathtaking” data theft gets 6-year prison term | Ars Technica

Fake scientific papers are alarmingly common and becoming more so

When neuropsychologist Bernhard Sabel put his new fake-paper detector to work, he was “shocked” by what it found. After screening some 5000 papers, he estimates up to 34% of neuroscience papers published in 2020 were likely made up or plagiarized; in medicine, the figure was 24%. Both numbers, which he and colleagues report in a medRxiv preprint posted on 8 May, are well above levels they calculated for 2010—and far larger than the 2% baseline estimated in a 2022 publishers’ group report.

[…]

Journals are awash in a rising tide of scientific manuscripts from paper mills—secretive businesses that allow researchers to pad their publication records by paying for fake papers or undeserved authorship. “Paper mills have made a fortune by basically attacking a system that has had no idea how to cope with this stuff,” says Dorothy Bishop, a University of Oxford psychologist who studies fraudulent publishing practices. A 2 May announcement from the publisher Hindawi underlined the threat: It shut down four of its journals it found were “heavily compromised” by articles from paper mills.

Sabel’s tool relies on just two indicators—authors who use private, noninstitutional email addresses, and those who list an affiliation with a hospital. It isn’t a perfect solution, because of a high false-positive rate. Other developers of fake-paper detectors, who often reveal little about how their tools work, contend with similar issues.
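
For illustration only, here is a rough sketch of what a screen built on those two indicators might look like. The field names, the sample records, and the choice to require both indicators together are assumptions; the article does not describe how Sabel’s tool actually combines them.

```python
# Hypothetical sketch of a two-indicator screen like the one described above:
# flag a paper when the corresponding author uses a private (non-institutional)
# email address AND lists a hospital affiliation. Field names, sample records,
# and the AND-combination are assumptions; this is not Sabel's actual tool.

PRIVATE_EMAIL_DOMAINS = {"gmail.com", "163.com", "qq.com", "hotmail.com", "yahoo.com"}
HOSPITAL_KEYWORDS = ("hospital", "clinic", "medical center")

def looks_suspect(paper: dict) -> bool:
    """Return True if the paper trips both red-flag indicators."""
    email_domain = paper["corresponding_email"].rsplit("@", 1)[-1].lower()
    private_email = email_domain in PRIVATE_EMAIL_DOMAINS
    hospital_affiliation = any(
        kw in paper["affiliation"].lower() for kw in HOSPITAL_KEYWORDS
    )
    return private_email and hospital_affiliation

papers = [
    {"title": "Paper A", "corresponding_email": "jdoe@example-university.edu",
     "affiliation": "Dept. of Neuroscience, Example University"},
    {"title": "Paper B", "corresponding_email": "author2020@163.com",
     "affiliation": "No. 3 People's Hospital, Example City"},
]

for p in papers:
    print(p["title"], "-> flagged" if looks_suspect(p) else "-> clear")
```

A screen this crude is bound to mis-flag legitimate work, since plenty of genuine clinician-authors use private email addresses and hospital affiliations, which is consistent with the high false-positive rate the article mentions.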

[…]

To fight back, the International Association of Scientific, Technical, and Medical Publishers (STM), representing 120 publishers, is leading an effort called the Integrity Hub to develop new tools. STM is not revealing much about the detection methods, to avoid tipping off paper mills. “There is a bit of an arms race,” says Joris van Rossum, the Integrity Hub’s product director. He did say one reliable sign of a fake is referencing many retracted papers; another involves manuscripts and reviews emailed from internet addresses crafted to look like those of legitimate institutions.

Twenty publishers—including the largest, such as Elsevier, Springer Nature, and Wiley—are helping develop the Integrity Hub tools, and 10 of the publishers are expected to use a paper mill detector the group unveiled in April. STM also expects to pilot a separate tool this year that detects manuscripts simultaneously sent to more than one journal, a practice considered unethical and a sign they may have come from paper mills.

[…]

STM hasn’t yet generated figures on accuracy or false-positive rates because the project is too new. But catching as many fakes as possible typically produces more false positives. Sabel’s tool correctly flagged nearly 90% of fraudulent or retracted papers in a test sample. However, it marked up to 44% of genuine papers as fake, so results still need to be confirmed by skilled reviewers.
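
Those two figures translate into a lot of noise for reviewers. A back-of-the-envelope calculation shows why: the 90% and 44% figures are from the article, while the assumed share of fakes in the screened pool is illustrative, loosely based on the 24–34% estimates above.

```python
# Back-of-the-envelope: how trustworthy is a "flagged" verdict?
# Sensitivity and false-positive rate are the article's figures; the assumed
# fraction of fakes in the screened pool is illustrative only.

sensitivity = 0.90          # ~90% of fraudulent/retracted papers flagged
false_positive_rate = 0.44  # up to 44% of genuine papers also flagged
fake_share = 0.30           # assumed fraction of fakes in the pool

flagged_fakes = fake_share * sensitivity
flagged_genuine = (1 - fake_share) * false_positive_rate
precision = flagged_fakes / (flagged_fakes + flagged_genuine)

print(f"Share of papers flagged: {flagged_fakes + flagged_genuine:.1%}")
print(f"Chance a flagged paper is actually fake: {precision:.1%}")

# With these numbers, roughly 47% of flagged papers are actually fake -- close
# to a coin flip, which is why results still need confirmation by skilled reviewers.
```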

[…]

Publishers embracing gold open access—under which journals collect a fee from authors to make their papers immediately free to read when published—have a financial incentive to publish more, not fewer, papers. They have “a huge conflict of interest” regarding paper mills, says Jennifer Byrne of the University of Sydney, who has studied how paper mills have doctored cancer genetics data.

The “publish or perish” pressure that institutions put on scientists is also an obstacle. “We want to think about engaging with institutions on how to take away perhaps some of the [professional] incentives which can have these detrimental effects,” van Rossum says. Such pressures can push clinicians without research experience to turn to paper mills, Sabel adds, which is why hospital affiliations can be a red flag.

[…]

Source: Fake scientific papers are alarmingly common | Science | AAAS

A closed approach to building a detection tool is an incredibly bad idea – no-one can really know what it is doing, and certain types of research will be flagged every time, for example. This type of tool especially needs to be accountable to, and changeable by, the peers who have to review the papers it spits out as suspect. Only by keeping this type of tool open can it be improved by third parties who also have a vested interest in improving fake-detection rates (eg universities, which you would think have quite some smart people). Keeping it closed also lends a false sense of security – especially if the detection methods have already leaked and paper mills from certain sources are circumventing them. Security by obscurity is never ever a good idea.