Uncle Sam paid to develop a cancer drug and now one guy will get to charge whatever he wants for it

The argument for pharma patents: making new medicines is expensive, and medicines are how we save ourselves from cancer and other diseases. Therefore, we will award government-backed monopolies – patents – to pharma companies so they will have an incentive to invest their shareholders’ capital in research.

There’s plenty wrong with this argument. For one thing, pharma companies use their monopoly winnings to sell drugs, not invent drugs. For every dollar pharma spends on research, it spends three dollars on marketing:

https://www.bu.edu/sph/files/2015/05/Pharmaceutical-Marketing-and-Research-Spending-APHA-21-Oct-01.pdf

And that “R&D” isn’t what you’re thinking of, either. Most R&D spending goes to “evergreening” – coming up with minor variations on existing drugs in a bid to extend those patents for years or decades:

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3680578/

Evergreening got a lot of attention recently when John Green rained down righteous fire upon Johnson & Johnson for their sneaky tricks to prevent poor people from accessing affordable TB meds, prompting this excellent explainer from the Arm and a Leg Podcast:

https://armandalegshow.com/episode/john-green-part-1/

Another thing those monopoly profits are useful for: “pay for delay,” where pharma companies bribe generic manufacturers not to make cheap versions of drugs whose patents have expired. Sure, it’s illegal, but that doesn’t stop ’em:

https://www.ftc.gov/news-events/topics/competition-enforcement/pay-delay

But it’s their money, right? If they want to spend it on bribes or evergreening or marketing, at least some of that money is going into drugs that’ll keep you and the people you love from enduring unimaginable pain or dying slowly and hard. Surely that warrants a patent.

Let’s say it does. But what about when a pharma company gets a patent on a life-saving drug that the public paid to develop, test and refine? Publicly funded work is presumptively in the public domain, from NASA R&D to the photos that park rangers shoot of our national parks. The public pays to produce this work, so it should belong to the public, right?

That was the deal – until Congress passed the Bayh-Dole Act in 1980. Under Bayh-Dole, government-funded inventions are given away – to for-profit corporations, who get to charge us whatever they want to access the things we paid to make. The basis for this is a racist hoax called “The Tragedy of the Commons,” written by the eugenicist white supremacist Garrett Hardin and published by Science in 1968:

https://memex.craphound.com/2019/10/01/the-tragedy-of-the-commons-how-ecofascism-was-smuggled-into-mainstream-thought/

Hardin invented an imaginary history in which “commons” – things owned and shared by a community – are inevitably overrun by selfish assholes, a fact that prompts nice people to also overrun these commons, so as to get some value out of them before they are gobbled up by people who read Garrett Hardin essays.

Hardin asserted this as a historical fact, but he cited no instances in which it happened. But when the Nobel-winning Elinor Ostrom actually went and looked at how commons are managed, she found that they are robust and stable over long time periods, and are a supremely efficient way of managing resources:

https://pluralistic.net/2023/05/04/analytical-democratic-theory/#epistocratic-delusions

The reason Hardin invented an imaginary history of tragic commons was to justify enclosure: moving things that the public owned and used freely into private ownership. Or, to put it more bluntly, Hardin invented a pseudoscientific justification for giving away parks, roads and schools to rich people and letting them charge us to use them.

To arrive at this fantasy, Hardin deployed one of the most important analytical tools of modern economics: introspection. As Ely Devons put it: “If economists wished to study the horse, they wouldn’t go and look at horses. They’d sit in their studies and say to themselves, ‘What would I do if I were a horse?’”

https://pluralistic.net/2022/10/27/economism/#what-would-i-do-if-i-were-a-horse

Hardin’s hoax swept from the fringes to the center and became received wisdom – so much so that by 1980, Senators Birch Bayh and Bob Dole were able to pass a law that gave away publicly funded medicine to private firms, because otherwise these inventions would be “overgrazed” by greedy people, denying the public access to lifesaving drugs.

On September 21, the NIH quietly published an announcement of one of these pharmaceutical transfers, buried in a list of 31 patent assignments in the Federal Register:

https://public-inspection.federalregister.gov/2023-20487.pdf

The transfer in question is a patent for using T-cell receptors (TCRs) to treat solid tumors caused by HPV, one of the only patents for treating solid tumors with TCRs. The beneficiary of this transfer is Scarlet TCR, a Delaware company with no website or SEC filings and ownership shrouded in mystery:

https://www.bizapedia.com/de/scarlet-tcr-inc.html

One person who pays attention to this sort of thing is James Love, co-founder of Knowledge Ecology International, a nonprofit that has worked for decades for access to medicines. Love sleuthed out at least one person behind Scarlet TCR: Christian Hinrichs, a researcher at Rutgers who used to work at the NIH’s National Cancer Institute:

https://www.nih.gov/research-training/lasker-clinical-research-scholars/tenured-former-scholars

Love presumes Hinrichs is the owner of Scarlet TCR, but neither the NIH nor Scarlet TCR nor Hinrichs will confirm it. Hinrichs was one of the publicly-funded researchers who worked on the new TCR therapy, for which he received a salary.

This new drug was paid for out of the public purse. The basic R&D – salaries for Hinrichs and his collaborators, as well as funding for their facilities – came out of NIH grants. So did the funding for the initial Phase I trial, and the ongoing large Phase II trial.

As David Dayen writes in The American Prospect, the proposed patent transfer will make Hinrichs a very wealthy man (Love calls it “generational wealth”):

https://prospect.org/health/2023-10-18-nih-how-to-become-billionaire-program/

This wealth will come by charging us – the public – to access a drug that we paid to produce. The public took all the risks to develop this drug, and Hinrichs stands to become a billionaire by reaping the rewards – rewards that will come by extracting fortunes from terrified people who don’t want to die from tumors that are eating them alive.

The transfer of this patent is indefensible. The government isn’t even waiting until the Phase II trials are complete to hand over our commonly owned science.

But there’s still time. The NIH is about to get a new director, Monica Bertagnolli – Hinrichs’s former boss – who will need to go before the Senate Health, Education, Labor and Pensions Committee for confirmation. Love is hoping that the confirmation hearing will present an opportunity to question Bertagnolli about the transfer – specifically, why the drug isn’t being nonexclusively licensed to lots of drug companies who will have to compete to sell the cheapest possible version.

Source: Pluralistic: Uncle Sam paid to develop a cancer drug and now one guy will get to charge whatever he wants for it (19 Oct 2023) – Pluralistic: Daily links from Cory Doctorow

Universal Music sues AI start-up Anthropic for scraping song lyrics – will they come after you for having read the lyrics or memorised the song next?

Universal Music has filed a copyright infringement lawsuit against artificial intelligence start-up Anthropic, as the world’s largest music group battles against chatbots that churn out its artists’ lyrics.

Universal and two other music companies allege that Anthropic scrapes their songs without permission and uses them to generate “identical or nearly identical copies of those lyrics” via Claude, its rival to ChatGPT.

When Claude is asked for lyrics to the song “I Will Survive” by Gloria Gaynor, for example, it responds with “a nearly word-for-word copy of those lyrics,” Universal, Concord, and ABKCO said in a filing with a US court in Nashville, Tennessee.

“This copyrighted material is not free for the taking simply because it can be found on the Internet,” the music companies said, while claiming that Anthropic had “never even attempted” to license their copyrighted work.

[…]

Universal earlier this year asked Spotify and other streaming services to cut off access to its music catalogue for developers using it to train AI technology.

Source: Universal Music sues AI start-up Anthropic for scraping song lyrics | Ars Technica

So don’t think about memorising or even listening to copyrighted material from them because apparently they will come after you with the mighty and crazy arm of the law!

Faster-Than-Light ‘Quasiparticles’ Touted as Futuristic Light Source

[…]But these light sources [needed to experiment in the quantum realm] are not common. They’re expensive to build, require large amounts of land, and can be booked up by scientists months in advance. Now, a team of physicists posit that quasiparticles—groups of electrons that behave as if they were one particle—can be used as light sources in smaller lab and industry settings, making it easier for scientists to make discoveries wherever they are. The team’s research describing their findings is published today in Nature Photonics.

“No individual particles are moving faster than the speed of light, but features in the collection of particles can, and do,” said John Palastro, a physicist at the Laboratory for Laser Energetics at the University of Rochester and co-author of the new study, in a video call with Gizmodo. “This does not violate any rules or laws of physics.”

[…]

In their paper, the team explores the possibility of making plasma accelerator-based light sources as bright as larger free electron lasers by making their light more coherent, vis-a-vis quasiparticles. The team ran simulations of quasiparticles’ properties in a plasma using supercomputers made available by the European High Performance Computing Joint Undertaking (EuroHPC JU), according to a University of Rochester release.

[…]

In a linear accelerator, “every electron is doing the same thing as the collective thing,” said Bernardo Malaca, a physicist at the Instituto Superior Técnico in Portugal and the study’s lead author, in a video call with Gizmodo. “There is no electron that’s undulating in our case, but we’re still making an undulator-like spectrum.”

The researchers liken quasiparticles to the Mexican wave, a popular collective behavior in which sports fans stand up and sit down in sequence. A stadium full of people can give the illusion of a wave rippling around the venue, though no one person is moving laterally.

“One is clearly able to see that the wave could in principle travel faster than any human could, provided the audience collaborates. Quasiparticles are very similar, but the dynamics can be more extreme,” said co-author Jorge Vieira, also a physicist at the Instituto Superior Técnico, in an email to Gizmodo. “For example, single particles cannot travel faster than the speed of light, but quasiparticles can travel at any velocity, including superluminal.”

“Because quasiparticles are a result of a collective behavior, there are no limits for its acceleration,” Vieira added. “In principle, this acceleration could be as strong as in the vicinity of a black-hole, for example.”
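
The stadium-wave picture is easy to make concrete. In the toy simulation below (an illustration of the analogy only, not the paper’s plasma model), each “fan” only moves vertically on a fixed schedule, yet the crest of the wave sweeps along at whatever phase velocity c we choose, far faster than any individual moves:

```python
import numpy as np

# Toy "stadium wave": fans sit at fixed positions x and only move up and down.
# The crest is a feature of the collective pattern, so its speed c can be
# chosen freely -- no individual fan moves horizontally at all.
x = np.linspace(0, 100, 1001)        # fan positions along the stand (metres)
c = 1000.0                           # chosen phase velocity of the crest (m/s)

def height(t):
    # Each fan stands up and sits down on a schedule offset by x / c.
    return np.exp(-((t - x / c) / 0.01) ** 2)

def crest_position(t):
    return x[np.argmax(height(t))]

t0, t1 = 0.02, 0.08
speed = (crest_position(t1) - crest_position(t0)) / (t1 - t0)
print(f"crest speed = {speed:.0f} m/s")   # tracks c, however large c is chosen
```

Setting c to any larger value changes nothing for the individual fans; only the timing of the collective pattern differs.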

[…]

The difference between what is perceptually happening and actually happening regarding traveling faster than light is an “unneeded distinction,” Malaca said. “There are actual things that travel faster than light, which are not individual particles, but are waves or current profiles. Those travel faster than light and can produce real faster-than-light-ish effects. So you measure things that you only associate with superluminal particles.”

The group found that the electrons’ collective quality doesn’t have to be as pristine as the beams produced by large facilities, and could practically be implemented in more “table-top” settings, Palastro said. In other words, scientists could run experiments using very bright light sources on-site, instead of having to wait for an opening at an in-demand linear accelerator.

Source: Faster-Than-Light ‘Quasiparticles’ Touted as Futuristic Light Source

Code.org Presses Washington To Make Computer Science a High School Graduation Requirement – this should be everywhere globally

In July, Seattle-based and tech-backed nonprofit Code.org announced its 10th policy recommendation for all states: “to require all students to take computer science (CS) to earn a high school diploma.” In August, Washington State Senator Lisa Wellman phoned in her plans to introduce a bill to make computer science a Washington high school graduation requirement to the state’s Board of Education, indicating that the ChatGPT-sparked AI craze and Code.org had helped convince her of the need. Wellman, a former teacher who worked as a Programmer/System Analyst in the ’80s before becoming an Apple VP (Publishing) in the ’90s, also indicated that exposure to CS given to students in fifth grade could be sufficient to satisfy a HS CS requirement. In 2019, Wellman sponsored Microsoft-supported SB 5088, which required all Washington state public high schools to offer a CS class. Wellman also sponsored SB 5299 in 2021, which allows high school students to take a computer science elective in place of a third year of math or science (that may be required for college admission) to count towards graduation requirements.

And in October, Code.org CEO Hadi Partovi appeared before the Washington State Board of Education, driving home points Senator Wellman made in August with a deck containing slides calling for Washington to “require that all students take computer science to earn a high school diploma” and to “require computer science within all teacher certifications.” Like Wellman, Partovi suggested the CS high school requirement might be satisfied by middle school work (he alternatively suggested one year of foreign language could be dropped to accommodate a HS CS course). Partovi noted that Washington contained some of the biggest promoters of K-12 CS in Microsoft Philanthropies’ TEALS (TEALS founder Kevin Wang is a member of the Washington State Board of Education) and Code.org, as well as some of the biggest funders of K-12 CS in Amazon and Microsoft — both of which are $3,000,000+ Platinum Supporters of Code.org and have top execs on Code.org’s Board of Directors.

Source: Code.org Presses Washington To Make Computer Science a High School Graduation Requirement – Slashdot

Most kids have no clue how a computer works, let alone how to program one. It’s not difficult, but it is an essential skill in today’s society.

IBM chip speeds up AI by combining processing and memory in the core


IBM’s massive NorthPole processor chip eliminates the need to frequently access external memory, and so performs tasks such as image recognition faster than existing architectures do — while consuming vastly less power.

“Its energy efficiency is just mind-blowing,” says Damien Querlioz, a nanoelectronics researcher at the University of Paris-Saclay in Palaiseau. The work, published in Science, shows that computing and memory can be integrated on a large scale, he says. “I feel the paper will shake the common thinking in computer architecture.”

NorthPole runs neural networks: multi-layered arrays of simple computational units programmed to recognize patterns in data. A bottom layer takes in data, such as the pixels in an image; each successive layer detects patterns of increasing complexity and passes information on to the next layer. The top layer produces an output that, for example, can express how likely an image is to contain a cat, a car or other objects.
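
As a toy illustration of that layered flow (not IBM’s actual network or NorthPole’s programming model), here is a minimal forward pass in NumPy: pixel data enters the bottom layer, each layer transforms the output of the one below it, and the top layer emits class probabilities. Sizes and weights are arbitrary and untrained.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    # Turn raw scores into probabilities that sum to 1.
    e = np.exp(z - z.max())
    return e / e.sum()

sizes = [64, 32, 16, 3]              # 64 "pixels" in, 3 class scores out
params = [(rng.normal(0, 0.1, (m, n)), np.zeros(n))
          for m, n in zip(sizes, sizes[1:])]

image = rng.random(64)               # stand-in for an input image
h = image
for w, b in params[:-1]:
    h = np.maximum(0.0, h @ w + b)   # hidden layers: weighted sum + ReLU
w, b = params[-1]
probs = softmax(h @ w + b)           # e.g. P(cat), P(car), P(other)
```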

[…]

NorthPole is made of 256 computing units, or cores, each of which contains its own memory. “You’re mitigating the Von Neumann bottleneck within a core,” says Dharmendra Modha, IBM’s chief scientist for brain-inspired computing at the company’s Almaden research centre in San Jose.

The cores are wired together in a network inspired by the white-matter connections between parts of the human cerebral cortex, Modha says. This and other design principles — most of which existed before but had never been combined in one chip — enable NorthPole to beat existing AI machines by a substantial margin in standard benchmark tests of image recognition. It also uses one-fifth of the energy of state-of-the-art AI chips, despite not using the most recent and most miniaturized manufacturing processes. If the NorthPole design were implemented with the most up-to-date manufacturing process, its efficiency would be 25 times better than that of current designs, the authors estimate.

[…]

NorthPole brings memory units as physically close as possible to the computing elements in the core. Elsewhere, researchers have been developing more-radical innovations using new materials and manufacturing processes. These enable the memory units themselves to perform calculations, which in principle could boost both speed and efficiency even further.

Another chip, described last month, does in-memory calculations using memristors, circuit elements able to switch between being a resistor and a conductor. “Both approaches, IBM’s and ours, hold promise in mitigating latency and reducing the energy costs associated with data transfers.”

[…]

Another approach, developed by several teams — including one at a separate IBM lab in Zurich, Switzerland — stores information by changing a circuit element’s crystal structure. It remains to be seen whether these newer approaches can be scaled up economically.

Source: ‘Mind-blowing’ IBM chip speeds up AI

Equifax poked with paltry $13.4 million following 147m customer data breach in 2017

Credit bureau Equifax has been fined US$13.4 million by the Financial Conduct Authority (FCA), a UK financial watchdog, following its involvement in “one of the largest” data breaches ever.

The cyber security incident took place in 2017, when Equifax’s US-based parent company, Equifax Inc., suffered a data breach in which malicious actors accessed the personal data of up to 147.9 million customers. The FCA also revealed that, because this data was stored on company servers in the US, the hack exposed the personal data of 13.8 million UK customers.

The data accessed during the hack included Equifax membership login details, customer names, dates of birth, partial credit card details and addresses.

According to the FCA, the cyber attack and subsequent data breach was “entirely preventable” and exposed UK customers to financial crime.

“There were known weaknesses in Equifax Inc’s data security systems and Equifax failed to take appropriate action in response to protect UK customer data,” the FCA explained.

The authority also noted that the UK arm of Equifax was not made aware that the data had been accessed by malicious actors until six weeks after the cyber security incident was discovered by Equifax Inc.

The company was fined $60,727 by the British Information Commissioner’s Office (ICO) relating to the data breach in 2018.

On October 13th, Equifax stated that it had fully cooperated with the FCA throughout the extensive investigation. The FCA also said that the fine levelled at Equifax Inc had been reduced following the company’s agreement to cooperate with the watchdog and resolve the matter.

Patricio Remon, president for Europe at Equifax, said that since the cyber attack against Equifax in 2017, the company has “invested over $1.5 billion in a security and technology transformation”. Remon also said that “few companies have invested more time and resources than Equifax to ensure that consumers’ information is protected”.

Source: Equifax fined $13.4 million following data breach

Cisco Can’t Stop Using Hard-Coded Passwords

There’s a new Cisco vulnerability in its Emergency Responder product:

This vulnerability is due to the presence of static user credentials for the root account that are typically reserved for use during development. An attacker could exploit this vulnerability by using the account to log in to an affected system. A successful exploit could allow the attacker to log in to the affected system and execute arbitrary commands as the root user.

This is not the first time Cisco products have had hard-coded passwords made public. You’d think it would learn.
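
As an illustration of why static development credentials keep shipping, here is a naive scanner of the sort often bolted into CI pipelines to catch them before release. The patterns and sample lines are hypothetical; a real audit would use a dedicated secrets scanner with far richer rules.

```python
import re

# Naive patterns that often indicate credentials baked into source or config.
SUSPICIOUS = [
    re.compile(r"""(password|passwd|secret|api_key)\s*[:=]\s*['"][^'"]+['"]""",
               re.IGNORECASE),
]

def find_hardcoded(lines):
    # Return (line number, stripped line) for every suspicious-looking line.
    hits = []
    for lineno, line in enumerate(lines, start=1):
        if any(p.search(line) for p in SUSPICIOUS):
            hits.append((lineno, line.strip()))
    return hits

sample = [
    'db_password = "hunter2"   # ships to every customer',
    'user = input("login: ")',
]
print(find_hardcoded(sample))   # flags only the first line
```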

Source: Cisco Can’t Stop Using Hard-Coded Passwords – Schneier on Security

Google’s AI stoplight program leads to fewer stops, lower emissions

It’s been two years since Google first debuted Project Green Light, a novel means of addressing the street-level pollution caused by vehicles idling at stop lights.

[…]

Green Light uses machine learning systems to comb through Maps data to calculate the amount of traffic congestion present at a given light, as well as the average wait times of vehicles stopped there.
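
That kind of aggregation can be sketched in a few lines. The records, intersection names, and wait times below are invented for illustration; Google’s actual pipeline and data model are not public.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical per-vehicle records: (intersection id, seconds spent stopped).
stops = [
    ("5th_and_main", 42.0),
    ("5th_and_main", 0.0),      # rolled through on green
    ("5th_and_main", 61.5),
    ("oak_and_pine", 12.0),
]

waits = defaultdict(list)
for intersection, seconds in stops:
    waits[intersection].append(seconds)

avg_wait = {i: mean(w) for i, w in waits.items()}          # average delay
stop_rate = {i: sum(s > 0 for s in w) / len(w)             # share of trips stopped
             for i, w in waits.items()}
print(avg_wait["5th_and_main"], stop_rate["5th_and_main"])
```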

[…]

When the program was first announced in 2021, it had only been pilot tested at four intersections in Israel in partnership with the Israel National Roads Company, but Google had reportedly observed a “10 to 20 percent reduction in fuel and intersection delay time” during those tests. The pilot program has grown since then, spreading to a dozen partner cities around the world, including Rio de Janeiro, Brazil; Manchester, England; and Jakarta, Indonesia.

“Today we’re happy to share that… we plan to scale to more cities in 2024,” Yael Maguire, Google VP of Geo Sustainability, told reporters during a pre-brief event last week. “Early numbers indicate a potential for us to see a 30 percent reduction in stops.”

[…]

“Our AI recommendations work with existing infrastructure and traffic systems,” Maguire continued. “City engineers are able to monitor the impact and see results within weeks.” Maguire also noted that the Manchester test reportedly saw emission levels and air quality improve by as much as 18 percent. The company also touted the efficacy of its Maps routing in reducing emissions, with Maguire pointing out that it had “helped prevent more than 2.4 million metric tons of carbon emissions — the equivalent of taking about 500,000 fuel-based cars off the road for an entire year.”

Source: Google’s AI stoplight program is now calming traffic in a dozen cities worldwide

WHO recommends cheap malaria vaccine

The vaccine was developed by the University of Oxford and is only the second malaria vaccine ever developed.

Malaria kills mostly babies and infants, and has been one of the biggest scourges on humanity.

There are already agreements in place to manufacture more than 100 million doses a year.

It has taken more than a century of scientific effort to develop effective vaccines against malaria.

The disease is caused by a complex parasite, which is spread by the bite of blood-sucking mosquitoes. It is far more sophisticated than a virus as it hides from our immune system by constantly shape-shifting inside the human body.

[…]

The WHO said the effectiveness of the two vaccines was “very similar” and there was no evidence one was better than the other.

However, the key difference is the ability to manufacture the University of Oxford vaccine – called R21 – at scale.

The world’s largest vaccine manufacturer – the Serum Institute of India – is already lined up to make more than 100 million doses a year and plans to scale up to 200 million doses a year.

So far there are only 18 million doses of RTS,S.

The WHO said the new R21 vaccine would be a “vital additional tool”. Each dose costs $2-4 (£1.65 to £3.30) and four doses are needed per person. That is about half the price of RTS,S.
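
The per-person course cost follows directly from those figures:

```python
# Working out the full course cost per person from the quoted per-dose range.
dose_usd = (2, 4)          # each R21 dose costs $2-4
doses_per_person = 4
course_usd = tuple(d * doses_per_person for d in dose_usd)
print(f"full R21 course: ${course_usd[0]}-{course_usd[1]} per person")
```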

[…]

The parasite’s constant shape-shifting makes it hard to build up immunity naturally through catching malaria, and difficult to develop a vaccine against it.

It is almost two years to the day since the first vaccine – called RTS,S and developed by GSK – was backed by the WHO.

Source: Malaria vaccine big advance against major child killer – BBC News

Adobe previews AI upscaling to make blurry videos and GIFs look fresh

Adobe has developed an experimental AI-powered upscaling tool that greatly improves the quality of low-resolution GIFs and video footage. This isn’t a fully-fledged app or feature yet, and it’s not yet available for beta testing, but if the demonstrations seen by The Verge are anything to go by, it has some serious potential.

Adobe’s “Project Res-Up” uses diffusion-based upsampling technology (a class of generative AI that generates new data based on the data it’s trained on) to increase video resolution while simultaneously improving sharpness and detail.

In a side-by-side comparison that shows how the tool can upscale video resolution, Adobe took a clip from The Red House (1947) and upscaled it from 480 x 360 to 1280 x 960, a more than sevenfold increase in total pixel count. The resulting footage was much sharper, with the AI removing most of the blurriness and even adding in new details like hair strands and highlights. The results still carried a slightly unnatural look (as many AI videos and images do), but given the low initial video quality, it’s still an impressive leap compared to the upscaling on Nvidia’s Shield TV or Microsoft’s Video Super Resolution.

The footage below provided by Adobe matches what I saw in the live demonstration:

[Left: original, Right: upscaled] Running this clip from The Red House (1947) through Project Res-Up removes most of the blur and makes details like the character’s hair and eyes much sharper. Image: The Red House (1947) / United Artists / Adobe

Another demonstration showed a video being cropped to focus on a baby elephant, with the upscaling tool similarly boosting the low-resolution crop and eradicating most of the blur while also adding little details like skin wrinkles. It really does look as though the tool is sharpening low-contrast details that can’t be seen in the original footage. Impressively, the artificial wrinkles move naturally with the animal without looking overly artificial. Adobe also showed Project Res-Up upscaling GIFs to breathe some new life into memes you haven’t used since the days of MySpace.

[Left: original, Right: upscaled] Additional texture has been applied to this baby elephant to make the upscaled footage appear more natural and lifelike. Image: Adobe

The project will be revealed during the “Sneaks” section of the Adobe Max event later today, which the creative software giant uses to showcase future technologies and ideas that could potentially join Adobe’s product lineup. That means you won’t be able to try out Project Res-Up on your old family videos (yet), but its capabilities could eventually make their way into popular editing apps like Adobe Premiere Pro or Express. Previous Adobe Sneaks have since been released as apps and features, like Adobe Fresco and Photoshop’s Content-Aware Fill tool.

Source: Adobe previews AI upscaling to make blurry videos and GIFs look fresh – The Verge

Climate crisis will make Europe’s beer cost more and taste worse

Climate breakdown is already changing the taste and quality of beer, scientists have warned.

The quantity and quality of hops, a key ingredient in most beers, are being affected by global heating, according to a study. As a result, beer may become more expensive and manufacturers will have to adapt their brewing methods.

Researchers forecast that hop yields in European growing regions will fall by 4-18% by 2050 if farmers do not adapt to hotter and drier weather, while the content of alpha acids in the hops, which gives beers their distinctive taste and smell, will fall by 20-31%.
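
Naively multiplying the two projected declines together (assuming, purely for illustration, that yield and alpha-acid content fall independently) suggests the total alpha-acid harvest could drop considerably more than either figure alone:

```python
# Combining the study's two projections as independent multipliers
# (an assumption made for this back-of-envelope sketch only).
yield_drop = (0.04, 0.18)    # hop yields projected to fall 4-18% by 2050
alpha_drop = (0.20, 0.31)    # alpha-acid content projected to fall 20-31%

best = (1 - yield_drop[0]) * (1 - alpha_drop[0])    # 0.96 * 0.80 = 0.768
worst = (1 - yield_drop[1]) * (1 - alpha_drop[1])   # 0.82 * 0.69 = 0.5658
print(f"alpha acids harvested: {best:.0%} to {worst:.0%} of today's levels")
```

In other words, roughly a quarter to more than two-fifths of the alpha acids harvested per region could disappear under this naive combination.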

“Beer drinkers will definitely see the climate change, either in the price tag or the quality,” said Miroslav Trnka, a scientist at the Global Change Research Institute of the Czech Academy of Sciences and co-author of the study, published in the journal Nature Communications. “That seems to be inevitable from our data.”

Beer, the third-most popular drink in the world after water and tea, is made by fermenting malted grains like barley with yeast. It is usually flavoured with aromatic hops, a crop grown mostly in the middle latitudes that is sensitive to changes in light, heat and water.

[…]

Source: Climate crisis will make Europe’s beer cost more and taste worse, say scientists | Europe | The Guardian

Microplastics detected in clouds hanging atop two Japanese mountains

[…]

The clouds around Japan’s Mount Fuji and Mount Oyama contain concerning levels of the tiny plastic bits, and highlight how the pollution can be spread long distances, contaminating the planet’s crops and water via “plastic rainfall”.

The plastic was so concentrated in the samples researchers collected that it is thought to be causing clouds to form while giving off greenhouse gasses.

“If the issue of ‘plastic air pollution’ is not addressed proactively, climate change and ecological risks may become a reality, causing irreversible and serious environmental damage in the future,” the study’s lead author, Hiroshi Okochi, a professor at Waseda University, said in a statement.

The peer-reviewed paper was published in Environmental Chemistry Letters, and the authors believe it is the first to check clouds for microplastics.

[…]

Waseda researchers gathered samples at altitudes ranging between 1,300-3,776 meters, which revealed nine types of polymers, like polyurethane, and one type of rubber. The cloud’s mist contained about 6.7 to 13.9 pieces of microplastics per litre, and among them was a large volume of “water loving” plastic bits, which suggests the pollution “plays a key role in rapid cloud formation, which may eventually affect the overall climate”, the authors wrote in a press release.

That is potentially a problem because microplastics degrade much faster when exposed to ultraviolet light in the upper atmosphere, and give off greenhouse gasses as they do. A high concentration of these microplastics in clouds in sensitive polar regions could throw off the ecological balance, the authors wrote.

The findings highlight how microplastics are highly mobile and can travel long distances through the air and environment. Previous research has found the material in rain, and the study’s authors say the main source of airborne plastics may be seaspray, or aerosols, that are released when waves crash or ocean bubbles burst. Dust kicked up by cars on roads is another potential source, the authors wrote.

Source: Microplastics detected in clouds hanging atop two Japanese mountains

New Fairy Circles Identified at Hundreds of Sites Worldwide

Round discs of dirt known as “fairy circles” mysteriously appear like polka dots on the ground, sometimes spreading out for miles. The origins of this phenomenon have intrigued scientists for decades, and recent research indicates that fairy circles may be more widespread than previously thought.

Fairy circles in NamibRand Nature Reserve in Namibia; Photo: N. Juergens/AAAS/Science

Fairy circles had previously been sighted only in Southern Africa’s Namib Desert and the outback of Western Australia. A newly published study used artificial intelligence to identify vegetation patterns resembling fairy circles in hundreds of new locations across 15 countries on three continents.

Published in the journal Proceedings of the National Academy of Sciences, the new survey analyzed datasets containing high-resolution satellite images of drylands and arid ecosystems with scant rainfall from around the world.

Examining the new findings may help scientists understand fairy circles and the origins of their formation on a global scale. The researchers searched for patterns resembling fairy circles using a neural network, a type of AI that processes information in a manner similar to the human brain.

“The use of artificial intelligence based models on satellite imagery is the first time it has been done on a large scale to detect fairy-circle like patterns,” said lead study author Dr. Emilio Guirado, a data scientist with the Multidisciplinary Institute for Environmental Studies at the University of Alicante in Spain.

Drone flies over the NamibRand Nature Reserve; Photo: Dr. Stephan Getzin

The scientists first trained the neural network to recognize fairy circles by inputting more than 15,000 satellite images taken over Namibia and Australia. Then they fed the neural network satellite views of nearly 575,000 plots of land worldwide, each measuring approximately 2.5 acres.

The neural network scanned vegetation in those images and identified repeating circular patterns that resembled fairy circles, evaluating the circles’ shapes, sizes, locations, pattern densities, and distribution. The output was then reviewed by humans to double-check the work of the neural network.

“We had to manually discard some artificial and natural structures that were not fairy circles based on photo-interpretation and the context of the area,” Guirado explained.
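The workflow described above (train a classifier on labeled imagery, score unseen plots, then have humans vet the flagged candidates) can be sketched in miniature. Everything below is a toy stand-in: synthetic ring-patterned patches and a simple logistic-regression “model” rather than the study’s actual neural network and satellite data, just to show the shape of the pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_patch(has_circles: bool, size: int = 16) -> np.ndarray:
    """Synthetic 2-D 'vegetation' patch; circle-bearing patches get a ring."""
    patch = rng.normal(0.0, 0.1, (size, size))
    if has_circles:
        yy, xx = np.mgrid[:size, :size]
        ring = np.abs(np.hypot(yy - size / 2, xx - size / 2) - size / 4) < 1.5
        patch[ring] += 1.0
    return patch

def features(patch: np.ndarray) -> np.ndarray:
    return np.append(patch.ravel(), 1.0)  # flattened pixels + bias term

# 1) Training set: labeled patches (alternating negative/positive).
X = np.array([features(make_patch(i % 2 == 1)) for i in range(200)])
y = np.array([i % 2 for i in range(200)], dtype=float)

# Logistic regression fit by gradient descent (stand-in for the CNN).
w = np.zeros(X.shape[1])
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / len(y)

# 2) Score unseen patches; 3) flag high scorers for human review.
def flag_candidates(patches, threshold=0.5):
    return [i for i, patch in enumerate(patches)
            if 1.0 / (1.0 + np.exp(-features(patch) @ w)) > threshold]

candidates = flag_candidates([make_patch(True), make_patch(False)])
```

The human-review stage in the study corresponds to someone inspecting whatever `flag_candidates` returns and discarding false positives by hand, as Guirado describes.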

The results of the study showed 263 dryland locations that contained circular patterns similar to the fairy circles in Namibia and Australia. The spots were located in Africa, Madagascar, Midwestern Asia, and both central and Southwest Australia.

New fairy circles identified around the world; Photo: Thomas Dressler/imageBROKER/Shutterstock

The authors of the study also collected environmental data from the sites where the new circles were identified, in hopes that it might indicate what causes them to form. They determined that fairy-circle-like patterns were most likely to occur in dry, sandy soils that were highly alkaline and low in nitrogen. They also found that these patterns helped stabilize ecosystems, increasing an area’s resistance to disturbances such as extreme droughts and floods.

There are many different theories among experts regarding the creation of fairy circles: they may be caused by certain climate conditions, self-organization in plants, or insect activity, among other possibilities. The authors of the new study are optimistic that the new findings will help unlock the mysteries of this unique phenomenon.

Source: New Fairy Circles Identified at Hundreds of Sites Worldwide – TOMORROW’S WORLD TODAY®

Priming and placebo effects shape how humans interact with AI

The preconceived notions people have about AI — and what they’re told before they use it — mold their experiences with these tools in ways researchers are beginning to unpack.

Why it matters: As AI seeps into medicine, news, politics, business and a host of other industries and services, human psychology gives the technology’s creators levers they can use to enhance users’ experiences — or manipulate them.

What they’re saying: “AI is only half of the human-AI interaction,” says Ruby Liu, a researcher at the MIT Media Lab.

  • The technology’s developers “always think that the problem is optimizing AI to be better, faster, less hallucinations, fewer biases, better aligned,” says Pattie Maes, who directs the MIT Media Lab’s Fluid Interfaces Group.
  • “But we have to see this whole problem as a human-plus-AI problem. The ultimate outcomes don’t just depend on the AI and the quality of the AI. It depends on how the human responds to the AI,” she says.

What’s new: A pair of studies published this week looked at how much a person’s expectations about AI affected how likely they were to trust it and take its advice.

A strong placebo effect works to shape what people think of a particular AI tool, one study revealed.

  • Participants who were about to interact with a mental health chatbot were told the bot was caring, was manipulative or was neither and had no motive.
  • After using the chatbot, which was built on OpenAI’s generative AI model GPT-3, most people primed to believe the AI was caring said it was. Participants who’d been told the AI had no motives said it didn’t. But they were all interacting with the same chatbot.
  • Only 24% of the participants who were told the AI was trying to manipulate them into buying its service said they perceived it as malicious.
  • That may be a reflection of humans’ positivity bias and that they may “want to evaluate [the AI] for themselves,” says Pat Pataranutaporn, a researcher at the MIT Media Lab and co-author of the new study published this week in Nature Machine Intelligence.
  • Participants who were told the chatbot was benevolent also said they perceived it to be more trustworthy, empathetic and effective than participants primed to believe it was neutral or manipulative.
  • The AI placebo effect has been described before, including in a study in which people playing a word puzzle game rated it better when told AI was adjusting its difficulty level (it wasn’t — there wasn’t an AI involved).

The intrigue: It wasn’t just people’s perceptions that were affected by their expectations.

  • Analyzing the words in conversations people had with the chatbot, the researchers found that those who were told the AI was caring had increasingly positive conversations with it, whereas interactions became more negative among people who’d been told it was trying to manipulate them.

For some tasks, AI is perceived to be more objective and trustworthy — a perception that may cause people to prefer an algorithm’s advice.

  • In another study published this week in Scientific Reports, researchers found that preference can lead people to inherit an AI’s errors.
  • Psychologists Lucía Vicente and Helena Matute from Deusto University in Bilbao, Spain, found that participants asked to perform a simulated medical diagnosis task with the help of an AI followed the AI’s advice, even when it was mistaken — and kept making those mistakes even after the AI was taken away.
  • “It is going to be very important that humans working with AI have not only the knowledge of how AI works … but also the time to oppose the advice of the AI — and the motivation to do it,” Matute says.

Yes, but: Both studies looked at one-off interactions between people and AI, and it’s unclear whether using a system day in and day out will change the effect the researchers describe.

The big picture: How people are introduced to AI and how it is depicted in pop culture, marketed and branded can be powerful determiners of how AI is adopted and ultimately valued, researchers said.

  • In previous work, the MIT Media Lab team showed that if someone has an “AI-generated virtual instructor” that looks like someone they admire, they are more motivated to learn and more likely to say the AI is a good teacher (even though their test scores didn’t necessarily improve).
  • Meta last month announced it was launching AI characters played by celebrities — like tennis star Naomi Osaka as an “anime-obsessed Sailor Senshi in training” and Tom Brady as a “wisecracking sports debater who pulls no punches.”
  • “There are just a lot of implications that come with the interface of an AI — how it’s portrayed, how it interacts with you, what it looks like, how it talks to you, what voice it has, what language it uses,” Maes says.

The placebo effect will likely be a “big challenge in the future,” says Thomas Kosch, who studies human-AI interaction at Humboldt University in Berlin. For example, someone might be more careless when they think an AI is helping them drive a car, he says. His own work also shows people take more risks when they think they are supported by an AI.

What to watch: The studies point to the possible power of priming people to have lower expectations of AI — but maybe only so far.

  • A practical lesson is “we should err on the side of portraying these systems and talking about these systems as not completely correct or accurate … so that people come with an attitude of ‘I’m going to make up my own mind about this system,'” Maes says.

Source: Placebo effect shapes how we see AI

News organizations blocking OpenAI

Ben Welsh has a running list of the news organizations blocking OpenAI crawlers:

In total, 532 of 1,147 news publishers surveyed by the homepages.news archive have instructed OpenAI, Google AI or the non-profit Common Crawl to stop scanning their sites, which amounts to 46.4% of the sample.

The three organizations systematically crawl websites to gather the information that fuels generative chatbots like OpenAI’s ChatGPT and Google’s Bard. Publishers can request that their content be excluded by opting out via the robots.txt convention.
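For reference, the robots.txt opt-out in question is a short plain-text file at the site root. The user-agent tokens below are the ones the three crawlers have publicly documented (GPTBot for OpenAI, Google-Extended for Google’s AI training, CCBot for Common Crawl):

```text
# robots.txt — ask AI training crawlers not to scan this site
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /
```

Note that robots.txt is a voluntary convention: it signals a request rather than technically enforcing a block.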

Source: News organizations blocking OpenAI

Which reduces the value of AIs. The web used to be open to all, with information you could use as you liked. News organisations often fail to see the value in AI and fear that their jobs will be taken by AIs rather than enhanced by them. So they try to wreck the AIs, a bit like saboteurs and Luddites. A real impediment to growth.

Museum Collection Of Historical TV Culture At Risk Due To Copyright Takedowns

[…]

the informal nature of their collections means that they are exposed to serious threats from copyright, as the recent experience of The Museum of Classic Chicago Television makes clear. The Museum explains why it exists:

The Museum of Classic Chicago Television (FuzzyMemoriesTV) is constantly searching out vintage material on old videotapes saved in basements or attics, or sold at flea markets, garage sales, estate sales and everywhere in between. Some of it would be completely lost to history if it were not for our efforts. The local TV stations have, for the most part, regrettably done a poor job at preserving their history. Tapes were very expensive 25-30 years ago and there also was a lack of vision on the importance of preserving this material back then. If the material does not exist on a studio master tape, what is to be done? Do we simply disregard the thousands of off-air recordings that still exist holding precious “lost” material? We believe this would be a tragic mistake.

Dozens of TV professionals and private individuals have donated to the museum their personal copies of old TV programmes made in the 1970s and 1980s, many of which include rare and otherwise unavailable TV advertisements that were shown as part of the broadcasts. In addition to the main Museum of Classic Chicago Television site, there is also a YouTube channel with videos. However, as TorrentFreak recounts, the entire channel was under threat because of copyright takedown requests:

In a series of emails starting Friday and continuing over the weekend, [the museum’s president and lead curator] Klein began by explaining his team’s predicament, one that TorrentFreak has heard time and again over the past few years. Acting on behalf of a copyright owner, in this case Sony, India-based anti-piracy company Markscan hit the MCCTv channel with a flurry of copyright claims. If these cannot be resolved, the entire project may disappear.

One issue is that Klein was unable to contact Markscan to resolve the problem directly. He is quoted by TorrentFreak as saying: “I just need to reach a live human being to try to resolve this without copyright strikes. I am willing to remove the material manually to get the strikes reversed.”

Once the copyright enforcement machine is engaged, it can be hard to stop. As Walled Culture the book (free digital versions available) recounts, there are effectively no penalties for unreasonable or even outright false claims. The playing field is tipped entirely in favour of the copyright world, and anyone who is targeted by one of the takedown mechanisms is unlikely to be able to do much to contest it unless they have good lawyers and deep pockets. Fortunately, in this case, an Ars Technica article on the issue reported that:

Sony’s copyright office emailed Klein after this article was published, saying it would “inform MarkScan to request retractions for the notices issued in response to the 27 full-length episode postings of Bewitched” in exchange for “assurances from you that you or the Fuzzy Memories TV Channel will not post or re-post any infringing versions from Bewitched or other content owned or distributed by SPE [Sony Pictures Entertainment] companies.”

That “concession” by Sony highlights the main problem here: the fact that a group of public-spirited individuals trying to preserve unique digital artefacts must live with the constant threat of copyright companies taking action against them. Moreover, there is also the likelihood that some of their holdings will have to be deleted as a result of those legal threats, despite the material’s possible cultural value or the fact that it is the only surviving copy. No one wins in this situation, but the purity of copyright must be preserved at all costs, it seems.

[…]

Source: Museum Collection Of Historical TV Culture At Risk Due To Copyright Takedowns | Techdirt

MGM Resorts cyberattack to cost $100 million

MGM Resorts has admitted that the cyberattack it suffered in September will likely cost the company at least $100 million.

The effects of the attack are expected to make a substantial dent in the entertainment giant’s third-quarter earnings and to carry over into its Q4, although that impact is predicted to be “minimal.”

According to an 8K filing with the Securities and Exchange Commission (SEC) on Thursday, MGM Resorts said less than $10 million has also been spent on “one-time expenses” such as legal and consultancy fees, and the cost of bringing in third-party experts to handle the incident response.

These are the current estimates for the total costs incurred by the attack, which put slot machines to the sword and borked MGM’s room-booking systems, among other things, but the company admitted the full scope of costs has yet to be determined.

The good news is that MGM expects its cyber insurance policy to cover the financial impact of the attack.

The company also expects to fill its rooms to near-normal levels starting this month. September’s occupancy levels took a hit – 88 percent full compared to 93 percent at the same time last year – but October’s occupancy is forecast to be down just 1 percent and November is poised to deliver record numbers thanks to the Las Vegas Formula 1 event.

[…]

MGM Resorts confirmed personal data belonging to customers had been stolen during the course of the intrusion. Those who became customers before March 2019 may be affected.

Stolen data includes social security numbers, driving license numbers, passport numbers, and contact details such as names, phone numbers, email addresses, postal addresses, as well as gender and dates of birth.

At this time, there is no evidence to suggest that financial information including bank numbers and cards were compromised, and passwords are also believed to be unaffected.

[…]

Adam Marrè, CISO at cybersecurity outfit Arctic Wolf, told The Register: “When looking at the total cost of a breach, such as the one which impacted MGM, many factors can be taken into account. This can include a combination of revenue lost for downtime, extra hours worked for remediation, tools that may have been purchased to deal with the issue, outside incident response help, setting up and operating a hotline for affected people, fixing affected equipment, purchasing credit monitoring, and sending physical letters to victims. Even hiring an outside PR firm to help with crisis messaging. When you add up everything, $100 million does not sound like an unrealistic number for an organization like MGM.

[…]

Source: MGM Resorts cyberattack to cost $100 million • The Register

23andMe DNA site scraping incident leaked data on 1.3 million users

Genetic testing giant 23andMe confirmed that a data scraping incident resulted in hackers gaining access to sensitive user information and selling it on the dark web.

The information of nearly 7 million 23andMe users was offered for sale on a cybercriminal forum this week. The information included origin estimation, phenotype, health information, photos, identification data and more. 23andMe processes saliva samples submitted by customers to determine their ancestry.

When asked about the post, the company initially denied that the information was legitimate, calling it a “misleading claim” in a statement to Recorded Future News.

The company later said it was aware that certain 23andMe customer profile information was compiled through unauthorized access to individual accounts that were signed up for the DNA Relative feature — which allows users to opt in for the company to show them potential matches for relatives.

[…]

When pressed on how compromising a handful of user accounts would give someone access to millions of users, the spokesperson said the company does not believe the threat actor had access to all of the accounts but rather gained unauthorized entry to a much smaller number of 23andMe accounts and scraped data from their DNA Relative matches.

The spokesperson declined to confirm the specific number of customer accounts affected.

Anyone who has opted into DNA Relatives can view basic profile information of others who make their profiles visible to DNA Relative participants, a spokesperson said.

Users who are genetically related can access ancestry information, which is made clear to users when they create their DNA Relatives profile, the spokesperson added.

[…]

A researcher approached Recorded Future News after examining the leaked database and found that much of it looked real. The researcher spoke on condition of anonymity because he found the information of his wife and several of her family members in the leaked data set. He also found other acquaintances and verified that their information was accurate.

The researcher downloaded two files from the BreachForums post and found that one had information on 1 million 23andMe users of Ashkenazi heritage. The other file included data on more than 300,000 users of Chinese heritage.

The data included profile and account ID numbers, names, gender, birth year, maternal and paternal genetic markers, ancestral heritage results, and data on whether or not each user has opted into 23andme’s health data.

“It appears the information has been scraped from user profiles which are only supposed to be shared between DNA Matches. So although this particular leak does not contain genomic sequencing data, it’s still data that should not be available to the public,” the researcher said.

“23andme seems to think this isn’t a big deal. They keep telling me that if I don’t want this info to be shared, I should not opt into the DNA relatives feature. But that’s dismissing the importance of this data which should only be viewable to DNA relatives, not the public. And the fact that someone was able to scrape this data from 1.3 million users is concerning. The hacker allegedly has more data that they have not released yet.”

The researcher added that he discovered another issue whereby someone could enter a 23andMe profile ID, like the ones included in the leaked data set, into a URL and see someone’s profile.

The data available this way includes only profile photos, names, birth years and locations, not test results.

“It’s very concerning that 23andme has such a big loophole in their website design and security where they are just freely exposing peoples info just by typing a profile ID into the URL. Especially for a website that deals with people’s genetic data and personal information. What a botch job by the company,” the researcher said.
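The loophole the researcher describes is a classic insecure direct object reference: the server returns profile data based solely on the ID in the URL, without checking whether the requester is authorized to see it. Below is a minimal sketch of the missing check; the profile IDs, names and data model are hypothetical, invented for illustration, and are not 23andMe’s actual API.

```python
# Hypothetical sketch of the missing server-side authorization check.
# All IDs and data here are made up for illustration.

PROFILES = {
    "p123": {"name": "Alice", "birth_year": 1980},
}

# Pairs (requester, profile) representing confirmed DNA Relative matches.
MATCHES = {("p456", "p123")}

def get_profile(requester_id: str, profile_id: str):
    """Serve a profile only to its owner or a confirmed DNA Relative match.

    The vulnerable behaviour described above is equivalent to skipping
    this check and returning the profile to anyone who knows the ID.
    """
    is_owner = requester_id == profile_id
    is_match = (requester_id, profile_id) in MATCHES
    if not (is_owner or is_match):
        return None  # unauthorized: do not leak the profile
    return PROFILES.get(profile_id)
```

With the check in place, a confirmed match ("p456") can fetch the profile, while an arbitrary requester gets nothing even if they guessed a valid profile ID.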

[…]

The security policies of genetic testing companies like 23andMe have faced scrutiny from regulators in recent weeks. Three weeks ago, genetic testing firm 1Health.io agreed to pay the Federal Trade Commission (FTC) a $75,000 fine to resolve allegations that it failed to secure sensitive genetic and health data, retroactively overhauled its privacy policy without notifying and obtaining consent from customers whose data it had obtained, and tricked customers about their ability to delete their data.

Source: 23andMe scraping incident leaked data on 1.3 million users of Ashkenazi and Chinese descent

ICE, CBP, Secret Service All Illegally Used Smartphone Location Data

In a bombshell report, an oversight body for the Department of Homeland Security (DHS) found that Immigration and Customs Enforcement (ICE), Customs and Border Protection (CBP), and the Secret Service all broke the law while using location data harvested from ordinary apps installed on smartphones. In one instance, a CBP official also inappropriately used the technology to track the location of coworkers with no investigative purpose.

For years U.S. government agencies have been buying access to location data through commercial vendors, a practice which critics say skirts the Fourth Amendment requirement of a warrant. During that time, the agencies have typically refused to publicly explain the legal basis on which they based their purchase and use of the data. Now, the report shows that three of the main customers of commercial location data broke the law while doing so, and didn’t have any supervisory review to ensure proper use of the technology. The report also recommends that ICE stop all use of such data until it obtains the necessary approvals, a request that ICE has refused.

The report, titled “CBP, ICE, and Secret Service Did Not Adhere to Privacy Policies or Develop Sufficient Policies Before Procuring and Using Commercial Telemetry Data,” is dated September 28, 2023, and comes from Joseph V. Cuffari, the Inspector General for DHS. The report was originally marked as “law enforcement sensitive,” but the Inspector General has now released it publicly.

Source: ICE, CBP, Secret Service All Illegally Used Smartphone Location Data – Slashdot

EPIC urges FTC to investigate Grindr’s data practices

On Wednesday, EPIC filed a complaint with the US government watchdog over Grindr’s “apparent failure to safeguard users’ sensitive personal data.” This includes both present and past users who have since deleted their accounts, according to the complaint. Despite promising in its privacy policy to delete personal info if customers remove their account, Grindr allegedly retained and disclosed some of this data to third parties.

Considering that people trust the dating app with a ton of very sensitive information — this includes their sexual preferences, self-reported HIV status, chat history, photos including nudes, and location information — “learning that Grindr breaks the promises it makes to users would likely affect a consumer’s decision regarding whether to use Grindr,” the complaint states [PDF].

Grindr, for its part, says privacy is of the utmost importance to it, and that these “unfounded” claims stem from allegations made by a disgruntled ex-worker. So that’s all right then.

“Privacy is a top priority for Grindr and the LGBTQ+ community we serve, and we have adopted industry-leading privacy practices and tools to protect and empower our users,” a spokesperson told The Register.

“We are sorry that the former employee behind the unfounded allegations in today’s request is dissatisfied with his departure from the company; we wish him the best.”

The former employee in question is Grindr’s ex-chief privacy officer Ron De Jesus. In June, De Jesus filed a wrongful termination lawsuit [PDF] against his former bosses that also accused the dating app of violating privacy laws.

According to the lawsuit, De Jesus was “leading the charge to keep Grindr compliant with state, national, and international laws” after Norway’s data protection agency fined the dating app biz about $12 million in December 2021 and a Wall Street Journal article in May 2022 accused the application developer of selling users’ location data.

But despite De Jesus’ attempts, “Grindr placed profit over privacy and got rid of Mr De Jesus for his efforts and reports,” the lawsuit alleges.

EPIC’s complaint, which highlights De Jesus’ allegations, asks the FTC to look into potential violations of privacy law, including the app’s data retention and disclosure practices.

It also accuses Grindr of violating the Health Breach Notification Rule (HBNR). The dating app is subject to the HBNR because it asks users to self-report health data including HIV status, last-tested date, and vaccination status. By sharing these records with third parties and retaining health data after users deleted their accounts, Grindr allegedly breached the HBNR, EPIC says.

The privacy advocates at EPIC want the FTC to make Grindr comply with the law and stop any “unlawful or impermissible” data retention practices. Additionally, the complaint calls on the federal agency to force Grindr to notify any users whose data was misused, and to impose fines against the dating app for any violations of the HBNR.

Source: EPIC urges FTC to investigate Grindr’s data practices • The Register

Publishing A Book Means No Longer Having Control Over How Others Feel About It, Or How They’re Inspired By It. And That Includes AI.

[…]

I completely understand why some authors are extremely upset about finding out that their works were used to train AI. It feels wrong. It feels exploitative. (I do not understand their lawsuits, because I think they’re very much confused about how copyright law works.)

But, to me, many of the complaints about this amount to a discussion similar to ones we’ve had in the past regarding concerns about what would happen if works were released without copyright and someone “bad” reused them. This sort of thought experiment is silly, because once a work is released and enters the messy real world, it’s entirely possible for things to happen that the original creator disagrees with or hates. Someone can interpret the work in ridiculous ways. Or it can inspire bad people to do bad things. Or any of a long list of other possibilities.

The original author has the right to speak up about the bad things, or to denounce the bad people, but the simple fact is that once you’ve released a work into the world, the original author no longer has control over how that work is used and interpreted by the world. Releasing a work into the world is an act of losing control over that work and what others can do in response to it. Or how or why others are inspired by it.

But, when it comes to the AI fights, many are insisting that they want to do exactly that around AI, and much of this came to a head recently when The Atlantic released a tool that allowed anyone to search to see which authors were included in the Books3 dataset (one of multiple collections of books that have been used to train AI). This led to a lot of people (both authors and non-authors) screaming about the evils of AI, and about how wrong it was that such books were included.

But, again, that’s the nature of releasing a work to the public. People read it. Machines might also read it. And they might use what they learn in that work to do something else. And you might like that and you might not, but it’s not really your call.

That’s why I was happy to see Ian Bogost publish an article explaining why he’s happy that his books were found in Books3, saying what those two other authors I spoke to wouldn’t say publicly. Ian is getting screamed at all over social media for this article, with most of it apparently based on the title and not on the substance. But it’s worth reading.

Whether or not Meta’s behavior amounts to infringement is a matter for the courts to decide. Permission is a different matter. One of the facts (and pleasures) of authorship is that one’s work will be used in unpredictable ways. The philosopher Jacques Derrida liked to talk about “dissemination,” which I take to mean that, like a plant releasing its seed, an author separates from their published work. Their readers (or viewers, or listeners) not only can but must make sense of that work in different contexts. A retiree cracks a Haruki Murakami novel recommended by a grandchild. A high-school kid skims Shakespeare for a class. My mother’s tree trimmer reads my book on play at her suggestion. A lack of permission underlies all of these uses, as it underlies influence in general: When successful, art exceeds its creator’s plans.

But internet culture recasts permission as a moral right. Many authors are online, and they can tell you if and when you’re wrong about their work. Also online are swarms of fans who will evangelize their received ideas of what a book, a movie, or an album really means and snuff out the “wrong” accounts. The Books3 imbroglio reflects the same impulse to believe that some interpretations of a work are out of bounds.

Perhaps Meta is an unappealing reader. Perhaps chopping prose into tokens is not how I would like to be read. But then, who am I to say what my work is good for, how it might benefit someone—even a near-trillion-dollar company? To bemoan this one unexpected use for my writing is to undermine all of the other unexpected uses for it. Speaking as a writer, that makes me feel bad.

More importantly, Bogost notes that the entire point of Books3 originally was to make sure that AI wasn’t just controlled by corporate juggernauts:

The Books3 database was itself uploaded in resistance to the corporate juggernauts. The person who first posted the repository has described it as the only way for open-source, grassroots AI projects to compete with huge commercial enterprises. He was trying to return some control of the future to ordinary people, including book authors. In the meantime, Meta contends that the next generation of its AI model—which may or may not still include Books3 in its training data—is “free for research and commercial use,” a statement that demands scrutiny but also complicates this saga. So does the fact that hours after The Atlantic published a search tool for Books3, one writer distributed a link that allows you to access the feature without subscribing to this magazine. In other words: a free way for people to be outraged about people getting writers’ work for free.

I’m not sure what I make of all this, as a citizen of the future no less than as a book author. Theft is an original sin of the internet. Sometimes we call it piracy (when software is uploaded to USENET, or books to Books3); other times it’s seen as innovation (when Google processed and indexed the entire internet without permission) or even liberation. AI merely iterates this ambiguity. I’m having trouble drawing any novel or definitive conclusions about the Books3 story based on the day-old knowledge that some of my writing, along with trillions more chunks of words from, perhaps, Amazon reviews and Reddit grouses, have made their way into an AI training set.

I get that it feels bad that your works are being used in ways you disapprove of, but that is the nature of releasing something into the world. And the underlying point of the Books3 database is to spread access to information to everyone. And that’s a good thing that should be supported, in the spirit of folks like Aaron Swartz.

It’s the same reason why, even as lots of news sites are proactively blocking AI scanning bots, I’m actually hoping that more of them will scan and use Techdirt’s words to do more and to be better. The more information shared, the more we can do with it, and that’s a good thing.

I understand the underlying concerns, but that’s just part of what happens when you release a work to the world. Part of releasing something into the world is coming to terms with the fact that you no longer own how people will read it or be inspired by it, or what lessons they will take from it.

 

Source: Publishing A Book Means No Longer Having Control Over How Others Feel About It, Or How They’re Inspired By It. And That Includes AI. | Techdirt

JuMBOs: planet-like objects – but without stars to orbit, so not planets by definition

A team of astronomers have detected over 500 planet-like objects in the inner Orion Nebula and the Trapezium Cluster that they believe could shake up the very definition of a planet.

The 4-light-year-wide Trapezium Cluster sits at the heart of the Orion Nebula, or Messier 42, about 1,400 light-years from Earth. The cluster is filled with young stars, which make their surrounding gas and dust glow with infrared light.

The Webb Space Telescope’s Near-Infrared Camera (NIRCam) observed the nebula at short and long wavelengths for nearly 35 hours between September 26, 2022, and October 2, 2022, giving researchers a remarkably sharp look at relatively small (meaning Jupiter-sized and smaller) isolated objects in the nebula. These NIRCam images are some of the largest mosaics from the telescope to date, according to a European Space Agency release. Though they cannot be hosted in all their resolved glory on this site, you can check them out on the ESASky application.

A planet, per NASA, is an object that orbits a star and is large enough to have taken on a spherical shape and to have cleared objects of comparable size from its orbit. According to the team, the Jupiter-mass binary objects (or JuMBOs) are massive enough to qualify as planetary but don’t have a star they’re clearly orbiting. Using Webb, the researchers also observed low-temperature planetary-mass objects (or PMOs). The team’s results have yet to be peer-reviewed but are currently hosted on the preprint server arXiv.

[…]

In the preprint, the team describes 540 planetary mass candidates, with the smallest masses clocking in at about 0.6 times the mass of Jupiter. According to The Guardian, analysis revealed steam and methane in the JuMBOs’ atmospheres. The researchers also found that 9% of those objects are in wide binaries, with separations of at least 100 times the distance between Earth and the Sun (100 astronomical units). That finding is perplexing, because objects of JuMBOs’ masses typically orbit a star. In other words, the JuMBOs look decidedly planet-like but lack a key characteristic of planets.

[…]

So what are the JuMBOs? It’s still not clear whether the objects form like planets—by accreting the gas and dust from a protoplanetary disk following a star’s formation—or more like the stars themselves. The Trapezium Cluster’s stars are quite young; according to the STScI release, if our solar system were a middle-aged person, the cluster’s stars would be just three or four days old. It’s possible that objects like the JuMBOs are actually common in the universe, but Webb is the first observatory that has the ability to pick out the individual objects.
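As a sanity check on that age analogy, assume the cluster’s stars are roughly a million years old (a typical figure for such young clusters; the exact number is an assumption here, not quoted in the article), the solar system 4.6 billion years old, and a “middle-aged person” 40:

```python
# Rough arithmetic behind the age analogy. The ~1-million-year stellar age
# is an assumed typical value for the Trapezium Cluster's stars, not a
# figure quoted in this article.
solar_system_yr = 4.6e9   # age of the solar system, in years
cluster_star_yr = 1e6     # assumed age of the cluster's stars, in years
person_yr = 40            # a "middle-aged person"

# Scale the stellar age by the same ratio that maps 4.6 Gyr onto 40 years,
# then convert to days.
equivalent_days = cluster_star_yr / solar_system_yr * person_yr * 365
print(round(equivalent_days, 1))  # 3.2, i.e. "three or four days old"
```

The analogy checks out: about 3 days on a 40-year scale.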

[…]

Source: Quasi-Planets Called JuMBOs Are Bopping Around in Space

Arm patches Mali GPU driver bug exploited by spyware

Commercial spyware has exploited a security hole in Arm’s Mali GPU drivers to compromise some people’s devices, according to Google today.

These graphics processors are used in a ton of gear, from phones and tablets to laptops and cars, so the kernel-level vulnerability may be present in countless devices. This includes Android handsets made by Google, Samsung, and others.

The vulnerable drivers are paired with Arm’s Midgard (launched in 2010), Bifrost (2016), Valhall (2019), and fifth generation Mali GPUs (2023), so we imagine this buggy code will be in millions of systems.

On Monday, Arm issued an advisory for the flaw, which is tracked as CVE-2023-4211. This is a use-after-free bug affecting Midgard driver versions r12p0 to r32p0; Bifrost versions r0p0 to r42p0; Valhall versions r19p0 to r42p0; and Arm 5th Gen GPU Architecture versions r41p0 to r42p0.
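Those version ranges are fiddly to compare by eye. Here is a small helper, a sketch that hard-codes the ranges from Arm’s advisory; the `rNNpNN` parsing scheme is inferred from the version strings themselves and is an assumption, not something the advisory spells out:

```python
# Check a Mali driver version string against the affected ranges from
# Arm's advisory for CVE-2023-4211. Ranges are inclusive at both ends.
import re

AFFECTED = {
    "Midgard":     ("r12p0", "r32p0"),
    "Bifrost":     ("r0p0",  "r42p0"),
    "Valhall":     ("r19p0", "r42p0"),
    "Arm 5th Gen": ("r41p0", "r42p0"),
}

def parse(version):
    """Turn a string like 'r40p1' into a comparable (40, 1) tuple."""
    m = re.fullmatch(r"r(\d+)p(\d+)", version)
    if m is None:
        raise ValueError(f"unrecognized version string: {version!r}")
    return int(m.group(1)), int(m.group(2))

def is_affected(family, version):
    low, high = AFFECTED[family]
    return parse(low) <= parse(version) <= parse(high)
```

For example, Bifrost r42p0 falls in the affected range while the fixed r43p0 does not.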

We’re told Arm has corrected the security blunder in its drivers for Bifrost to fifth-gen. “This issue is fixed in Bifrost, Valhall, and Arm 5th Gen GPU Architecture Kernel Driver r43p0,” the advisory stated. “Users are recommended to upgrade if they are impacted by this issue. Please contact Arm support for Midgard GPUs.”

We note version r43p0 of Arm’s open source Mali drivers for Bifrost through fifth-gen was released in March. Midgard, it appears, has yet to publicly receive that version, hence the advice to contact Arm for those GPUs. We’ve asked Arm for more details.

What this means for the vast majority of people is: look out for operating system or manufacturer updates that include Mali GPU driver fixes, and install them to close this security hole – or look up the open source drivers and apply updates yourself if you’re into that. Your equipment may already be patched, given that the fix was released in late March and details of the bug are only just coming out. If you’re a device maker, you should be rolling out patches to customers.

“A local non-privileged user can make improper GPU memory processing operations to gain access to already freed memory,” is how Arm described the bug. That, it seems, is enough to allow spyware to take hold of a targeted vulnerable device.
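CVE-2023-4211 itself lives in C kernel-driver code, but the use-after-free class Arm describes can be modeled in a toy sketch; every name below is invented for illustration. The pattern: backing memory is freed, yet a stale handle survives, and any operation that skips the lifetime check reaches memory the allocator may have handed to someone else.

```python
# Toy model of the use-after-free bug class, NOT the actual Mali driver
# code. A "driver" frees a buffer's backing memory; the handle remains.

class GpuBuffer:
    def __init__(self):
        self.data = bytearray(64)   # stand-in for driver-managed GPU memory
        self.freed = False

    def release(self):
        self.data = None            # memory returned to the allocator
        self.freed = True           # a correct driver records the free...

    def read(self):
        # ...and checks it on every later operation. Omitting this check
        # is the vulnerable pattern: a stale handle would dereference
        # already-freed memory.
        if self.freed:
            raise RuntimeError("rejected: buffer already freed")
        return self.data[0]

buf = GpuBuffer()
buf.release()
try:
    buf.read()                      # the lifetime check stops the stale access
except RuntimeError as err:
    print(err)
```

In the real driver the missing check let a non-privileged process reach freed kernel memory, which is what makes the bug a foothold for spyware.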

According to Arm, there is “evidence that this vulnerability may be under limited, targeted exploitation.” We’ve received confirmation from Google, whose Threat Analysis Group’s (TAG) Maddie Stone and Google Project Zero’s Jann Horn found and reported the vulnerability to the chip designer, that this targeted exploitation has indeed taken place.

“At this time, TAG can confirm the CVE was used in the wild by a commercial surveillance vendor,” a TAG spokesperson told The Register. “More technical details will be available at a later date, aligning with our vulnerability disclosure policy.”

[…]

 

Source: Arm patches Mali GPU driver bug exploited by spyware • The Register

Amazon Used Secret ‘Project Nessie’ Algorithm To Raise Prices

Amazon used an algorithm code-named “Project Nessie” to test how much it could raise prices in a way that competitors would follow, according to redacted portions of the Federal Trade Commission’s monopoly lawsuit against the company. From a report: The algorithm helped Amazon improve its profit on items across shopping categories, and because of the power the company has in e-commerce, led competitors to raise their prices and charge customers more, according to people familiar with the allegations in the complaint. In instances where competitors didn’t raise their prices to Amazon’s level, the algorithm — which is no longer in use — automatically returned the item to its normal price point.

The company also used Nessie on what employees saw as a promotional spiral, where Amazon would match a discounted price from a competitor, such as Target.com, and other competitors would follow, lowering their prices. When Target ended its sale, Amazon and the other competitors would remain locked at the low price because they were still matching each other, according to former employees who worked on the algorithm and pricing team. The algorithm helped Amazon recoup money and improve margins. The FTC’s lawsuit redacted an estimate of how much it alleges the practice “extracted from American households,” and says the algorithm also helped the company generate a redacted amount of “excess profit.” Amazon made more than $1 billion in revenue through use of the algorithm, according to a person familiar with the matter. Amazon stopped using the algorithm in 2019, some of the people said. It wasn’t clear why the company stopped using it.
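From the complaint’s description, the core loop is simple to sketch. This is a hypothetical reconstruction, not Amazon’s actual code: the function name, the 5% trial increase, and the rounding to cents are all invented for illustration.

```python
# Hypothetical reconstruction of the pricing loop described in the FTC
# complaint: raise the price experimentally, keep it only if competitors
# follow, otherwise snap back to the normal price point.

def nessie_step(our_price, competitor_prices, trial_increase=0.05):
    """Return the price to charge after one probe of the market."""
    trial = round(our_price * (1 + trial_increase), 2)
    # ...time passes while competitors' match-the-market algorithms react...
    if min(competitor_prices) >= trial:
        return trial        # competitors followed: the higher price sticks
    return our_price        # nobody followed: revert to the normal price
```

In the alleged behavior, each successful probe ratchets the market up, since competitors matching Amazon makes the trial price the new baseline for the next probe.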

Source: Amazon Used Secret ‘Project Nessie’ Algorithm To Raise Prices – Slashdot

radio-browser.info – a huge list of online radio streams + apps that use the list

What can radio-browser do for you?

I want to listen to radio
Please have a look at the list of apps that use this service by clicking on “Apps” in the header bar. You can also just use the search field on this webpage to find streams you want to listen to. Maybe you want a list of the most clicked streams of this service?

I want to add a stream to the database
Just click “New station” and add the stream. This service is completely automatic. More information in the FAQ. Streams CANNOT be changed at the moment by users.

I am the owner of a stream
You can add your stream. Streams can only be changed at the moment by the owner. Please follow the tutorial if you want to change your stream.

I am an app developer
Have a look at the API documentation at api.radio-browser.info
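As a minimal sketch, a station search against the JSON endpoints documented there can be built like this; the `de1` hostname is one public mirror and is an assumption here (the docs recommend resolving a mirror dynamically):

```python
# Build a station-search URL for the radio-browser API. The endpoint path
# and parameter names follow the public docs at api.radio-browser.info;
# the specific mirror hostname is assumed for illustration.
import urllib.parse

BASE = "https://de1.api.radio-browser.info"

def search_url(name, limit=10):
    """URL for a JSON station search by name; the API returns a JSON array."""
    query = urllib.parse.urlencode({"name": name, "limit": limit})
    return f"{BASE}/json/stations/search?{query}"
```

Fetching that URL (e.g. with `urllib.request.urlopen`) returns a JSON list of station objects, including their stream URLs.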

Source: radio-browser.info