Universal Music sues AI start-up Anthropic for scraping song lyrics – will they come after you for having read the lyrics or memorised the song next?

Universal Music has filed a copyright infringement lawsuit against artificial intelligence start-up Anthropic, as the world’s largest music group battles against chatbots that churn out its artists’ lyrics.

Universal and two other music companies allege that Anthropic scrapes their songs without permission and uses them to generate “identical or nearly identical copies of those lyrics” via Claude, its rival to ChatGPT.

When Claude is asked for lyrics to the song “I Will Survive” by Gloria Gaynor, for example, it responds with “a nearly word-for-word copy of those lyrics,” Universal, Concord, and ABKCO said in a filing with a US court in Nashville, Tennessee.

“This copyrighted material is not free for the taking simply because it can be found on the Internet,” the music companies said, while claiming that Anthropic had “never even attempted” to license their copyrighted work.

[…]

Universal earlier this year asked Spotify and other streaming services to cut off access to its music catalogue for developers using it to train AI technology.

Source: Universal Music sues AI start-up Anthropic for scraping song lyrics | Ars Technica

So don’t think about memorising or even listening to copyrighted material from them because apparently they will come after you with the mighty and crazy arm of the law!

IBM chip speeds up AI by combining processing and memory in the core

 

IBM’s massive NorthPole processor chip eliminates the need to frequently access external memory, and so performs tasks such as image recognition faster than existing architectures do — while consuming vastly less power.

“Its energy efficiency is just mind-blowing,” says Damien Querlioz, a nanoelectronics researcher at the University of Paris-Saclay in Palaiseau. The work, published in Science, shows that computing and memory can be integrated on a large scale, he says. “I feel the paper will shake the common thinking in computer architecture.”

NorthPole runs neural networks: multi-layered arrays of simple computational units programmed to recognize patterns in data. A bottom layer takes in data, such as the pixels in an image; each successive layer detects patterns of increasing complexity and passes information on to the next layer. The top layer produces an output that, for example, can express how likely an image is to contain a cat, a car or other objects.
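
To make that layered pattern concrete, here is a minimal sketch in plain numpy (my own illustration, not IBM’s code or the NorthPole programming model): pixels go in at the bottom, each layer transforms the previous layer’s output, and the top layer produces class scores. The layer sizes and the three classes are arbitrary examples.

```python
# Minimal sketch (not IBM's code): a tiny feed-forward network showing the
# layer-by-layer pattern described above -- pixels in, class scores out.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical sizes: a 32x32 grayscale image, two hidden layers, 3 classes
# (say cat / car / other).
layer_sizes = [32 * 32, 256, 64, 3]
weights = [rng.normal(0, 0.05, (m, n)) for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(pixels):
    """Each layer transforms the previous layer's output; the top layer
    produces class scores."""
    x = pixels
    for W, b in zip(weights[:-1], biases[:-1]):
        x = relu(x @ W + b)                           # successively higher-level patterns
    return softmax(x @ weights[-1] + biases[-1])      # e.g. P(cat), P(car), P(other)

image = rng.random(32 * 32)                           # stand-in for real pixel data
print(forward(image))                                 # three probabilities summing to 1
```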

[…]

NorthPole is made of 256 computing units, or cores, each of which contains its own memory. “You’re mitigating the Von Neumann bottleneck within a core,” says Modha, who is IBM’s chief scientist for brain-inspired computing at the company’s Almaden research centre in San Jose.

The cores are wired together in a network inspired by the white-matter connections between parts of the human cerebral cortex, Modha says. This and other design principles — most of which existed before but had never been combined in one chip — enable NorthPole to beat existing AI machines by a substantial margin in standard benchmark tests of image recognition. It also uses one-fifth of the energy of state-of-the-art AI chips, despite not using the most recent and most miniaturized manufacturing processes. If the NorthPole design were implemented with the most up-to-date manufacturing process, its efficiency would be 25 times better than that of current designs, the authors estimate.

[…]

NorthPole brings memory units as physically close as possible to the computing elements in the core. Elsewhere, researchers have been developing more-radical innovations using new materials and manufacturing processes. These enable the memory units themselves to perform calculations, which in principle could boost both speed and efficiency even further.

Another chip, described last month, does in-memory calculations using memristors, circuit elements able to switch between being a resistor and a conductor. “Both approaches, IBM’s and ours, hold promise in mitigating latency and reducing the energy costs associated with data transfers.”

[…]

Another approach, developed by several teams — including one at a separate IBM lab in Zurich, Switzerland — stores information by changing a circuit element’s crystal structure. It remains to be seen whether these newer approaches can be scaled up economically.

Source: ‘Mind-blowing’ IBM chip speeds up AI

Google’s AI stoplight program leads to fewer stops, less emissions

It’s been two years since Google first debuted Project Green Light, a novel means of addressing the street-level pollution caused by vehicles idling at stop lights.

[…]

Green Light uses machine learning systems to comb through Maps data to calculate the amount of traffic congestion present at a given light, as well as the average wait times of vehicles stopped there.
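
Google hasn’t published Green Light’s actual pipeline, but the kind of aggregate statistics it describes (per-intersection congestion and average wait times) can be illustrated with a toy pandas sketch over hypothetical trip records; the column names and numbers below are made up.

```python
# Toy illustration only -- Google has not published Green Light's pipeline.
# Given hypothetical per-vehicle records of time spent stopped at a light,
# compute average wait and a rough stop rate per intersection and hour.
import pandas as pd

trips = pd.DataFrame({
    "intersection_id": ["A", "A", "A", "B", "B"],
    "hour":            [8,   8,   9,   8,   8],
    "stopped":         [True, True, False, True, False],  # did the vehicle stop?
    "wait_seconds":    [45.0, 60.0, 0.0, 12.0, 0.0],
})

stats = (
    trips.groupby(["intersection_id", "hour"])
         .agg(avg_wait_s=("wait_seconds", "mean"),
              stop_rate=("stopped", "mean"),
              samples=("wait_seconds", "size"))
         .reset_index()
)
print(stats)  # per-light, per-hour wait times: the kind of signal an engineer
              # could use to retime a light
```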

[…]

When the program was first announced in 2021, it had only been pilot tested at four intersections in Israel in partnership with the Israel National Roads Company, but Google had reportedly observed a “10 to 20 percent reduction in fuel and intersection delay time” during those tests. The pilot program has grown since then, spreading to a dozen partner cities around the world, including Rio de Janeiro, Brazil; Manchester, England; and Jakarta, Indonesia.

“Today we’re happy to share that… we plan to scale to more cities in 2024,” Yael Maguire, Google VP of Geo Sustainability, told reporters during a pre-brief event last week. “Early numbers indicate a potential for us to see a 30 percent reduction in stops.”

[…]

“Our AI recommendations work with existing infrastructure and traffic systems,” Maguire continued. “City engineers are able to monitor the impact and see results within weeks.” Maguire also noted that the Manchester test reportedly saw emission levels and air quality improve by as much as 18 percent. The company also touted the efficacy of its Maps routing in reducing emissions, with Maguire pointing out that it had “helped prevent more than 2.4 million metric tons of carbon emissions — the equivalent of taking about 500,000 fuel-based cars off the road for an entire year.”

Source: Google’s AI stoplight program is now calming traffic in a dozen cities worldwide

Adobe previews AI upscaling to make blurry videos and GIFs look fresh

Adobe has developed an experimental AI-powered upscaling tool that greatly improves the quality of low-resolution GIFs and video footage. This isn’t a fully-fledged app or feature yet, and it’s not yet available for beta testing, but if the demonstrations seen by The Verge are anything to go by then it has some serious potential.

Adobe’s “Project Res-Up” uses diffusion-based upsampling technology (a class of generative AI that generates new data based on the data it’s trained on) to increase video resolution while simultaneously improving sharpness and detail.
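
Project Res-Up itself isn’t publicly available, but the same class of technique, diffusion-based upsampling, can be tried with open-source tools. Below is a rough sketch using the Stable Diffusion x4 upscaler from Hugging Face’s diffusers library, applied to a single video frame; the file paths and prompt are examples, and a CUDA GPU is assumed.

```python
# Sketch of the general technique (diffusion-based upsampling), not Adobe's
# Project Res-Up, using the open-source Stable Diffusion x4 upscaler.
# Assumes: pip install diffusers transformers accelerate pillow, and a CUDA GPU.
import torch
from diffusers import StableDiffusionUpscalePipeline
from PIL import Image

pipe = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16
).to("cuda")

low_res = Image.open("frame_480x360.png").convert("RGB")      # one video frame
upscaled = pipe(
    prompt="black and white 1940s film still, sharp detail",  # text guidance for the diffusion
    image=low_res,
    num_inference_steps=30,
).images[0]
upscaled.save("frame_upscaled.png")                           # roughly 4x the linear resolution
```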

In a side-by-side comparison that shows how the tool can upscale video resolution, Adobe took a clip from The Red House (1947) and upscaled it from 480 x 360 to 1280 x 960, increasing the total pixel count by 675 percent. The resulting footage was much sharper, with the AI removing most of the blurriness and even adding in new details like hair strands and highlights. The results still carried a slightly unnatural look (as many AI videos and images do) but given the low initial video quality, it’s still an impressive leap compared to the upscaling on Nvidia’s Shield TV or Microsoft’s Video Super Resolution.

The footage below provided by Adobe matches what I saw in the live demonstration:

[Left: original, Right: upscaled] Running this clip from The Red House (1947) through Project Res-Up removes most of the blur and makes details like the character’s hair and eyes much sharper. Image: The Red House (1947) / United Artists / Adobe

Another demonstration showed a video being cropped to focus on a baby elephant, with the upscaling tool similarly boosting the low-resolution crop and eradicating most of the blur while also adding little details like skin wrinkles. It really does look as though the tool is sharpening low-contrast details that can’t be seen in the original footage. Impressively, the artificial wrinkles move naturally with the animal without looking overly artificial. Adobe also showed Project Res-Up upscaling GIFs to breathe some new life into memes you haven’t used since the days of MySpace.

[Left: original, Right: upscaled] Additional texture has been applied to this baby elephant to make the upscaled footage appear more natural and lifelike. Image: Adobe

The project will be revealed during the “Sneaks” section of the Adobe Max event later today, which the creative software giant uses to showcase future technologies and ideas that could potentially join Adobe’s product lineup. That means you won’t be able to try out Project Res-Up on your old family videos (yet) but its capabilities could eventually make their way into popular editing apps like Adobe Premiere Pro or Express. Previous Adobe Sneaks have since been released as apps and features, like Adobe Fresco and Photoshop’s content-aware tool.

Source: Adobe previews AI upscaling to make blurry videos and GIFs look fresh – The Verge

New Fairy Circles Identified at Hundreds of Sites Worldwide

Round discs of dirt known as “fairy circles” mysteriously appear like polka dots on the ground, in patterns that can spread out for miles. The origins of this phenomenon have intrigued scientists for decades, and recent research indicates that the circles may be more widespread than previously thought.

Fairy circles in NamibRand Nature Reserve in Namibia; Photo: N. Juergens/AAAS/Science

Fairy circles have previously been sighted only in Southern Africa’s Namib Desert and the outback of Western Australia. A recently published study used artificial intelligence to identify vegetation patterns resembling fairy circles in hundreds of new locations across 15 countries on three continents.

Published in the journal Proceedings of the National Academy of Sciences, the new survey analyzed datasets containing high-resolution satellite images of drylands and arid ecosystems with scant rainfall from around the world.

Examining the new findings may help scientists understand fairy circles and the origins of their formation on a global scale. The researchers searched for patterns resembling fairy circles using a neural network, a type of AI that processes information in a manner similar to the human brain.

“The use of artificial intelligence based models on satellite imagery is the first time it has been done on a large scale to detect fairy-circle like patterns,” said lead study author Dr. Emilio Guirado, a data scientist with the Multidisciplinary Institute for Environmental Studies at the University of Alicante in Spain.

Drone flies over the NamibRand Nature Reserve; Photo: Dr. Stephan Getzin

The scientists first trained the neural network to recognize fairy circles by inputting more than 15,000 satellite images taken over Namibia and Australia. Then they fed the AI a dataset of satellite views of nearly 575,000 plots of land worldwide, each measuring approximately 2.5 acres.

The neural network scanned vegetation in those images and identified repeating circular patterns that resembled fairy circles, evaluating the circles’ shapes, sizes, locations, pattern densities, and distribution. The output was then reviewed by humans to double-check the work of the neural network.
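
The paper’s exact model isn’t reproduced here, but the general recipe (train a classifier on labelled satellite tiles, then scan it over new plots) can be sketched with a small CNN; the tile size, labels and training data below are placeholders.

```python
# Minimal sketch of the general approach (not the published model): a small
# CNN that labels satellite-image tiles as "fairy-circle-like" or not.
# Assumes tiles are 128x128 RGB arrays with 0/1 labels prepared elsewhere.
import torch
import torch.nn as nn

class TileClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)   # one logit: fairy-circle-like or not

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = TileClassifier()
loss_fn = nn.BCEWithLogitsLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# One training step on a dummy batch (stand-in for the labelled 2.5-acre plots).
tiles = torch.rand(8, 3, 128, 128)
labels = torch.randint(0, 2, (8, 1)).float()
loss = loss_fn(model(tiles), labels)
opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```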

“We had to manually discard some artificial and natural structures that were not fairy circles based on photo-interpretation and the context of the area,” Guirado explained.

The results of the study showed 263 dryland locations that contained circular patterns similar to the fairy circles in Namibia and Australia. The spots were located in Africa, Madagascar, Midwestern Asia, and both central and Southwest Australia.

New fairy circles identified around the world; Photo: Thomas Dressler/imageBROKER/Shutterstock

The authors of the study also collected environmental data where the new circles were identified, in hopes that this may indicate what causes them to form. They determined that fairy circle-like patterns were most likely to occur in dry, sandy soils that were highly alkaline and low in nitrogen. They also found that these patterns helped stabilize ecosystems, increasing an area’s resistance to disturbances such as extreme droughts and floods.

There are many different theories among experts regarding the creation of fairy circles. They may be caused by certain climate conditions, self-organization in plants, insect activity, etc. The authors of the new study are optimistic that the new findings will help unlock the mysteries of this unique phenomenon.

Source: New Fairy Circles Identified at Hundreds of Sites Worldwide – TOMORROW’S WORLD TODAY®

Priming and placebo effects shape how humans interact with AI

The preconceived notions people have about AI — and what they’re told before they use it — mold their experiences with these tools in ways researchers are beginning to unpack.

Why it matters: As AI seeps into medicine, news, politics, business and a host of other industries and services, human psychology gives the technology’s creators levers they can use to enhance users’ experiences — or manipulate them.

What they’re saying: “AI is only half of the human-AI interaction,” says Ruby Liu, a researcher at the MIT Media Lab.

  • The technology’s developers “always think that the problem is optimizing AI to be better, faster, less hallucinations, fewer biases, better aligned,” says Pattie Maes, who directs the MIT Media Lab’s Fluid Interfaces Group.
  • “But we have to see this whole problem as a human-plus-AI problem. The ultimate outcomes don’t just depend on the AI and the quality of the AI. It depends on how the human responds to the AI,” she says.

What’s new: A pair of studies published this week looked at how much a person’s expectations about AI impacted their likelihood to trust it and take its advice.

A strong placebo effect works to shape what people think of a particular AI tool, one study revealed.

  • Participants who were about to interact with a mental health chatbot were told the bot was caring, was manipulative or was neither and had no motive.
  • After using the chatbot, which is based on OpenAI’s generative AI model GPT-3, most people primed to believe the AI was caring said it was. Participants who’d been told the AI had no motives said it didn’t. But they were all interacting with the same chatbot.
  • Only 24% of the participants who were told the AI was trying to manipulate them into buying its service said they perceived it as malicious.
  • That may be a reflection of humans’ positivity bias and that they may “want to evaluate [the AI] for themselves,” says Pat Pataranutaporn, a researcher at the MIT Media Lab and co-author of the new study published this week in Nature Machine Intelligence.
  • Participants who were told the chatbot was benevolent also said they perceived it to be more trustworthy, empathetic and effective than participants primed to believe it was neutral or manipulative.
  • The AI placebo effect has been described before, including in a study in which people playing a word puzzle game rated it better when told AI was adjusting its difficulty level (it wasn’t — there wasn’t an AI involved).

The intrigue: It wasn’t just people’s perceptions that were affected by their expectations.

  • Analyzing the words in conversations people had with the chatbot, the researchers found those who were told the AI was caring had increasingly positive conversations with the chatbot, whereas the interaction with the AI became more negative with people who’d been told it was trying to manipulate them.

For some tasks, AI is perceived to be more objective and trustworthy — a perception that may cause people to prefer an algorithm’s advice.

  • In another study published this week in Scientific Reports, researchers found that preference can lead people to inherit an AI’s errors.
  • Psychologists Lucía Vicente and Helena Matute from Deusto University in Bilbao, Spain, found that participants asked to perform a simulated medical diagnosis task with the help of an AI followed the AI’s advice, even when it was mistaken — and kept making those mistakes even after the AI was taken away.
  • “It is going to be very important that humans working with AI have not only the knowledge of how AI works … but also the time to oppose the advice of the AI — and the motivation to do it,” Matute says.

Yes, but: Both studies looked at one-off interactions between people and AI, and it’s unclear whether using a system day in and day out will change the effect the researchers describe.

The big picture: How people are introduced to AI and how it is depicted in pop culture, marketed and branded can be powerful determiners of how AI is adopted and ultimately valued, researchers said.

  • In previous work, the MIT Media Lab team showed that if someone has an “AI-generated virtual instructor” that looks like someone they admire, they are more motivated to learn and more likely to say the AI is a good teacher (even though their test scores didn’t necessarily improve).
  • Meta last month announced it was launching AI characters played by celebrities — like tennis star Naomi Osaka as an “anime-obsessed Sailor Senshi in training” and Tom Brady as a “wisecracking sports debater who pulls no punches.”
  • “There are just a lot of implications that come with the interface of an AI — how it’s portrayed, how it interacts with you, what it looks like, how it talks to you, what voice it has, what language it uses,” Maes says.

The placebo effect will likely be a “big challenge in the future,” says Thomas Kosch, who studies human-AI interaction at Humboldt University in Berlin. For example, someone might be more careless when they think an AI is helping them drive a car, he says. His own work also shows people take more risks when they think they are supported by an AI.

What to watch: The studies point to the possible power of priming people to have lower expectations of AI — but maybe only so far.

  • A practical lesson is “we should err on the side of portraying these systems and talking about these systems as not completely correct or accurate … so that people come with an attitude of ‘I’m going to make up my own mind about this system,'” Maes says.

Source: Placebo effect shapes how we see AI

News organizations blocking OpenAI

Ben Welsh has a running list of the news organizations blocking OpenAI crawlers:

In total, 532 of 1,147 news publishers surveyed by the homepages.news archive have instructed OpenAI, Google AI or the non-profit Common Crawl to stop scanning their sites, which amounts to 46.4% of the sample.

The three organizations systematically crawl web sites to gather the information that fuels generative chatbots like OpenAI’s ChatGPT and Google’s Bard. Publishers can request that their content be excluded by opting out via the robots.txt convention.
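
For reference, the opt-outs in question are ordinary robots.txt directives. A sketch along the lines of what these publishers deploy, using the crawler user-agent tokens documented by OpenAI (GPTBot), Google (Google-Extended) and Common Crawl (CCBot), looks like this:

```
# Example robots.txt blocking the three crawlers mentioned above
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /
```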

Source: News organizations blocking OpenAI

Which reduces the value of AIs. It used to be that the web was open to all, with information you could use as you liked. News organisations often fail to see value in AI but are scared that their jobs will be taken by AIs instead of enhanced by them. So they try to wreck the AIs, a bit like saboteurs and Luddites. A real impediment to growth.

Publishing A Book Means No Longer Having Control Over How Others Feel About It, Or How They’re Inspired By It. And That Includes AI.

[…]

I completely understand why some authors are extremely upset about finding out that their works were used to train AI. It feels wrong. It feels exploitative. (I do not understand their lawsuits, because I think they’re very much confused about how copyright law works.)

But, to me, many of the complaints about this amount to a similar discussion to ones we’ve had in the past, regarding what would happen if works were released without copyright and someone “bad” reused them. This sort of thought experiment is silly, because once a work is released and enters the messy real world, it’s entirely possible for things to happen that the original creator disagrees with or hates. Someone can interpret the work in ridiculous ways. Or it can inspire bad people to do bad things. Or any of a long list of other possibilities.

The original author has the right to speak up about the bad things, or to denounce the bad people, but the simple fact is that once you’ve released a work into the world, the original author no longer has control over how that work is used and interpreted by the world. Releasing a work into the world is an act of losing control over that work and what others can do in response to it. Or how or why others are inspired by it.

But, when it comes to the AI fights, many are insisting that they want to do exactly that around AI, and much of this came to a head recently when The Atlantic released a tool that allowed anyone to search to see which authors were included in the Books3 dataset (one of multiple collections of books that have been used to train AI). This led to a lot of people (both authors and non-authors) screaming about the evils of AI, and about how wrong it was that such books were included.

But, again, that’s the nature of releasing a work to the public. People read it. Machines might also read it. And they might use what they learn in that work to do something else. And you might like that and you might not, but it’s not really your call.

That’s why I was happy to see Ian Bogost publish an article explaining why he’s happy that his books were found in Books3, saying what those two other authors I spoke to wouldn’t say publicly. Ian is getting screamed at all over social media for this article, with most of it apparently based on the title and not on the substance. But it’s worth reading.

Whether or not Meta’s behavior amounts to infringement is a matter for the courts to decide. Permission is a different matter. One of the facts (and pleasures) of authorship is that one’s work will be used in unpredictable ways. The philosopher Jacques Derrida liked to talk about “dissemination,” which I take to mean that, like a plant releasing its seed, an author separates from their published work. Their readers (or viewers, or listeners) not only can but must make sense of that work in different contexts. A retiree cracks a Haruki Murakami novel recommended by a grandchild. A high-school kid skims Shakespeare for a class. My mother’s tree trimmer reads my book on play at her suggestion. A lack of permission underlies all of these uses, as it underlies influence in general: When successful, art exceeds its creator’s plans.

But internet culture recasts permission as a moral right. Many authors are online, and they can tell you if and when you’re wrong about their work. Also online are swarms of fans who will evangelize their received ideas of what a book, a movie, or an album really means and snuff out the “wrong” accounts. The Books3 imbroglio reflects the same impulse to believe that some interpretations of a work are out of bounds.

Perhaps Meta is an unappealing reader. Perhaps chopping prose into tokens is not how I would like to be read. But then, who am I to say what my work is good for, how it might benefit someone—even a near-trillion-dollar company? To bemoan this one unexpected use for my writing is to undermine all of the other unexpected uses for it. Speaking as a writer, that makes me feel bad.

More importantly, Bogost notes that the entire point of Books3 originally was to make sure that AI wasn’t just controlled by corporate juggernauts:

The Books3 database was itself uploaded in resistance to the corporate juggernauts. The person who first posted the repository has described it as the only way for open-source, grassroots AI projects to compete with huge commercial enterprises. He was trying to return some control of the future to ordinary people, including book authors. In the meantime, Meta contends that the next generation of its AI model—which may or may not still include Books3 in its training data—is “free for research and commercial use,” a statement that demands scrutiny but also complicates this saga. So does the fact that hours after The Atlantic published a search tool for Books3, one writer distributed a link that allows you to access the feature without subscribing to this magazine. In other words: a free way for people to be outraged about people getting writers’ work for free.

I’m not sure what I make of all this, as a citizen of the future no less than as a book author. Theft is an original sin of the internet. Sometimes we call it piracy (when software is uploaded to USENET, or books to Books3); other times it’s seen as innovation (when Google processed and indexed the entire internet without permission) or even liberation. AI merely iterates this ambiguity. I’m having trouble drawing any novel or definitive conclusions about the Books3 story based on the day-old knowledge that some of my writing, along with trillions more chunks of words from, perhaps, Amazon reviews and Reddit grouses, have made their way into an AI training set.

I get that it feels bad that your works are being used in ways you disapprove of, but that is the nature of releasing something into the world. And the underlying point of the Books3 database is to spread access to information to everyone. And that’s a good thing that should be supported, in the nature of folks like Aaron Swartz.

It’s the same reason why, even as lots of news sites are proactively blocking AI scanning bots, I’m actually hoping that more of them will scan and use Techdirt’s words to do more and to be better. The more information shared, the more we can do with it, and that’s a good thing.

I understand the underlying concerns, but that’s just part of what happens when you release a work to the world. Part of releasing something into the world is coming to terms with the fact that you no longer own how people will read it or be inspired by it, or what lessons they will take from it.

 

Source: Publishing A Book Means No Longer Having Control Over How Others Feel About It, Or How They’re Inspired By It. And That Includes AI. | Techdirt

Microsoft is going nuclear to power its AI ambitions

Microsoft thinks next-generation nuclear reactors can power its data centers and AI ambitions, according to a job listing for a principal program manager who’ll lead the company’s nuclear energy strategy.

Data centers already use a hell of a lot of electricity, which could thwart the company’s climate goals unless it can find clean sources of energy. Energy-hungry AI makes that an even bigger challenge for the company to overcome. AI dominated Microsoft’s Surface event last week.

[…]

The job posting says it’s hiring someone to “lead project initiatives for all aspects of nuclear energy infrastructure for global growth.”

Microsoft is specifically looking for someone who can roll out a plan for small modular reactors (SMR).

[…]

The US Nuclear Regulatory Commission just certified an SMR design for the first time in January, which allows utilities to choose the design when applying for a license for a new power plant. And it could usher in a whole new chapter for nuclear energy.

Even so, there are still kinks to work out if Microsoft wants to rely on SMRs to power the data centers where its cloud and AI live. An SMR requires more highly enriched uranium fuel, called HALEU, than today’s traditional reactors. So far, Russia has been the world’s major supplier of HALEU. There’s a push in the US to build up a domestic supply chain of uranium, which communities near uranium mines and mills are already fighting. Then there’s the question of what to do with nuclear waste, which even a fleet of SMRs would generate in significant amounts and which the US is still figuring out how to store long term.

[…]

Microsoft has also made an audacious deal to purchase electricity from a company called Helion that’s developing an even more futuristic fusion power plant. Both old-school nuclear reactors and SMR designs generate electricity through nuclear fission, which is the splitting apart of atoms. Nuclear fusion involves forcing atoms together, the way stars do to create their own energy. A fusion reactor is a holy grail of sorts — it would be a source of abundant clean energy that doesn’t create the same radioactive waste as nuclear fission. But despite decades of research and recent breakthroughs, most experts say a fusion power plant is at least decades away — and the world can’t wait that long to tackle climate change.

Helion’s backers also include OpenAI CEO and ChatGPT developer Sam Altman.

[…]

Source: Microsoft is going nuclear to power its AI ambitions – The Verge

Cursed AI | Ken Loach’s 1977 film ‘Star Wars Episode IV – No Hope’

Ken Loach’s 1977 film ‘Star Wars Episode IV – No Hope’.
George Lucas was unhappy with Loach’s depressing subject matter combined with there being no actual space scenes (with all the action taking place on a UK council estate).
He immediately halted filming, recast many parts (Carrie Fisher replacing Kathy Burke for example), did extensive reshoots, and released his more family-friendly cut under new name ‘A New Hope’ (whatever that means!!)
The pair haven’t spoken since 😞
[…] (25 more in the gallery)

Source: Cursed AI | Ken Loach’s 1977 film ‘Star Wars Episode IV – No Hope’ | Facebook

E-Paper News Feed Illustrates The Headlines With AI-Generated Images

It’s hard to read the headlines today without feeling like the world couldn’t possibly get much worse. And then tomorrow rolls around, and a fresh set of headlines puts the lie to that thought. On a macro level, there’s not much that you can do about that, but on a personal level, illustrating your news feed with mostly wrong, AI-generated images might take the edge off things a little.

Let us explain. [Roy van der Veen] liked the idea of an e-paper display newsfeed, but the crushing weight of the headlines was a little too much to bear. To lighten things up, he decided to employ Stable Diffusion to illustrate his feed, displaying both the headline and a generated image on a 7.3″ Inky 7-color e-paper display. Every five hours, a script running on a Raspberry Pi Zero 2W fetches a headline from a random source — we’re pleased the list includes Hackaday — and composes a prompt for Stable Diffusion based on the headline, adding on a randomly selected prefix and suffix to spice things up. For example, a prompt might look like, “Gothic painting of (Driving a Motor with an Audio Amp Chip). Gloomy, dramatic, stunning, dreamy.” You can imagine the results.
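
The headline-to-prompt step is simple enough to sketch. The snippet below is my own illustration, not [Roy]’s script, and the feed URL, prefixes and suffixes are made-up examples.

```python
# A rough sketch of the headline-to-prompt step described above.
import random
import feedparser  # pip install feedparser

FEEDS = [
    "https://hackaday.com/feed/",   # example source; the build pulls from several
]
PREFIXES = ["Gothic painting of", "Watercolor sketch of", "Retro sci-fi poster of"]
SUFFIXES = ["Gloomy, dramatic, stunning, dreamy.", "Bright, whimsical, detailed."]

def pick_headline():
    feed = feedparser.parse(random.choice(FEEDS))
    return feed.entries[0].title

def build_prompt(headline):
    return f"{random.choice(PREFIXES)} ({headline}). {random.choice(SUFFIXES)}"

if __name__ == "__main__":
    prompt = build_prompt(pick_headline())
    print(prompt)
    # The prompt would then be handed to a Stable Diffusion backend and the
    # result pushed to the Inky e-paper display every five hours.
```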

We have to say, from the examples [Roy] shows, the idea pretty much works — sometimes the images are so far off the mark that just figuring out how Stable Diffusion came up with them is enough to soften the blow. We’d have preferred if the news of the floods in Libya had been buffered by a slightly less dismal scene, but finding out that what was thought to be a “ritual mass murder” was really only a yoga class was certainly heartening.

Source: E-Paper News Feed Illustrates The Headlines With AI-Generated Images | Hackaday

WhisperFrame Depicts Your Conversations

At this point, you gotta figure that you’re at least being listened to almost everywhere you go, whether it be a home assistant or your very own phone. So why not roll with the punches and turn lemons into something like a still life of lemons that’s a bit wonky? What we mean is, why not take our conversations and use AI to turn them into art? That’s the idea behind this next-generation digital photo frame created by [TheMorehavoc].
Essentially, it uses a Raspberry Pi and a Respeaker four-mic array to listen to conversations in the room. It listens and records 15-20 seconds of audio, and sends that to OpenAI’s Whisper API to generate a transcript.
This repeats until five minutes of audio is collected, then the entire transcript is sent through GPT-4 to extract an image prompt from a single topic in the conversation. Then, that prompt is shipped off to Stable Diffusion to get an image to be displayed on the screen. As you can imagine, the images generated run the gamut from really weird to really awesome.
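
For a sense of how the first two stages might be wired up, here’s a hedged sketch using OpenAI’s official Python client (v1.x, API key read from the environment); it is not [TheMorehavoc]’s code, and the Stable Diffusion step is left as a placeholder.

```python
# Hedged sketch of the pipeline described above, not the project's actual code.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def transcribe_chunk(wav_path: str) -> str:
    """15-20 second audio chunk -> text, via Whisper."""
    with open(wav_path, "rb") as f:
        result = client.audio.transcriptions.create(model="whisper-1", file=f)
    return result.text

def prompt_from_transcript(transcript: str) -> str:
    """Five minutes of transcript -> one image prompt, via GPT-4."""
    chat = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Pick one topic from this conversation and write a short, "
                        "vivid image-generation prompt about it."},
            {"role": "user", "content": transcript},
        ],
    )
    return chat.choices[0].message.content

def render_with_stable_diffusion(prompt: str) -> None:
    # Placeholder: hand the prompt to a local Stable Diffusion setup,
    # then push the resulting image to the photo frame's display.
    raise NotImplementedError
```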

The natural lulls in conversation presented a bit of a problem in that the transcription was still generating during silences, presumably because of ambient noise. The answer was in voice activity detection software that gives a probability that a voice is present.

Naturally, people were curious about the prompts for the images, so [TheMorehavoc] made a little gallery sign with a MagTag that uses Adafruit.io as the MQTT broker. Build video is up after the break, and you can check out the images here (warning, some are NSFW).

 

Source: WhisperFrame Depicts The Art Of Conversation | Hackaday

The Grammys will consider that viral song with Drake and The Weeknd AI vocals for awards after all

The person behind an AI-generated song that went viral earlier this year has submitted the track for Grammy Awards consideration. The Recording Academy has stated that such works aren’t eligible for certain gongs. However, Ghostwriter, the pseudonymous person behind “Heart on My Sleeve,” has submitted the track in the best rap song and song of the year categories, according to Variety. Both of those are songwriting honors. The Academy has suggested it’s open to rewarding tracks that are mostly written by a human, even if the actual recording is largely AI-generated.

Ghostwriter composed the song’s lyrics rather than leaving them up to, say, ChatGPT. But rather than sing or rap those words, they employed a generative AI model to mimic the vocals of Drake and The Weeknd, which helped the song to pick up buzz. The artists’ label Universal Music Group wasn’t happy about that and it filed copyright claims to remove “Heart on My Sleeve” from streaming services. Before that, though, the track racked up hundreds of thousands of listens on Spotify and more than 15 million on TikTok.

[…]

It seems there’s one major roadblock as things stand, though. For a song to be eligible for a Grammy, it needs to have “general distribution” across the US through the likes of brick-and-mortar stores, online retailers and streaming services. Ghostwriter is reportedly aware of this restriction, but it’s unclear how they plan to address that.

In any case, this may well be a canary in the coal mine for rewarding the use of generative AI in art.

[…]

Source: The Grammys will consider that viral song with Drake and The Weeknd AI vocals for awards after all

This is like saying that any song with a guitar or any song with a synthesizer won’t be considered for a Grammy.

The AI Act needs a practical definition of ‘subliminal techniques’ (because those used in Advertising aren’t enough)

While the draft EU AI Act prohibits harmful ‘subliminal techniques’, it doesn’t define the term – we suggest a broader definition that captures problematic manipulation cases without overburdening regulators or companies, write Juan Pablo Bermúdez, Rune Nyrup, Sebastian Deterding and Rafael A. Calvo.

Juan Pablo Bermúdez is a Research Associate at Imperial College London; Rune Nyrup is an Associate Professor at Aarhus University; Sebastian Deterding is a Chair in Design Engineering at Imperial College London; Rafael A. Calvo is a Chair in Engineering Design at Imperial College London.

If you ever worried that organisations use AI systems to manipulate you, you are not alone. Many fear that social media feeds, search, recommendation systems, or chatbots can unconsciously affect our emotions, beliefs, or behaviours.

The EU’s draft AI Act articulates this concern, mentioning “subliminal techniques” that impair autonomous choice “in ways that people are not consciously aware of, or even if aware not able to control or resist” (Recital 16, EU Council version). Article 5 prohibits systems using subliminal techniques that modify people’s decisions or actions in ways likely to cause significant harm.

This prohibition could helpfully safeguard users. But as written, it also runs the risk of being inoperable. It all depends on how we define ‘subliminal techniques’ – which the draft Act does not do yet.

Why narrow definitions are bound to fail

The term ‘subliminal’ traditionally refers to sensory stimuli that are weak enough to escape conscious perception but strong enough to influence behaviour; for example, showing an image for less than 50 milliseconds.

Defining ‘subliminal techniques’ in this narrow sense presents problems. First, experts agree that subliminal stimuli have very short-lived effects at best, and only move people to do things they are already motivated to do.

Further, this would not cover most problematic cases motivating the prohibition: when an online ad influences us, we are aware of the sensory stimulus (the visible ad).

Furthermore, such legal prohibitions have been ineffective because subliminal stimuli are, by definition, not plainly visible. As Neuwirth’s historical analysis shows, Europe prohibited subliminal advertising more than three decades ago, but regulators have hardly ever pursued cases.

Thus, narrowly defining ‘subliminal techniques’ as subliminal stimulus presentation is likely to miss most manipulation cases of concern and end up as dead letter.

A broader definition can align manipulation and practical concerns

We agree with the AI Act’s starting point: AI-driven influence is often problematic due to lack of awareness.

However, unawareness of sensory stimuli is not the key issue. Rather, as we argue in a recent paper, manipulative techniques are problematic if they hide any of the following:

  • The influence attempt. Many internet users are not aware that websites adapt based on personal information to optimize “customer engagement”, sales, or other business concerns. Web content is often tailored to nudge us towards certain behaviours, while we remain unaware that such tailoring occurs.
  • The influence methods. Even when we know that some online content seeks to influence, we frequently don’t know why we are presented with a particular image or message – was it chosen through psychographic profiling, nudges, something else? Thus, we can remain unaware of how we are influenced.
  • The influence’s effects. Recommender systems are meant to learn our preferences and suggest content that aligns with them, but they can end up changing our preferences. Even if we know how we are influenced, we may still ignore how the influence changed our decisions and behaviours.

To see why this matters, ask yourself: as a user of digital services, would you rather not be informed about these influence techniques?

Or would you prefer knowing when you are targeted for influence; how influence tricks push your psychological buttons (that ‘Only 1 left!’ sign targets your aversion to loss); and what consequences influence is likely to have (the sign makes you more likely to purchase impulsively)?

We thus propose the following definition:

Subliminal techniques aim at influencing a person’s behaviour in ways in which the person is likely to remain unaware of (1) the influence attempt, (2) how the influence works, or (3) the influence attempt’s effects on decision-making or value- and belief-formation processes.

This definition is broad enough to capture most cases of problematic AI-driven influence; but not so broad as to become meaningless, nor excessively hard to put into practice. Our definition specifically targets techniques: procedures that predictably produce certain outcomes.

Such techniques are already being classified, for example, in lists of nudges and dark patterns, so companies can check those lists and ensure that they either don’t use them or disclose their usage.

Moreover, the AI Act prohibits, not subliminal techniques per se, but only those that may cause significant harm. Thus, the real (self-)regulatory burden lies with testing whether a system increases risks of significant harm—arguably already part of standard user protection diligence.

Conclusion

The default interpretation of ‘subliminal techniques’ would render the AI Act’s prohibition irrelevant for most forms of problematic manipulative influence, and toothless in practice.

Therefore, ensuring the AI Act is legally practicable and reduces regulatory uncertainty requires a different, explicit definition – one that addresses the underlying societal concerns over manipulation while not over-burdening service providers.

We believe our definition achieves just this balance.

(The EU Parliament draft added prohibitions of “manipulative or deceptive techniques”, which present challenges worth discussing separately. Here we claim that subliminal techniques prohibitions, properly defined, could tackle manipulation concerns.)

Source: The AI Act needs a practical definition of ‘subliminal techniques’ – EURACTIV.com

OpenAI disputes authors’ claims that every ChatGPT response is a derivative work, arguing its use is transformative

This week, OpenAI finally responded to a pair of nearly identical class-action lawsuits from book authors

[…]

In OpenAI’s motion to dismiss (filed in both lawsuits), the company asked a US district court in California to toss all but one claim alleging direct copyright infringement, which OpenAI hopes to defeat at “a later stage of the case.”

The authors’ other claims—alleging vicarious copyright infringement, violation of the Digital Millennium Copyright Act (DMCA), unfair competition, negligence, and unjust enrichment—need to be “trimmed” from the lawsuits “so that these cases do not proceed to discovery and beyond with legally infirm theories of liability,” OpenAI argued.

OpenAI claimed that the authors “misconceive the scope of copyright, failing to take into account the limitations and exceptions (including fair use) that properly leave room for innovations like the large language models now at the forefront of artificial intelligence.”

According to OpenAI, even if the authors’ books were a “tiny part” of ChatGPT’s massive data set, “the use of copyrighted materials by innovators in transformative ways does not violate copyright.”

[…]

The purpose of copyright law, OpenAI argued, is “to promote the Progress of Science and useful Arts” by protecting the way authors express ideas, but “not the underlying idea itself, facts embodied within the author’s articulated message, or other building blocks of creative expression,” which are arguably the elements of authors’ works that would be useful to ChatGPT’s training model. Citing a notable copyright case involving Google Books, OpenAI reminded the court that “while an author may register a copyright in her book, the ‘statistical information’ pertaining to ‘word frequencies, syntactic patterns, and thematic markers’ in that book are beyond the scope of copyright protection.”

[…]

Source: OpenAI disputes authors’ claims that every ChatGPT response is a derivative work | Ars Technica

So the authors are saying that if you read their book and then are inspired by it, you can’t use that memory – any of it – to write another book. Which also means that you presumably wouldn’t be able to use any words at all, as they are all copyrighted entities which have inspired you in the past as well.

Paralysed woman able to ‘speak’ through digital avatar

 

A severely paralysed woman has been able to speak through an avatar using technology that translated her brain signals into speech and facial expressions.

[…]

The latest technology uses tiny electrodes implanted on the surface of the brain to detect electrical activity in the part of the brain that controls speech and face movements. These signals are translated directly into a digital avatar’s speech and facial expressions including smiling, frowning or surprise.

[…]

The patient, a 47-year-old woman, Ann, has been severely paralysed since suffering a brainstem stroke more than 18 years ago. She cannot speak or type and normally communicates using movement-tracking technology that allows her to slowly select letters at up to 14 words a minute. She hopes the avatar technology could enable her to work as a counsellor in future.

The team implanted a paper-thin rectangle of 253 electrodes on to the surface of Ann’s brain over a region critical for speech. The electrodes intercepted the brain signals that, if not for the stroke, would have controlled muscles in her tongue, jaw, larynx and face.

After implantation, Ann worked with the team to train the system’s AI algorithm to detect her unique brain signals for various speech sounds by repeating different phrases over and over.

The computer learned 39 distinctive sounds and a ChatGPT-style language model was used to translate the signals into intelligible sentences. This was then used to control an avatar with a voice personalised to sound like Ann’s voice before the injury, based on a recording of her speaking at her wedding.
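
As a purely illustrative sketch (not the study’s decoder), frame-wise classification of neural features into 39 speech-sound units followed by collapsing repeats looks something like this; the feature dimensions and weights below are placeholders, and the real system layers a language model on top to produce sentences.

```python
# Illustrative sketch only (not the study's decoder).
import numpy as np

rng = np.random.default_rng(0)
n_frames, n_features, n_units = 100, 253, 39   # 253 electrode channels, 39 sound units

frames = rng.normal(size=(n_frames, n_features))       # stand-in neural features
W = rng.normal(scale=0.1, size=(n_features, n_units))  # stand-in trained weights
b = np.zeros(n_units)

scores = frames @ W + b
per_frame_units = scores.argmax(axis=1)                # best sound unit per frame

# Collapse consecutive repeats into a unit sequence (a simplified decoding step);
# a language model would then map unit sequences to words and sentences.
sequence = [int(u) for i, u in enumerate(per_frame_units)
            if i == 0 or u != per_frame_units[i - 1]]
print(sequence[:20])
```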

The technology was not perfect, decoding words incorrectly 28% of the time in a test run involving more than 500 phrases, and it generated brain-to-text at a rate of 78 words a minute, compared with the 110-150 words typically spoken in natural conversation.

[…]

Prof Nick Ramsey, a neuroscientist at the University of Utrecht in the Netherlands, who was not involved in the research, said: “This is quite a jump from previous results. We’re at a tipping point.”

A crucial next step is to create a wireless version of the BCI (brain-computer interface) that could be implanted beneath the skull.

[…]

Source: Paralysed woman able to ‘speak’ through digital avatar in world first | Neuroscience | The Guardian

Our Inability To Recognize That Remixing Art Is Transformative Is Now Leading To Today’s AI/Copyright Mess

If you’ve never watched it, Kirby Ferguson’s “Everything is a Remix” series (which was recently updated from the original version that came out years ago) is an excellent look at how stupid our copyright laws are, and how they have really warped our view of creativity. As the series makes clear, creativity is all about remixing: taking inspiration and bits and pieces from other parts of culture and remixing them into something entirely new. All creativity involves this in some manner or another. There is no truly unique creativity.

And yet, copyright law assumes the opposite is true. It assumes that most creativity is entirely unique, and when remix and inspiration get too close, the powerful hand of the law has to slap people down.

[…]

It would have been nice if society had taken this issue seriously back then, recognized that “everything is a remix,” and that encouraging remixing and reusing the works of others to create something new and transformative was not just a good thing, but one that should be supported. If so, we might not be in the utter shitshow that is the debate over generative art from AI these days, in which many creators are rushing to AI to save them, even though that’s not what copyright was designed to do, nor is it a particularly useful tool in that context.

[…]

The moral panic is largely an epistemological crisis: We don’t have a socially acceptable status for the legibility of the remix as art-in-its-own-right. Instead of properly appreciating the remix and the art of the DJ, the remix, or the meme cultures, we have shoehorned all the cultural properties associated onto an 1800s sheet-music-publishing-based model of artistic credibility. The fit was never really good, but no-one really cared because the scenes were small, underground and their breaking the rules was largely out-of-sight.

[…]

AI art tools are simply resurfacing an old problem we left behind unresolved during the 1980’s to early 2000’s. Now it’s time for us to blow the dust off these old books and apply what was learned to the situation we have at our hands now.

We should not forget the modern electronic dance music industry has already developed models that promote new artists via remixes of their work from more established artists. These real-world examples combined with the theoretical frameworks above should help us to explore a refreshed model of artistic credibility, where value is assigned to both the original artists and the authors of remixes.

[…]

Art, especially popular forms of it, has always been a lot about transformation: Taking what exists and creating something that works in this particular context. In forms of art emphasizing the distinctiveness of the original less, transformation becomes the focus of the artform instead.

[…]

There are a lot of questions about how that would actually work in practice, but I do think this is a useful framework for thinking about some of these questions, challenging some existing assumptions, and trying to rethink the system into one that is actually helping creators and helping to enable more art to be created, rather than trying to leverage a system originally developed to provide monopolies to gatekeepers into one that is actually beneficial to the public who want to experience art, and creators who wish to make art.

Source: Our Inability To Recognize That Remixing Art Is Transformative Is Now Leading To Today’s AI/Copyright Mess | Techdirt

AI-generated art cannot be copyrighted, judge rules – Only humans can be creative, apparently

Copyright issues have dogged AI since chatbot tech gained mass appeal, whether it’s accusations of entire novels being scraped to train ChatGPT or allegations that Microsoft and GitHub’s Copilot is pilfering code.

But one thing is for sure after a ruling [PDF] by the United States District Court for the District of Columbia – AI-created works cannot be copyrighted.

You’d think this was a simple case, but it has been rumbling on for years at the hands of one Stephen Thaler, founder of Missouri neural network biz Imagination Engines, who tried to copyright artwork generated by what he calls the Creativity Machine, a computer system he owns. The piece, A Recent Entrance to Paradise, was reproduced on page 4 of the complaint [PDF].

The US Copyright Office refused the application because copyright laws are designed to protect human works. “The office will not register works ‘produced by a machine or mere mechanical process’ that operates ‘without any creative input or intervention from a human author’ because, under the statute, ‘a work must be created by a human being’,” the review board told Thaler’s lawyer after his second attempt was rejected last year.

This was not a satisfactory response for Thaler, who then sued the US Copyright Office and its director, Shira Perlmutter. “The agency actions here were arbitrary, capricious, an abuse of discretion and not in accordance with the law, unsupported by substantial evidence, and in excess of Defendants’ statutory authority,” the lawsuit claimed.

But handing down her ruling on Friday, Judge Beryl Howell wouldn’t budge, pointing out that “human authorship is a bedrock requirement of copyright” and “United States copyright law protects only works of human creation.”

“Non-human actors need no incentivization with the promise of exclusive rights under United States law, and copyright was therefore not designed to reach them,” she wrote.

Though she acknowledged the need for copyright to “adapt with the times,” she shut down Thaler’s pleas by arguing that copyright protection can only be sought for something that has “an originator with the capacity for intellectual, creative, or artistic labor. Must that originator be a human being to claim copyright protection? The answer is yes.”

Unsurprisingly Thaler’s legal people took an opposing view. “We strongly disagree with the district court’s decision,” University of Surrey Professor Ryan Abbott told The Register.

“In our view, the law is clear that the American public is the primary beneficiary of copyright law, and the public benefits when the generation and dissemination of new works are promoted, regardless of how those works are made. We do plan to appeal.”

This is just one legal case Thaler is involved in. Earlier this year, the US Supreme Court also refused to hear arguments that AI algorithms should be recognized by law as inventors on patent filings, once again brought by Thaler.

He sued the US Patent and Trademark Office (USPTO) in 2020 because patent applications he had filed on behalf of another of his AI systems, DABUS, were rejected. The USPTO refused to accept them as it could only consider inventions from “natural persons.”

That lawsuit was quashed, then taken to the US Court of Appeals, where it lost again. Thaler’s team finally turned to the Supreme Court, which wouldn’t give it the time of day.

When The Register asked Thaler to comment on the US Copyright Office defeat, he told us: “What can I say? There’s a storm coming.”

Source: AI-generated art cannot be copyrighted, judge rules • The Register

Scientists Recreate Pink Floyd Song By Reading Brain Signals of Listeners

Scientists have trained a computer to analyze the brain activity of someone listening to music and, based only on those neuronal patterns, recreate the song. The research, published on Tuesday, produced a recognizable, if muffled version of Pink Floyd’s 1979 song, “Another Brick in the Wall (Part 1).” […] To collect the data for the study, the researchers recorded from the brains of 29 epilepsy patients at Albany Medical Center in New York State from 2009 to 2015. As part of their epilepsy treatment, the patients had a net of nail-like electrodes implanted in their brains. This created a rare opportunity for the neuroscientists to record from their brain activity while they listened to music. The team chose the Pink Floyd song partly because older patients liked it. “If they said, ‘I can’t listen to this garbage,'” then the data would have been terrible, Dr. Schalk said. Plus, the song features 41 seconds of lyrics and two-and-a-half minutes of moody instrumentals, a combination that was useful for teasing out how the brain processes words versus melody.

Robert Knight, a neuroscientist at the University of California, Berkeley, and the leader of the team, asked one of his postdoctoral fellows, Ludovic Bellier, to try to use the data set to reconstruct the music “because he was in a band,” Dr. Knight said. The lab had already done similar work reconstructing words. By analyzing data from every patient, Dr. Bellier identified what parts of the brain lit up during the song and what frequencies these areas were reacting to. Much like how the resolution of an image depends on its number of pixels, the quality of an audio recording depends on the number of frequencies it can represent. To legibly reconstruct “Another Brick in the Wall,” the researchers used 128 frequency bands. That meant training 128 computer models, which collectively brought the song into focus. The researchers then ran the output from four individual brains through the model. The resulting recreations were all recognizably the Pink Floyd song but had noticeable differences. Patient electrode placement probably explains most of the variance, the researchers said, but personal characteristics, like whether a person was a musician, also matter.
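
The “one model per frequency band” idea can be sketched in a few lines; the arrays below are random placeholders standing in for the real neural features and spectrogram, not the study’s data or code.

```python
# Sketch of training one regression model per frequency band, then stacking
# the predictions back into a spectrogram that audio software can resynthesize.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_frames, n_electrode_features, n_bands = 2000, 300, 128

X = rng.normal(size=(n_frames, n_electrode_features))   # brain-activity features per frame
Y = rng.normal(size=(n_frames, n_bands))                 # target spectrogram, 128 bands

models = [Ridge(alpha=10.0).fit(X, Y[:, band]) for band in range(n_bands)]
reconstructed = np.column_stack([m.predict(X) for m in models])
print(reconstructed.shape)   # (2000, 128): time frames x frequency bands
```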

The data captured fine-grained patterns from individual clusters of brain cells. But the approach was also limited: Scientists could see brain activity only where doctors had placed electrodes to search for seizures. That’s part of why the recreated songs sound like they are being played underwater. […] The researchers also found a spot in the brain’s temporal lobe that reacted when volunteers heard the 16th notes of the song’s guitar groove. They proposed that this particular area might be involved in our perception of rhythm. The findings offer a first step toward creating more expressive devices to assist people who can’t speak. Over the past few years, scientists have made major breakthroughs in extracting words from the electrical signals produced by the brains of people with muscle paralysis when they attempt to speak.

Source: Scientists Recreate Pink Floyd Song By Reading Brain Signals of Listeners – Slashdot

Snapchat’s My AI Goes Rogue, Posts To Stories

On Tuesday, Snapchat’s My AI in-app chatbot posted its own Story to the app that appeared to be a photo of a wall and ceiling. It then stopped responding to users’ messages, which some Snapchat users found disconcerting. TechCrunch reports: Though the incident made for some great tweets (er, posts), we regret to inform you that My AI did not develop self-awareness and a desire to express itself through Snapchat Stories. Instead, the situation arose because of a technical outage, just as the bot explained. Snap confirmed the issue, which was quickly addressed last night, was just a glitch. (And My AI wasn’t snapping photos of your room, by the way). “My AI experienced a temporary outage that’s now resolved,” a spokesperson told TechCrunch.

However, the incident does raise the question of whether Snap was considering adding new functionality to My AI that would allow the AI chatbot to post to Stories. Currently, the AI bot sends text messages and can even Snap you back with images — weird as they may be. But does it do Stories? Not yet, apparently. “At this time, My AI does not have Stories feature,” a Snap spokesperson told us, leaving us to wonder if that may be something Snap has in the works.

Source: Snapchat’s My AI Goes Rogue, Posts To Stories – Slashdot

The Fear Of AI and Entitled Cancel Culture Just Killed A Very Useful Tool: Prosecraft

I do understand why so many people, especially creative folks, are worried about AI and how it’s used. The future is quite unknown, and things are changing very rapidly, at a pace that can feel out of control. However, when concern and worry about new technologies and how they may impact things morph into mob-inspiring fear, dumb things happen. I would much rather that when we look at new things, we take a more realistic approach to them, and look at ways we can keep the good parts of what they provide, while looking for ways to mitigate the downsides.

Hopefully without everyone going crazy in the meantime. Unfortunately, that’s not really the world we live in.

Last year, when everyone was focused on generative AI for images, we had Rob Sheridan on the podcast to talk about why it was important for creative people to figure out how to embrace the technology rather than fear it. The opening story of the recent NY Times profile of me was all about me in a group chat, trying to suggest to some very creative Hollywood folks how to embrace AI rather than simply raging against it. And I’ve already called out how folks rushing to copyright, thinking that will somehow “save” them from AI, are barking up the wrong tree.

But, in the meantime, the fear over AI is leading to some crazy and sometimes unfortunate outcomes. Benji Smith, who created what appears to be an absolutely amazing tool for writers, Shaxpir, also created what looked like an absolutely fascinating tool called Prosecraft, which had scanned and analyzed a whole bunch of books and would let you call up really useful data on them.

He created it years ago, based on an idea he had years earlier, trying to understand the length of various books (which he initially kept in a spreadsheet). As Smith himself describes in a blog post:

I heard a story on NPR about how Kurt Vonnegut invented an idea about the “shapes of stories” by counting happy and sad words. The University of Vermont “Computational Story Lab” published research papers about how this technique could show the major plot points and the “emotional story arc” of the Harry Potter novels (as well as many many other books).

So I tried it myself and found that I could plot a graph of the emotional ups and downs of any story. I added those new “sentiment analysis” tools to the prosecraft website too.

When I ran out of books on my own shelves, I looked to the internet for more text that I could analyze, and I used web crawlers to find more books. I wanted to be mindful of the diversity of different stories, so I tried to find books by authors of every race and gender, from every different cultural and political background, writing in every different genre and exploring all different kinds of themes. Fiction and nonfiction and philosophy and science and religion and culture and politics.

Somewhere out there on the internet, I thought to myself, there was a new author writing a horror or romance or fantasy novel, struggling for guidance about how long to write their stories, how to write more vivid prose, and how much “passive voice” was too much or too little.

I wanted to give those budding storytellers a suite of “lexicographic” tools that they could use, to compare their own writing with the writing of authors they admire. I’ve been working in the field of computational linguistics and machine learning for 20+ years, and I was always frustrated that the fancy tools were only accessible to big businesses and government spy agencies. I wanted to bring that magic to everyone.

Frankly, all of that sounds amazing. And amazingly useful. Even more amazing is that he built it, and it worked. It would produce useful analysis of books, such as this example from Alice’s Adventures in Wonderland:

And, it could also do further analysis like the following:

This is all quite interesting. It’s also the kind of thing that data scientists do on all kinds of work for useful purposes.
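To illustrate the kind of analysis Prosecraft apparently performed (a toy sketch under assumptions, not Smith’s actual code or lexicon): the “emotional arc” idea boils down to scoring a sliding window of words against a sentiment word list and plotting the result.

```python
# Toy "emotional story arc": score a sliding window of words with a tiny
# hand-made sentiment lexicon. Prosecraft's real method is not public, so
# treat this purely as an illustration of the general idea.
import re

POSITIVE = {"happy", "love", "wonderful", "joy", "delight", "hope", "laugh"}
NEGATIVE = {"sad", "fear", "cry", "dark", "death", "angry", "alone", "pain"}

def emotional_arc(text, window=2000, step=500):
    """Return (word_offset, sentiment_score) points across the text."""
    words = re.findall(r"[a-z']+", text.lower())
    points = []
    for start in range(0, max(len(words) - window, 1), step):
        chunk = words[start:start + window]
        score = sum(w in POSITIVE for w in chunk) - sum(w in NEGATIVE for w in chunk)
        points.append((start, score / max(len(chunk), 1)))
    return points

# Example: run it on any public-domain novel, e.g. a Project Gutenberg text.
# arc = emotional_arc(open("alice.txt", encoding="utf-8").read())
```

Counting passive-voice constructions or measuring how “vivid” the prose is works the same way: the output is statistics about a text, not a copy of it.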

Smith built Prosecraft into Shaxpir, again, making it a more useful tool. But, on Monday, some authors on the internet found out about it and lost their shit, leading Smith to shut the whole project down.

There seems to be a lot of misunderstanding about all of this. Smith notes that he had researched the copyright issues and was sure he wasn’t violating anything, and he’s right. We’ve gone over this many times before. Scanning books is pretty clearly fair use. What you do with that later could violate copyright law, but I don’t see anything that Prosecraft did that comes anywhere even remotely close to violating copyright law.

But… some authors got pretty upset about all of it.

I’m still perplexed at what the complaint is here? You don’t need to “consent” for someone to analyze your book. You don’t need to “consent” to someone putting up statistics about their analysis of your book.

But, Zach’s tweet went viral with a bunch of folks ready to blow up anything that smacks of tech bro AI, and lots of authors started yelling at Smith.

The Gizmodo article has a ridiculously wrong “fair use” analysis, saying “Fair Use does not, by any stretch of the imagination, allow you to use an author’s entire copyrighted work without permission as a part of a data training program that feeds into your own ‘AI algorithm.’” Except… it almost certainly does? Again, we’ve gone through this with the Google Book scanning case, and the courts said that you can absolutely do that because it’s transformative.

It seems that what really tripped up people here was the “AI” part of it, and the fear that this was just another VC-funded “tech bro” exercise of building something to get rich by using the works of creatives. Except… none of that is accurate. As Smith explained in his blog post:

For what it’s worth, the prosecraft website has never generated any income. The Shaxpir desktop app is a labor of love, and during most of its lifetime, I’ve worked other jobs to pay the bills while trying to get the company off the ground and solve the technical challenges of scaling a startup with limited resources. We’ve never taken any VC money, and the whole company is a two-person operation just working our hardest to serve our small community of authors.

He also recognizes that the concerns about it being some “AI” thing are probably what upset people, but plenty of authors have found the tool super useful, and even added their own books:

I launched the prosecraft website in the summer of 2017, and I started showing it off to authors at writers conferences. The response was universally positive, and I incorporated the prosecraft analytic tools into the Shaxpir desktop application so that authors could privately run these analytics on their own works-in-progress (without ever sharing those analyses publicly, or even privately with us in our cloud).

I’ve spent thousands of hours working on this project, cleaning up and annotating text, organizing and tweaking things. A small handful of authors have even reached out to me, asking to have their books added to the website. I was grateful for their enthusiasm.

But in the meantime, “AI” became a thing.

And the arrival of AI on the scene has been tainted by early use-cases that allow anyone to create zero-effort impersonations of artists, cutting those creators out of their own creative process.

That’s not something I ever wanted to participate in.

Smith took the project down entirely because of that. He doesn’t want to get lumped in with other projects, and even though his project is almost certainly legal, he recognized that this was becoming an issue:

Today the community of authors has spoken out, and I’m listening. I care about you, and I hear your objections.

Your feelings are legitimate, and I hope you’ll accept my sincerest apologies. I care about stories. I care about publishing. I care about authors. I never meant to hurt anyone. I only hoped to make something that would be fun and useful and beautiful, for people like me out there struggling to tell their own stories.

I find all of this really unfortunate. Smith built something really cool, really amazing, that does not, in any way, infringe on anyone’s rights. I get the kneejerk reaction from some authors, who feared that this was some obnoxious project, but couldn’t they have taken 10 minutes to look at the details of what it was they were killing?

I know we live in an outrage era, where the immediate reaction is to turn the outrage meter up to 11. I’m certainly guilty of that at times myself. But this whole incident is just sad. It was an overreaction from the start, destroying what had been a clear labor of love and a useful project, through misleading and misguided attacks from authors.

Source: The Fear Of AI Just Killed A Very Useful Tool | Techdirt

What? AI-Generated Art Banned from Future Dungeons & Dragons Books After “Fan Uproar” (Or ~1600 tweets about it)

A Dungeons & Dragons expansion book included AI-generated artwork. Fans on Twitter spotted it before the book was even released (noting, among other things, a wolf with human feet). An embarrassed representative for Wizards of the Coast then tweeted out an announcement about new guidelines stating explicitly that “artists must refrain from using AI art generation as part of their creation process for developing D&D art.” GeekWire reports: The artist in question, Ilya Shkipin, is a California-based painter, illustrator, and operator of an NFT marketplace, who has worked on projects for Renton, Wash.-based Wizards of the Coast since 2014. Shkipin took to Twitter himself on Friday, and acknowledged in several now-deleted tweets that he’d used AI tools to “polish” several original illustrations and concept sketches. As of Saturday morning, Shkipin had taken down his original tweets and announced that the illustrations for Glory of the Giants are “going to be reworked…”

While the physical book won’t be out until August 15, the e-book is available now from Wizards’ D&D Beyond digital storefront.

Wizards of the Coast emphasized this won’t happen again. About this particular incident, they noted, “We have worked with this artist since 2014 and he’s put years of work into books we all love. While we weren’t aware of the artist’s choice to use AI in the creation process for these commissioned pieces, we have discussed with him, and he will not use AI for Wizards’ work moving forward.”

GeekWire adds that the latest D&D video game, Baldur’s Gate 3, “went into its full launch period on Tuesday. Based on metrics such as its player population on Steam, BG3 has been an immediate success, with a high of over 709,000 people playing it concurrently on Saturday afternoon.”

Source: AI-Generated Art Banned from Future ‘Dungeons & Dragons’ Books After Fan Uproar – Slashdot

Really? 1,600 tweets about this counts as an “uproar” and was enough to flip the policy to anti-AI? If you actually look at the pictures, only the wolf with human feet was strange; the rest of the complaints didn’t strike me as problems at all. Welcome to life – we have AIs now and people are going to use them. They are going to save artists loads of time and allow them to create really, really cool stuff… like these pictures!

Come on Wizards of the Coast, don’t be luddites.

AI-assisted mammogram cancer screening could cut radiologist workloads in half

A newly published study in the Lancet Oncology journal has found that the use of AI in mammogram cancer screening can safely cut radiologist workloads nearly in half without risk of increasing false-positive results. In effect, the study found that the AI’s recommendations were on par with those of two radiologists working together.

“AI-supported mammography screening resulted in a similar cancer detection rate compared with standard double reading, with a substantially lower screen-reading workload, indicating that the use of AI in mammography screening is safe,” the study found.

The study was performed by a research team out of Lund University in Sweden and, accordingly, followed 80,033 Swedish women (average age of 54) for just over a year in 2021-2022. Of the 39,996 patients who were randomly assigned AI-empowered breast cancer screenings, 0.61 percent, or 244 tests, returned screen-detected cancers. Of the other 40,024 patients who received conventional cancer screenings, just 0.51 percent, or 203 tests, returned screen-detected cancers.

Of those extra 41 cancers detected by the AI side, 19 turned out to be invasive. Both the AI-empowered and conventional screenings ran a 1.5 percent false positive rate. Most impressively, radiologists on the AI side had to look at 36,886 fewer screen readings than their counterparts, a 44 percent reduction in their workload.
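For anyone who wants to check the arithmetic, the headline figures follow straight from the counts quoted above (a quick back-of-the-envelope; the roughly 83,000-reading double-reading total is implied by the reported reduction, not stated here):

```python
# Back-of-the-envelope check of the trial numbers quoted above.
ai_patients, ai_cancers = 39_996, 244
std_patients, std_cancers = 40_024, 203

print(f"AI arm detection rate:       {ai_cancers / ai_patients:.2%}")    # ~0.61%
print(f"Standard arm detection rate: {std_cancers / std_patients:.2%}")  # ~0.51%
print(f"Extra cancers in the AI arm: {ai_cancers - std_cancers}")        # 41

# 36,886 fewer screen readings at a ~44% reduction implies roughly
# 83,000 to 84,000 readings under conventional double reading.
fewer_readings, reduction = 36_886, 0.44
print(f"Implied double-reading workload: ~{fewer_readings / reduction:,.0f}")
```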

[…]

Source: AI-assisted cancer screening could cut radiologist workloads in half | Engadget

IBM and NASA open source satellite-image-labeling AI model

IBM and NASA have put together and released Prithvi: an open source foundation AI model that may help scientists and other folks analyze satellite imagery.

The vision transformer model, released under an Apache 2 license, is relatively small at 100 million parameters, and was trained on a year’s worth of images collected by the US space boffins’ Harmonized Landsat Sentinel-2 (HLS) program. As well as the main model, three variants of Prithvi are available, fine-tuned for identifying flooding; wildfire burn scars; and crops and other land use.

Essentially, it works like this: you feed one of the models an overhead satellite photo, and it labels areas in the snap it understands. For example, the variant fine-tuned for crops can point out where there’s probably water, forests, corn fields, cotton fields, developed land, wetlands, and so on.

This collection, we imagine, would be useful for, say, automating the study of changes to land over time – such as tracking erosion from flooding, or how drought and wildfires have hit a region. Big Blue and NASA aren’t the first to do this with machine learning: there are plenty of previous efforts we could cite.

A demo of the crop-classifying Prithvi model can be found here. Provide your own satellite imagery or use one of the examples at the bottom of the page. Click Submit to run the model live.

“We believe that foundation models have the potential to change the way observational data is analyzed and help us to better understand our planet,” Kevin Murphy, chief science data officer at NASA, said in a statement. “And by open sourcing such models and making them available to the world, we hope to multiply their impact.”

Developers can download the models from Hugging Face here.
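If you want to poke at the weights yourself, the Hugging Face Hub client can list and fetch the files. A minimal sketch, assuming the repository id is ibm-nasa-geospatial/Prithvi-100M (check the model card for the actual file names and the bundled inference code, since this is not a stock transformers pipeline):

```python
# Minimal sketch: inspect and download the Prithvi checkpoint files from the
# Hugging Face Hub. The repo id and file name below are assumptions; consult
# the model card for the real layout and the inference code that goes with it.
from huggingface_hub import list_repo_files, hf_hub_download

repo_id = "ibm-nasa-geospatial/Prithvi-100M"  # assumed repository id

# See what the repository actually contains before downloading anything.
for name in list_repo_files(repo_id):
    print(name)

# Then grab a specific file once you know it exists in the listing, e.g.:
# path = hf_hub_download(repo_id, filename="Prithvi_100M.pt")  # hypothetical file name
```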

There are other online demos of Prithvi, such as this one for the variant fine-tuned for bodies of water; this one for detecting wildfire scars; and this one that shows off the model’s ability to reconstruct partially photographed areas.

[…]

Source: IBM and NASA open source satellite-image-labeling AI model • The Register

AI-enabled brain implant helps spine damaged patient regain feeling and movement

Keith Thomas from New York was involved in a diving accident back in 2020 that injured his spine’s C4 and C5 vertebrae, leading to a total loss of feeling and movement from the chest down. Recently, though, Thomas has been able to move his arm at will and feel his sister hold his hand, thanks to the AI brain implant technology developed by Northwell Health’s Feinstein Institute of Bioelectronic Medicine.

The research team first spent months mapping his brain with MRIs to pinpoint the exact parts of his brain responsible for arm movements and the sense of touch in his hands. Then, four months ago, surgeons performed a 15-hour procedure to implant microchips into his brain — Thomas was even awake for some parts so he could tell them what sensations he was feeling in his hand as they probed parts of the organ.

While the microchips are inside his body, the team also installed external ports on top of his head. Those ports connect to a computer with the artificial intelligence (AI) algorithms that the team developed to interpret his thoughts and turn them into action. The researchers call this approach “thought-driven therapy,” because it all starts with the patient’s intentions. If he thinks of wanting to move his hand, for instance, his brain implant sends signals to the computer, which then sends signals to the electrode patches on his spine and hand muscles in order to stimulate movement. They attached sensors to his fingertips and palms, as well, to stimulate sensation.

Thanks to this system, he was able to move his arm at will and feel his sister holding his hand in the lab. While he needed to be attached to the computer for those milestones, the researchers say Thomas has shown signs of recovery even when the system is off. His arm strength has apparently “more than doubled” since the study began, and his forearm and wrist can now feel some new sensations. If all goes well, the team’s thought-driven therapy could help him regain more of his sense of touch and mobility.

While the approach has a ways to go, the team behind it is hopeful that it could change the lives of people living with paralysis.
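As a purely illustrative sketch of what that decode-then-stimulate loop could look like in software terms (none of this is Northwell’s actual system; the decoder, features and stimulation hook are invented for illustration):

```python
# Illustrative closed loop for "thought-driven therapy": read neural features,
# decode an intended movement, and trigger stimulation when confident enough.
# Every component here is a hypothetical stand-in, not a real clinical system.
import numpy as np

class IntentDecoder:
    """Stand-in for a trained model mapping neural features to movement intents."""
    def __init__(self, weights):
        self.weights = weights  # shape: (n_intents, n_features)

    def scores(self, features):
        return self.weights @ features

def closed_loop_step(features, decoder, stimulate, threshold=0.5):
    """One cycle: decode the most likely intent and, if confident, stimulate."""
    scores = decoder.scores(features)
    intent = int(np.argmax(scores))
    if scores[intent] > threshold:
        stimulate(intent)  # e.g. drive electrode patches on the spine and hand muscles
    return intent

# Hypothetical usage with random data standing in for recorded brain activity.
rng = np.random.default_rng(1)
decoder = IntentDecoder(rng.normal(size=(3, 96)))
closed_loop_step(rng.normal(size=96), decoder, stimulate=lambda i: print("stimulate intent", i))
```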

[…]

Source: AI-enabled brain implant helps patient regain feeling and movement | Engadget