AI trained on millions of life stories can predict risk of early death

An artificial intelligence trained on personal data covering the entire population of Denmark can predict people’s chances of dying more accurately than any existing model, even those used in the insurance industry. The researchers behind the technology say it could also have a positive impact in early prediction of social and health problems – but must be kept out of the hands of big business.

Sune Lehmann Jørgensen at the Technical University of Denmark and his colleagues used a rich dataset from Denmark that covers education, visits to doctors and hospitals, any resulting diagnoses, income and occupation for 6 million people from 2008 to 2020.

They converted this dataset into words that could be used to train a large language model, the same technology that powers AI apps such as ChatGPT. These models work by looking at a series of words and determining which word is statistically most likely to come next, based on vast amounts of examples. In a similar way, the researchers’ Life2vec model can look at a series of life events that form a person’s history and determine what is most likely to happen next.
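To make the analogy concrete, here is a toy sketch of "most likely next event" prediction over made-up life-event sequences, using simple bigram counts. This is not the Life2vec architecture (which is a transformer); the event names are invented for illustration:

```python
from collections import Counter, defaultdict

# Hypothetical toy data: each person's history as a sequence of event tokens.
histories = [
    ["school", "diploma", "job_nurse", "diagnosis_flu", "job_nurse"],
    ["school", "diploma", "job_teacher", "move_city", "job_teacher"],
    ["school", "diploma", "job_nurse", "diagnosis_flu", "job_nurse"],
]

# Count which event follows each event across all histories.
following = defaultdict(Counter)
for seq in histories:
    for prev, nxt in zip(seq, seq[1:]):
        following[prev][nxt] += 1

def most_likely_next(event):
    """Return the most frequent next event seen after `event`, if any."""
    counts = following[event]
    return counts.most_common(1)[0][0] if counts else None

print(most_likely_next("diploma"))  # -> job_nurse (seen twice vs once)
```

Life2vec does the same thing in spirit – predict the next event from the history so far – but learns the statistics with a large language model over the Danish registry data rather than from raw counts.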

In experiments, Life2vec was trained on all but the last four years of the data, which were held back for testing. The researchers took data on a group of people aged 35 to 65, half of whom died between 2016 and 2020, and asked Life2vec to predict who lived and who died. It was 11 per cent more accurate than any existing AI model or the actuarial life tables used in the insurance industry to price life insurance policies.

The model was also able to predict the results of a personality test in a subset of the population more accurately than AI models trained specifically to do the job.

Jørgensen believes that the model has consumed enough data that it is likely to be able to shed light on a wide range of health and social topics. This means it could be used to predict health issues and catch them early, or by governments to reduce inequality. But he stresses that it could also be used by companies in a harmful way.

“Clearly, our model should not be used by an insurance company, because the whole idea of insurance is that, by sharing the lack of knowledge of who is going to be the unlucky person struck by some incident, or death, or losing your backpack, we can kind of share this burden,” says Jørgensen.

But technologies like this are already out there, he says. “They’re likely being used on us already by big tech companies that have tonnes of data about us, and they’re using it to make predictions about us.”

Source: AI trained on millions of life stories can predict risk of early death | New Scientist

How To Build Your Own Custom ChatGPT Bot

There’s something new and powerful for ChatGPT users to play around with: Custom GPTs. These bespoke bots are essentially more focused, more specific versions of the main ChatGPT model, enabling you to build something for a particular purpose without using any coding or advanced knowledge of artificial intelligence.

The name GPT stands for Generative Pre-trained Transformer, as it does in ChatGPT. Generative is the ability to produce new content outside of what an AI was trained on. Pre-trained indicates that it’s already been trained on a significant amount of material, and Transformer is a type of AI architecture adept at understanding language.

You might already be familiar with using prompts to style the responses of ChatGPT: You can tell it to answer using simple language, for example, or to talk to you as if it were an alien from another world. GPTs build on this idea, enabling you to create a bot with a specific personality.

You can build a GPT using a question-and-answer routine.
Screenshot: ChatGPT

What’s more, you can upload your own material to add to your GPT’s knowledge banks—it might be samples of your own writing, for instance, or copies of reports produced by your company. GPTs will always have access to the data you upload to them and be able to browse the web at large.

GPTs are exclusive to Plus and Enterprise users, though everyone should get access soon. OpenAI plans to open a GPT store where you can sell your AI bot creations if you think others will find them useful, too. Think of an app store of sorts but for bespoke AI bots.

“GPTs are a new way for anyone to create a tailored version of ChatGPT to be more helpful in their daily life, at specific tasks, at work, or at home—and then share that creation with others,” explains OpenAI in a blog post. “For example, GPTs can help you learn the rules to any board game, help teach your kids math, or design stickers.”

Getting started with GPT building

Assuming you have a Plus or Enterprise account, click Explore on the left of the web interface to see some example GPTs: There’s one to help you with your creative writing, for example, and one to produce a particular style of digital painting. When you’re ready to start building your own, click Create a GPT at the top.

There are two tabs to swap between: Create for building a GPT through a question-and-answer routine and Configure for more deliberate GPT production. If you’re just getting started, it’s best to stick with Create, as it’s a more user-friendly option and takes you step-by-step through the process.

Respond to the prompts of the GPT Builder bot to explain what you want the new GPT to be able to do: Explain certain concepts, give advice in specific areas, generate particular kinds of text or images, or whatever it is. You’ll be asked to give the GPT a name and choose an image for it, though you’ll get suggestions for these, too.

You’re able to test out your GPT as you build it.
Screenshot: ChatGPT

As you answer the prompts from the builder, the GPT will begin to take form in the preview pane on the right—together with some example inputs that you might want to give to it. You might be asked about specific areas of expertise that you want the bot to have and the sorts of answers you want the bot to give in terms of their length and complexity. The building process will vary though, depending on the GPT you’re creating.

After you’ve worked through the basics of making a GPT, you can try it out and switch to the Configure tab to add more detail and depth. You’ll see that your responses so far have been used to craft a set of instructions for the GPT about its identity and how it should answer your questions. Some conversation starters will also be provided.

You can edit these instructions if you need to and click Upload files to add to the GPT’s knowledge banks (handy if you want it to answer questions about particular documents or topics, for instance). Most common document formats, including PDFs and Word files, seem to be supported, though there’s no official list of supported file types.

GPTs can be kept to yourself or shared with others.
Screenshot: ChatGPT

The checkboxes at the bottom of the Configure tab let you choose whether or not the GPT has access to web browsing, DALL-E image creation, and code interpretation capabilities, so make your choices accordingly. If you add any of these capabilities, they’ll be called upon as and when needed—there’s no need to specifically ask for them to be used, though you can if you want.

When your GPT is working the way you want it to, click the Save button in the top right corner. You can choose to keep it to yourself or make it available to share with others. After you click on Confirm, you’ll be able to access the new GPT from the left-hand navigation pane in the ChatGPT interface on the web.

GPTs are ideal if you find yourself often asking ChatGPT to complete tasks in the same way or cover the same topics—whether that’s market research or recipe ideas. The GPTs you create are available whenever you need them, alongside access to the main ChatGPT engine, which you can continue to tweak and customize as needed.

Source: How To Build Your Own Custom ChatGPT Bot

MS Phi-2 small language model – outperforms many LLMs but fits on your laptop

We are now releasing Phi-2 (opens in new tab), a 2.7 billion-parameter language model that demonstrates outstanding reasoning and language understanding capabilities, showcasing state-of-the-art performance among base language models with less than 13 billion parameters. On complex benchmarks Phi-2 matches or outperforms models up to 25x larger, thanks to new innovations in model scaling and training data curation.

With its compact size, Phi-2 is an ideal playground for researchers, including for exploration around mechanistic interpretability, safety improvements, or fine-tuning experimentation on a variety of tasks. We have made Phi-2 (opens in new tab) available in the Azure AI Studio model catalog to foster research and development on language models.

[..]

Phi-2 is a Transformer-based model with a next-word prediction objective, trained on 1.4T tokens from multiple passes on a mixture of Synthetic and Web datasets for NLP and coding. The training for Phi-2 took 14 days on 96 A100 GPUs. Phi-2 is a base model that has not undergone alignment through reinforcement learning from human feedback (RLHF), nor has it been instruct fine-tuned. Despite this, we observed better behavior with respect to toxicity and bias compared to existing open-source models that went through alignment (see Figure 3). This is in line with what we saw in Phi-1.5 due to our tailored data curation technique, see our previous tech report (opens in new tab) for more details on this. For more information about the Phi-2 model, please visit Azure AI | Machine Learning Studio (opens in new tab).
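As a back-of-envelope check on the quoted training figures (1.4T tokens, 14 days, 96 A100s), the implied throughput works out to roughly 12,000 tokens per second per GPU:

```python
# Back-of-envelope throughput from the figures quoted above.
tokens = 1.4e12   # training tokens
days = 14         # wall-clock training time
gpus = 96         # A100 GPUs

seconds = days * 24 * 3600
total_tps = tokens / seconds      # tokens/second across the whole cluster
per_gpu_tps = total_tps / gpus    # tokens/second per A100

print(f"{total_tps:,.0f} tok/s total, {per_gpu_tps:,.0f} tok/s per GPU")
```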

[Bar chart: across all 13 ToxiGen categories, Phi-1.5 scores highest, Phi-2 second-highest, and Llama-7B lowest.]
Figure 3. Safety scores computed on 13 demographics from ToxiGen. A subset of 6541 sentences is selected and scored between 0 and 1 based on scaled perplexity and sentence toxicity. A higher score indicates the model is less likely to produce toxic sentences compared to benign ones.
[…]

With only 2.7 billion parameters, Phi-2 surpasses the performance of Mistral and Llama-2 models at 7B and 13B parameters on various aggregated benchmarks. Notably, it achieves better performance than the 25x larger Llama-2-70B model on multi-step reasoning tasks, i.e., coding and math. Furthermore, Phi-2 matches or outperforms the recently announced Google Gemini Nano 2, despite being smaller in size.
[…]

Model     Size   BBH    Commonsense   Language        Math   Coding
                        Reasoning     Understanding
Llama-2   7B     40.0   62.2          56.7            16.5   21.0
          13B    47.8   65.0          61.9            34.2   25.4
          70B    66.5   69.2          67.6            64.1   38.3
Mistral   7B     57.2   66.4          63.7            46.4   39.4
Phi-2     2.7B   59.2   68.8          62.0            61.1   53.7

Table 1. Averaged performance on grouped benchmarks compared to popular open-source SLMs.
Model           Size   BBH    BoolQ   MBPP   MMLU
Gemini Nano 2   3.2B   42.4   79.3    27.2   55.8
Phi-2           2.7B   59.3   83.3    59.1   56.7

Table 2. Comparison between Phi-2 and Gemini Nano 2 on Gemini’s reported benchmarks.

Source: Phi-2: The surprising power of small language models – Microsoft Research

AI Doomsayers: Debunking the Despair

Shortly after ChatGPT’s release, a cadre of critics rose to fame claiming AI would soon kill us. As wondrous as a computer speaking in natural language might be, it could use that intelligence to level the planet. The thinking went mainstream via letters calling for research pauses and “60 Minutes” interviews amplifying existential concerns. Leaders like Barack Obama publicly worried about AI autonomously hacking the financial system — or worse. And last week, President Biden issued an executive order imposing some restraints on AI development.

AI Experts Dismiss Doom, Defend Progress

That was enough for several prominent AI researchers who finally started pushing back hard after watching the so-called AI Doomers influence the narrative and, therefore, the field’s future. Andrew Ng, the soft-spoken co-founder of Google Brain, said last week that worries of AI destruction had led to a “massively, colossally dumb idea” of requiring licenses for AI work. Yann LeCun, a machine-learning pioneer, eviscerated research-pause letter writer Max Tegmark, accusing him of risking “catastrophe” by potentially impeding AI progress and exploiting “preposterous” concerns. A new paper earlier this month indicated large language models can’t do much beyond their training, making the doom talk seem overblown. “If ‘emergence’ merely unlocks capabilities represented in pre-training data,” said Princeton professor Arvind Narayanan, “the gravy train will run out soon.”

 


AI Doom Hype Benefits Tech Giants

Worrying about AI safety isn’t wrongheaded, but these Doomers’ path to prominence has insiders raising eyebrows. They may have come to their conclusions in good faith, but companies with plenty to gain by amplifying Doomer worries have been instrumental in elevating them. Leaders from OpenAI, Google DeepMind and Anthropic, for instance, signed a statement putting AI extinction risk on the same plane as nuclear war and pandemics. Perhaps they’re not consciously attempting to block competition, but they can’t be that upset it might be a byproduct.

AI Alarmism Spurs Restrictive Government Policies

All this alarmism makes politicians feel compelled to do something, leading to proposals for strict government oversight that could restrict AI development outside a few firms. Intense government involvement in AI research would help big companies, which have compliance departments built for these purposes. But it could be devastating for smaller AI startups and open-source developers who don’t have the same luxury.

 

Doomer Rhetoric: Big Tech’s Unlikely Ally

“There’s a possibility that AI doomers could be unintentionally aiding big tech firms,” Garry Tan, CEO of startup accelerator Y Combinator, told me. “By pushing for heavy regulation based on fear, they give ammunition to those attempting to create a regulatory environment that only the biggest players can afford to navigate, thus cementing their position in the market.”

Ng took it a step further. “There are definitely large tech companies that would rather not have to try to compete with open source [AI], so they’re creating fear of AI leading to human extinction,” he told the Australian Financial Review.

Doomers’ AI Fears Lack Substance

The AI Doomers’ worries, meanwhile, feel pretty thin. “I expect an actually smarter and uncaring entity will figure out strategies and technologies that can kill us quickly and reliably — and then kill us,” Eliezer Yudkowsky, co-founder of the Machine Intelligence Research Institute, told a rapt audience at TED this year. He confessed he didn’t know how or why an AI would do it. “It could kill us because it doesn’t want us making other superintelligences to compete with it,” he offered.

Bankman-Fried Scandal Should Ignite Skepticism

After Sam Bankman-Fried ran off with billions while professing to save the world through “effective altruism,” it’s high time to regard those claiming to improve society while furthering their business aims with relentless skepticism. As the Doomer narrative presses on, it threatens to rhyme with a familiar pattern.

AI Fear Tactics Threaten Open-Source Movement

Big Tech companies already have a significant lead in the AI race via cloud computing services that they lease out to preferred startups in exchange for equity. Further advantaging them might hamstring the promising open-source AI movement — a crucial area of competition — to the point of obsolescence. That’s probably why you’re hearing so much about AI destroying the world. And why it should be considered with a healthy degree of caution.

Source: AI Doomsayers: Debunking the Despair

Mind-reading AI can translate brainwaves into written text

Using only a sensor-filled helmet combined with artificial intelligence, a team of scientists has announced they can turn a person’s thoughts into written words.

In the study, participants read passages of text while wearing a cap that recorded electrical brain activity through their scalp. These electroencephalogram (EEG) recordings were then converted into text using an AI model called DeWave.

Chin-Teng Lin at the University of Technology Sydney (UTS), Australia, says the technology is non-invasive, relatively inexpensive and easily transportable.

While the system is far from perfect, with an accuracy of approximately 40 per cent, Lin says more recent data currently being peer-reviewed shows an improved accuracy exceeding 60 per cent.

In the study presented at the NeurIPS conference in New Orleans, Louisiana, participants read the sentences aloud, even though the DeWave program doesn’t use spoken words. However, in the team’s latest research, participants read the sentences silently.

Last year, a team led by Jerry Tang at the University of Texas at Austin reported a similar accuracy in converting thoughts to text, but MRI scans were used to interpret brain activity. Using EEG is more practical, as subjects don’t have to lie still inside a scanner.

[…]

Source: Mind-reading AI can translate brainwaves into written text | New Scientist

AI made from living human brain cells performs speech recognition

Balls of human brain cells linked to a computer have been used to perform a very basic form of speech recognition. The hope is that such systems will use far less energy for AI tasks than silicon chips.

“This is just proof-of-concept to show we can do the job,” says Feng Guo at Indiana University Bloomington. “We do have a long way to go.”

Brain organoids are lumps of nerve cells that form when stem cells are grown in certain conditions. “They are like mini-brains,” says Guo.

It takes two or three months to grow the organoids, which are a few millimetres wide and consist of as many as 100 million nerve cells, he says. Human brains contain around 100 billion nerve cells.

The organoids are then placed on top of a microelectrode array, which is used both to send electrical signals to the organoid and to detect when nerve cells fire in response. The team calls its system “Brainoware”.

New Scientist reported in March that Guo’s team had used this system to try to solve a type of equation known as a Hénon map.

For the speech recognition task, the organoids had to learn to recognise the voice of one individual from a set of 240 audio clips of eight people pronouncing Japanese vowel sounds. The clips were sent to the organoids as sequences of signals arranged in spatial patterns.

The organoids’ initial responses had an accuracy of around 30 to 40 per cent, says Guo. After training sessions over two days, their accuracy rose to 70 to 80 per cent.

“We call this adaptive learning,” he says. If the organoids were exposed to a drug that stopped new connections forming between nerve cells, there was no improvement.

The training simply involved repeating the audio clips, and no form of feedback was provided to tell the organoids if they were right or wrong, says Guo. This is what is known in AI research as unsupervised learning.

There are two big challenges with conventional AI, says Guo. One is its high energy consumption. The other is the inherent limitations of silicon chips, such as their separation of information and processing.

Guo’s team is one of several groups exploring whether biocomputing using living nerve cells can help overcome these challenges. For instance, a company called Cortical Labs in Australia has been teaching brain cells how to play Pong, New Scientist revealed in 2021.

Titouan Parcollet at the University of Cambridge, who works on conventional speech recognition, doesn’t rule out a role for biocomputing in the long run.

“However, it might also be a mistake to think that we need something like the brain to achieve what deep learning is currently doing,” says Parcollet. “Current deep-learning models are actually much better than any brain on specific and targeted tasks.”

Guo and his team’s task is so simplified that it only identifies who is speaking, not what is being said, says Parcollet. “The results aren’t really promising from the speech recognition perspective.”

Even if the performance of Brainoware can be improved, another major issue with it is that the organoids can only be maintained for one or two months, says Guo. His team is working on extending this.

“If we want to harness the computation power of organoids for AI computing, we really need to address those limitations,” he says.

Source: AI made from living human brain cells performs speech recognition | New Scientist

Yes, this article bangs on about limitations, but this is pretty bizarre science: using a brain to do AI

AI Alliance Launches as an International Community of Leading Technology Developers, Researchers, and Adopters Collaborating Together to Advance Open, Safe, Responsible AI

IBM and Meta Launch the AI Alliance in collaboration with over 50 Founding Members and Collaborators globally including AMD, Anyscale, CERN, Cerebras, Cleveland Clinic, Cornell University, Dartmouth, Dell Technologies, EPFL, ETH, Hugging Face, Imperial College London, Intel, INSAIT, Linux Foundation, MLCommons, MOC Alliance operated by Boston University and Harvard University, NASA, NSF, Oracle, Partnership on AI, Red Hat, Roadzen, ServiceNow, Sony Group, Stability AI, University of California Berkeley, University of Illinois, University of Notre Dame, The University of Tokyo, Yale University and others

[…]

While there are many individual companies, start-ups, researchers, governments, and others who are committed to open science and open technologies and want to participate in the new wave of AI innovation, more collaboration and information sharing will help the community innovate faster and more inclusively, and identify specific risks and mitigate those risks before putting a product into the world.

[..]

We are:

  • The creators of the tooling driving AI benchmarking, trust and validation metrics and best practices, and application creation such as MLPerf, Hugging Face, LangChain, LlamaIndex, and open-source AI toolkits for explainability, privacy, adversarial robustness, and fairness evaluation.
  • The universities and science agencies that educate and support generation after generation of AI scientists and engineers and push the frontiers of AI research through open science.
  • The builders of the hardware and infrastructure that supports AI training and applications – from the needed GPUs to custom AI accelerators and cloud platforms.
  • The champions of frameworks that drive platform software including PyTorch, Transformers, Diffusers, Kubernetes, Ray, Hugging Face Text Generation Inference and Parameter-Efficient Fine-Tuning.
  • The creators of some of today’s most used open models including Llama 2, Stable Diffusion, StarCoder, Bloom, and many others.

[…]

To learn more about the Alliance, visit here: https://thealliance.ai

[…]

Source: AI Alliance Launches as an International Community of Leading Technology Developers, Researchers, and Adopters Collaborating Together to Advance Open, Safe, Responsible AI

We will see – I don’t see any project pages on this quite yet. But this looks like a reasonable idea.

AI can tell which chateau Bordeaux wines come from with 100% accuracy

Alexandre Pouget at the University of Geneva, Switzerland, and his colleagues used machine learning to analyse the chemical composition of 80 red wines from 12 years between 1990 and 2007. All the wines came from seven wine estates in the Bordeaux region of France.

“We were interested in finding out whether there is a chemical signature that is specific to each of those chateaux that’s independent of vintage,” says Pouget, meaning one estate’s wines would have a very similar chemical profile, and therefore taste, year after year.

To do this, Pouget and his colleagues used a machine to vaporise each wine and separate it into its chemical components. This technique gave them a readout for each wine, called a chromatogram, with about 30,000 points representing different chemical compounds.

The researchers used 73 of the chromatograms to train a machine learning algorithm, along with data on the chateaux of origin and the year. Then they tested the algorithm on the seven chromatograms that had been held back.

They repeated the process 50 times, changing the wines used each time. The algorithm correctly guessed the chateau of origin 100 per cent of the time. “Not that many people in the world will be able to do this,” says Pouget. It was also about 50 per cent accurate at guessing the year when the wine was made.
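The evaluation protocol – hold out a handful of wines, train on the rest, repeat many times – can be sketched as follows. This uses synthetic stand-in chromatograms and a simple nearest-centroid classifier, not the study's actual data or model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 7 "estates", 12 wines each, short chromatograms.
# (Illustrative only -- the real study used ~30,000-point gas-chromatography
# readouts from 80 Bordeaux wines.)
n_estates, wines_per_estate, n_points = 7, 12, 300
signatures = rng.normal(0, 1, (n_estates, n_points))   # per-estate profile
X = np.repeat(signatures, wines_per_estate, axis=0)
X += rng.normal(0, 0.5, X.shape)                       # vintage-to-vintage noise
y = np.repeat(np.arange(n_estates), wines_per_estate)

def nearest_centroid_accuracy(X, y, n_test=7, repeats=50):
    """Repeatedly hold out n_test wines, train on the rest, score accuracy."""
    accs = []
    for _ in range(repeats):
        idx = rng.permutation(len(y))
        test, train = idx[:n_test], idx[n_test:]
        centroids = np.stack([X[train][y[train] == c].mean(axis=0)
                              for c in range(n_estates)])
        d = np.linalg.norm(X[test][:, None, :] - centroids[None], axis=2)
        accs.append((d.argmin(axis=1) == y[test]).mean())
    return float(np.mean(accs))

print(f"mean held-out accuracy: {nearest_centroid_accuracy(X, y):.2f}")
```

When each estate really does have a stable chemical signature, even this crude classifier approaches perfect accuracy – which is the effect the study demonstrates on real chromatograms.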

The algorithm could even guess the estate when it was trained using just 5 per cent of each chromatogram, using portions where there are no notable peaks in chemicals visible to the naked eye, says Pouget.

This shows that a wine’s unique taste and feel in the mouth doesn’t depend on a handful of key molecules, but rather on the overall concentration of many, many molecules, says Pouget.

By plotting the chromatogram data, the algorithm could also separate the wines into groups that were most similar to each other. It grouped those on the right bank of the river Garonne – called Pomerol and St-Emilion wines – separately from those from left-bank estates, known as Medoc wines.

The work is further evidence that local geography, climate, microbes and wine-making practices, together known as the terroir, do give a unique flavour to a wine. Which precise chemicals are behind each wine wasn’t looked at in this study, however.

“It really is coming close to proof that the place of growing and making really does have a chemical signal for individual wines or chateaux,” says Barry Smith at the University of London’s School of Advanced Study. “The chemical compounds and their similarities and differences reflect that elusive concept of terroir.”

 

Journal reference:

Communications Chemistry DOI: 10.1038/s42004-023-01051-9

Source: AI can tell which chateau Bordeaux wines come from with 100% accuracy | New Scientist

Brazilian city enacts an ordinance that was written by ChatGPT – might be the first law written by AI

City lawmakers in Brazil have enacted what appears to be the nation’s first legislation written entirely by artificial intelligence — even if they didn’t know it at the time.

The experimental ordinance was passed in October in the southern city of Porto Alegre and city councilman Ramiro Rosário revealed this week that it was written by a chatbot, sparking objections and raising questions about the role of artificial intelligence in public policy.

Rosário told The Associated Press that he asked OpenAI’s chatbot ChatGPT to craft a proposal to prevent the city from charging taxpayers to replace water consumption meters if they are stolen. He then presented it to his 35 peers on the council without making a single change or even letting them know about its unprecedented origin.

“If I had revealed it before, the proposal certainly wouldn’t even have been taken to a vote,” Rosário told the AP by phone on Thursday. The 36-member council approved it unanimously and the ordinance went into effect on Nov. 23.

“It would be unfair to the population to run the risk of the project not being approved simply because it was written by artificial intelligence,” he added.

[…]

“We want work that is ChatGPT generated to be watermarked,” he said, adding that the use of artificial intelligence to help draft new laws is inevitable. “I’m in favor of people using ChatGPT to write bills as long as it’s clear.”

There was no such transparency for Rosário’s proposal in Porto Alegre. Sossmeier said Rosário did not inform fellow council members that ChatGPT had written the proposal.

Keeping the proposal’s origin secret was intentional. Rosário told the AP his objective was not just to resolve a local issue, but also to spark a debate. He said he entered a 49-word prompt into ChatGPT and it returned the full draft proposal within seconds, including justifications.

[…]

And the council president, who initially decried the method, already appears to have been swayed.

“I changed my mind,” Sossmeier said. “I started to read more in depth and saw that, unfortunately or fortunately, this is going to be a trend.”

Source: Brazilian city enacts an ordinance that was written by ChatGPT | AP News

One AI image needs as much power as a smartphone charge

In a paper released on arXiv last week, a team of researchers from Hugging Face and Carnegie Mellon University calculated the amount of power AI systems use when asked to perform different tasks.

After asking AIs to perform 1,000 inferences for each task, the researchers found text-based AI tasks are more energy-efficient than jobs involving images.

Text generation consumed 0.042kWh while image generation required 1.35kWh. The boffins assert that charging a smartphone requires 0.012kWh – making image generation a very power-hungry application.

“The least efficient image generation model uses as much energy as 950 smartphone charges (11.49kWh), or nearly one charge per image generation,” the authors wrote, noting the “large variation between image generation models, depending on the size of image that they generate.”
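The quoted figures are easy to sanity-check: at 11.49 kWh per 1,000 images, the least efficient model does indeed come out at roughly 950 phone charges, or nearly one charge per image:

```python
# Figures quoted by the researchers (kWh per 1,000 inferences; kWh per charge).
text_kwh_per_1000 = 0.042
image_kwh_per_1000 = 1.35
worst_image_kwh_per_1000 = 11.49
phone_charge_kwh = 0.012

per_image = worst_image_kwh_per_1000 / 1000            # kWh for one image
charges_per_1000_images = worst_image_kwh_per_1000 / phone_charge_kwh

print(f"one image ≈ {per_image / phone_charge_kwh:.2f} phone charges")
print(f"1,000 images ≈ {charges_per_1000_images:.0f} phone charges")
```

Note the spread: the average image model (1.35 kWh per 1,000) uses about a tenth of a charge per image, so the "one charge per image" figure applies only to the worst case.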

The authors also measured carbon dioxide created by different AI workloads. Image creation topped that chart too.


You can read the full paper here [PDF].

Source: One AI image needs as much power as a smartphone charge • The Register

A(I) deal at any cost: Will the EU buckle to Big Tech?

Would you trust Elon Musk with your mortgage? Or Big Tech with your benefits?

Us neither.

That’s what’s at stake as the EU’s Artificial Intelligence Act reaches the final stage of negotiations. For all its big talk, it seems like the EU is buckling to Big Tech.

EU lawmakers have been tasked with developing the world’s first comprehensive law to regulate AI products. Now that AI systems are already being used in public life, lawmakers are rushing to catch up.

[…]

The principle of precaution urges us to exercise care and responsibility in the face of potential risks. It is crucial not only to foster innovation but also to prevent the unchecked expansion of AI from jeopardising justice and fundamental rights.

At the Left in the European Parliament, we called for this principle to be applied to the AI Act. Unfortunately, other political groups disagreed, prioritising the interests of Big Tech over those of the people. They settled on a three-tiered approach to risk whereby products are categorised into those that do not pose a significant risk, those that are high risk and those that are banned.

However, this approach contains a major loophole that risks undermining the entire legislation.

Like asking a tobacco company whether smoking is risky

When it was first proposed, the Commission outlined a list of ‘high-risk uses’ of AI, including AI systems used to select students, assess consumers’ creditworthiness, evaluate job-seekers, and determine who can access welfare benefits.

Using AI in these assessments has significant real-life consequences. It can mean the difference between being accepted or rejected to university, being able to take out a loan or even being able to access welfare to pay bills, rent or put food on the table.

Under the three-tiered approach, AI developers are allowed to decide for themselves whether their product is high-risk. This self-assessment loophole is akin to letting a tobacco company decide whether cigarettes are safe for our health, or a fossil fuel company declare that its fumes don’t harm the environment.

[…]

Experience shows us that when corporations have this kind of freedom, they prioritise their profits over the interests of people and the planet. If the development of AI is to be accountable and transparent, negotiators must eliminate provisions on self-assessment.

AI gives us the opportunity to change our lives for the better. But as long as we let big corporations make the rules, we will continue to replicate inequalities that are already ravaging our societies.

Source: A(I) deal at any cost: Will the EU buckle to Big Tech? – EURACTIV.com

OK, so this seems a little breathless – surely the EU could put in a mechanism to check an AI’s risk level when notified of a potential breach, with harsh penalties for misclassifying an AI?

However, the discussion around the EU AI Act – which had the potential to be one of the first and best pieces of AI regulation on the planet – has descended into farce since ChatGPT arrived, along with the strange idea that the original act had no provisions for general-purpose / foundational AI models (it did – they were high-risk models). The pointless debate this has provoked has only served to delay the AI Act coming into force by over a year – something big businesses are very, very happy to see.

A new way to predict ship-killing rogue waves, more importantly: to see how an AI finds its results

[…]

In a paper in Proceedings of the National Academy of Sciences, a group of researchers led by Dion Häfner, a computer scientist at the University of Copenhagen, describe a clever way to make AI more understandable. They have managed to build a neural network, use it to solve a tricky problem, and then capture its insights in a relatively simple five-part equation that human scientists can use and understand.

The researchers were investigating “rogue waves”, those that are much bigger than expected given the sea conditions in which they form. Maritime lore is full of walls of water suddenly swallowing ships. But it took until 1995 for scientists to measure such a wave—a 26-metre monster, amid other waves averaging 12 metres—off the coast of Norway, proving these tales to be tall only in the literal sense.

[…]

To produce something a human could follow, the researchers restricted their neural network to around a dozen inputs, each based on ocean-wave maths that scientists had already worked out. Knowing the physical meaning of each input meant the researchers could trace their paths through the network, helping them work out what the computer was up to.

The researchers trained 24 neural networks, each combining the inputs in different ways. They then chose the one that was the most consistent at making accurate predictions in a variety of circumstances, which turned out to rely on only five of the dozen inputs.

To generate a human-comprehensible equation, the researchers used a method inspired by natural selection in biology. They told a separate algorithm to come up with a slew of different equations using those five variables, with the aim of matching the neural network’s output as closely as possible. The best equations were mixed and combined, and the process was repeated. The result, eventually, was an equation that was simple and almost as accurate as the neural network. Both predicted rogue waves better than existing models.
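The mix-and-combine search described above is a form of symbolic regression via genetic programming. The sketch below is a minimal illustration of that loop, with a hand-picked linear formula standing in for the trained neural network’s output (the real work matched a network, not a known rule) and purely illustrative population sizes and mutation rates:

```python
import random

random.seed(0)

N_VARS = 5        # stand-ins for the five physical inputs the network kept
N_TERMS = 3
POP, GENS = 60, 40

# Stand-in target for the neural network's output (hypothetical coefficients).
def target(x):
    return 2.0 * x[0] - 0.5 * x[3] + 1.5 * x[4]

data = [[random.uniform(-1, 1) for _ in range(N_VARS)] for _ in range(200)]

def predict(eq, x):
    # An equation is a list of (coefficient, variable_index) terms, summed.
    return sum(c * x[i] for c, i in eq)

def mse(eq):
    return sum((predict(eq, x) - target(x)) ** 2 for x in data) / len(data)

def random_eq():
    return [(random.uniform(-3, 3), random.randrange(N_VARS)) for _ in range(N_TERMS)]

def crossover(a, b):
    # "Mix and combine": take each term from one of the two parent equations.
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(eq):
    out = list(eq)
    j = random.randrange(N_TERMS)
    c, i = out[j]
    if random.random() < 0.7:
        out[j] = (c + random.gauss(0, 0.3), i)       # nudge a coefficient
    else:
        out[j] = (c, random.randrange(N_VARS))       # swap in another variable
    return out

pop = [random_eq() for _ in range(POP)]
init_best = min(mse(e) for e in pop)
for _ in range(GENS):
    pop.sort(key=mse)
    elite = pop[: POP // 4]   # the equations that best match the "network"
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(POP - len(elite))]

best_mse = min(mse(e) for e in pop)
print("best MSE:", round(best_mse, 4), "vs initial", round(init_best, 4))
```

Because the best candidates are carried over unchanged each generation, the error of the best equation can only fall over time; the researchers’ version additionally searched over functional forms, not just coefficients.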

The first part of the equation rediscovered a bit of existing theory: it is an approximation of a well-known equation in wave dynamics. Other parts included some terms that the researchers suspected might be involved in rogue-wave formation but are not in standard models. There were some puzzlers, too: the final bit of the equation includes a term that is inversely proportional to how spread out the energy of the waves is. Current human theories include a second variable that the machine did not replicate. One explanation is that the network was not trained on a wide enough selection of examples. Another is that the machine is right, and the second variable is not actually necessary.

Better methods for predicting rogue waves are certainly useful: some can sink even the biggest ships. But the real prize is the visibility that Dr Häfner’s approach offers into what the neural network was doing. That could give scientists ideas for tweaking their own theories—and should make it easier to know whether to trust the computer’s predictions.

Source: A new way to predict ship-killing rogue waves

The AI startup behind Stable Diffusion is now testing generative video

Stable Diffusion’s generative art can now be animated, developer Stability AI announced. The company has released a new product called Stable Video Diffusion into a research preview, allowing users to create video from a single image. “This state-of-the-art generative AI video model represents a significant step in our journey toward creating models for everyone of every type,” the company wrote.

The new tool has been released in the form of two image-to-video models, each capable of generating videos 14 to 25 frames long at speeds between 3 and 30 frames per second, at 576 × 1024 resolution.

[…]

Stable Video Diffusion is available only for research purposes at this point, not real-world or commercial applications. Potential users can sign up to get on a waitlist for access to an “upcoming web experience featuring a text-to-video interface,” Stability AI wrote. The tool will showcase potential applications in sectors including advertising, education, entertainment and more.

[…]

It has some limitations, the company wrote: it generates relatively short videos (less than 4 seconds), lacks perfect photorealism, can’t do camera motion except slow pans, has no text control, can’t generate legible text and may not render people and faces properly.

The tool was trained on a dataset of millions of videos and then fine-tuned on a smaller set, with Stability AI only saying that it used video that was publicly available for research purposes.

[…]

Source: The AI startup behind Stable Diffusion is now testing generative video

Sarah Silverman’s AI Case Isn’t Going Very Well Either

Just a few weeks ago Judge William Orrick massively trimmed back the first big lawsuit that was filed against generative AI companies for training their works on copyright-covered materials. Most of the case was dismissed, and what bits remained may not last much longer. And now, it appears that Judge Vince Chhabria (who has been very good on past copyright cases) seems poised to do the same.

This is the high profile case brought by Sarah Silverman and some other authors, because some of the training materials used by OpenAI and Meta included their works. As we noted at the time, that doesn’t make it copyright infringing, and it appears the judge recognizes the large hill Silverman and the other authors have to climb here:

U.S. District Judge Vince Chhabria said at a hearing that he would grant Meta’s motion to dismiss the authors’ allegations that text generated by Llama infringes their copyrights. Chhabria also indicated that he would give the authors permission to amend most of their claims.

Meta has not yet challenged the authors’ central claim in the case that it violated their rights by using their books as part of the data used to train Llama.

“I understand your core theory,” Chhabria told attorneys for the authors. “Your remaining theories of liability I don’t understand even a little bit.”

Chhabria (who you may recall from the time he quashed the ridiculous copyright subpoena that tried to abuse copyright law to expose whoever exposed a billionaire’s mistress) seems rightly skeptical that just because ChatGPT can give you a summary of Silverman’s book that it’s somehow infringing:

“When I make a query of Llama, I’m not asking for a copy of Sarah Silverman’s book – I’m not even asking for an excerpt,” Chhabria said.

The authors also argued that Llama itself is an infringing work. Chhabria said the theory “would have to mean that if you put the Llama language model next to Sarah Silverman’s book, you would say they’re similar.”

“That makes my head explode when I try to understand that,” Chhabria said.

It’s good to see careful judges like Chhabria and Orrick getting into the details here. Of course, with so many of these lawsuits being filed, I’m still worried that some judge is going to make a mess of things, but we’ll see what happens.

Source: Sarah Silverman’s AI Case Isn’t Going Very Well Either | Techdirt

“Make It Real” AI prototype wows UI devs by turning drawings into working software

A collaborative whiteboard app maker called “tldraw” made waves online by releasing a prototype of a feature called “Make it Real” that lets users draw an image of software and bring it to life using AI. The feature uses OpenAI’s GPT-4V API to visually interpret a vector drawing and turn it into functioning Tailwind CSS and JavaScript web code that can replicate user interfaces or even create simple implementations of games like Breakout.

“I think I need to go lie down,” posted designer Kevin Cannon at the start of a viral X thread that featured the creation of functioning sliders that rotate objects on screen, an interface for changing object colors, and a working game of tic-tac-toe. Soon, others followed with demonstrations of drawing a clone of Breakout, creating a working dial clock that ticks, drawing the snake game, making a Pong game, interpreting a visual state chart, and much more.

Users can experiment with a live demo of Make It Real online. However, running it requires providing an API key from OpenAI, which is a security risk. If others intercept your API key, they could use it to rack up a very large bill in your name (OpenAI charges by the amount of data moving into and out of its API). Those technically inclined can run the code locally, but it will still require OpenAI API access.

Tldraw, developed by Steve Ruiz in London, is an open source collaborative whiteboard tool. It offers a basic infinite canvas for drawing, text, and media without requiring a login. Launched in 2021, the project received $2.7 million in seed funding and is supported by GitHub sponsors. When the GPT-4V API launched recently, Ruiz integrated a design prototype called “draw-a-ui” created by Sawyer Hood to bring the AI-powered functionality into tldraw.

GPT-4V is a version of OpenAI’s large language model that can interpret visual images and use them as prompts.  As AI expert Simon Willison explains on X, Make it Real works by “generating a base64 encoded PNG of the drawn components, then passing that to GPT-4 Vision” with a system prompt and instructions to turn the image into a file using Tailwind. In fact, here is the full system prompt that tells GPT-4V how to handle the inputs and turn them into functioning code:

const systemPrompt = `You are an expert web developer who specializes in tailwind css.
A user will provide you with a low-fidelity wireframe of an application.
You will return a single html file that uses HTML, tailwind css, and JavaScript to create a high fidelity website.
Include any extra CSS and JavaScript in the html file.
If you have any images, load them from Unsplash or use solid colored rectangles.
The user will provide you with notes in blue or red text, arrows, or drawings.
The user may also include images of other websites as style references. Transfer the styles as best as you can, matching fonts / colors / layouts.
They may also provide you with the html of a previous design that they want you to iterate from.
Carry out any changes they request from you.
In the wireframe, the previous design's html will appear as a white rectangle.
Use creative license to make the application more fleshed out.
Use JavaScript modules and unpkg to import any necessary dependencies.`
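As a rough illustration of the mechanism Willison describes, the sketch below base64-encodes image bytes into a data URL and assembles a chat-completion payload pairing a system prompt with the drawing. The payload shape follows OpenAI’s chat API with vision inputs; the model name, user text, and placeholder bytes are assumptions, and no request is actually sent:

```python
import base64
import json

# Abridged stand-in for the full system prompt quoted above.
system_prompt = "You are an expert web developer who specializes in tailwind css."

def make_it_real_payload(png_bytes: bytes) -> dict:
    # "Generating a base64 encoded PNG of the drawn components" as a data URL.
    data_url = "data:image/png;base64," + base64.b64encode(png_bytes).decode("ascii")
    return {
        "model": "gpt-4-vision-preview",
        "messages": [
            {"role": "system", "content": system_prompt},
            {
                "role": "user",
                "content": [
                    {"type": "text",
                     "text": "Turn this wireframe into a single HTML file."},
                    {"type": "image_url", "image_url": {"url": data_url}},
                ],
            },
        ],
        "max_tokens": 4096,
    }

# Placeholder bytes standing in for a real PNG export of the canvas.
payload = make_it_real_payload(b"\x89PNG placeholder")
print(json.dumps(payload)[:80])
```

In the real feature, this payload would be POSTed to the chat completions endpoint with the user’s own API key, and the returned HTML rendered back onto the canvas.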

As more people experiment with GPT-4V and combine it with other frameworks, we’ll likely see more novel applications of OpenAI’s vision-parsing technology emerging in the weeks ahead. Also on Wednesday, a developer used the GPT-4V API to create a live, real-time narration of a video feed by a fake AI-generated David Attenborough voice, which we have covered separately.

For now, it feels like we’ve been given a preview of a possible future mode of software development—or interface design, at the very least—where creating a working prototype is as simple as making a visual mock-up and having an AI model do the rest.

Source: “Make It Real” AI prototype wows devs by turning drawings into working software | Ars Technica

AI weather forecaster complements traditional models very well

Global medium-range weather forecasting is critical to decision-making across many social and economic domains. Traditional numerical weather prediction uses increased compute resources to improve forecast accuracy, but does not directly use historical weather data to improve the underlying model. Here, we introduce “GraphCast,” a machine learning-based method trained directly from reanalysis data. It predicts hundreds of weather variables, over 10 days at 0.25° resolution globally, in under one minute. GraphCast significantly outperforms the most accurate operational deterministic systems on 90% of 1380 verification targets, and its forecasts support better severe event prediction, including tropical cyclone tracking, atmospheric rivers, and extreme temperatures. GraphCast is a key advance in accurate and efficient weather forecasting, and helps realize the promise of machine learning for modeling complex dynamical systems.

[…]

The dominant approach for weather forecasting today is “numerical weather prediction” (NWP), which involves solving the governing equations of weather using supercomputers.

[…]

NWP methods are improved by highly trained experts innovating better models, algorithms, and approximations, which can be a time-consuming and costly process.
Machine learning-based weather prediction (MLWP) offers an alternative to traditional NWP, where forecast models can be trained from historical data, including observations and analysis data.
[…]
In medium-range weather forecasting, i.e., predicting atmospheric variables up to 10 days ahead, NWP-based systems like the IFS are still most accurate. The top deterministic operational system in the world is ECMWF’s High RESolution forecast (HRES), a configuration of IFS which produces global 10-day forecasts at 0.1° latitude/longitude resolution, in around an hour
[…]
Here we introduce an MLWP approach for global medium-range weather forecasting called “GraphCast,” which produces an accurate 10-day forecast in under a minute on a single Google Cloud TPU v4 device, and supports applications including predicting tropical cyclone tracks, atmospheric rivers, and extreme temperatures.
[…]
A single weather state is represented by a 0.25° latitude/longitude grid
[…]
GraphCast is implemented as a neural network architecture, based on GNNs in an “encode-process-decode” configuration (13, 17), with a total of 36.7 million parameters (code, weights and demos can be found at https://github.com/deepmind/graphcast).
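The “encode-process-decode” configuration can be illustrated with a toy sketch: an encoder pools grid-cell features into latent mesh-node states, a processor runs rounds of message passing on the mesh, and a decoder reads residual updates back out per grid cell. Everything here is a made-up stand-in (random weights, 12 grid cells, a fully connected 4-node mesh) rather than the real 36.7-million-parameter model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes: 12 "grid cells", 4 mesh nodes, 8-dim latent state.
N_GRID, N_MESH, D = 12, 4, 8
grid_to_mesh = rng.integers(0, N_MESH, size=N_GRID)  # mesh node each cell feeds

W_enc = 0.1 * rng.normal(size=(3, D))   # 3 toy variables per cell (e.g. T, p, wind)
W_msg = 0.1 * rng.normal(size=(D, D))
W_dec = 0.1 * rng.normal(size=(D, 3))

def encode(x):
    # Encoder: scatter-add each grid cell's embedding into its mesh node.
    h = np.zeros((N_MESH, D))
    np.add.at(h, grid_to_mesh, x @ W_enc)
    return np.tanh(h)

def process(h, steps=3):
    # Processor: rounds of message passing on a (here fully connected) mesh.
    for _ in range(steps):
        h = np.tanh(h + h.mean(axis=0, keepdims=True) @ W_msg)
    return h

def decode(h):
    # Decoder: read each grid cell's increment back from its mesh node.
    return h[grid_to_mesh] @ W_dec

def step(x):
    # One 6-hour step, predicted as a residual update to the current state.
    return x + decode(process(encode(x)))

state = rng.normal(size=(N_GRID, 3))    # toy initial weather state
for _ in range(40):                     # 40 steps of 6 h = a 10-day rollout
    state = step(state)
print(state.shape)
```

The autoregressive rollout in the last loop mirrors how GraphCast produces a 10-day forecast from 6-hour steps, feeding each prediction back in as the next input.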
[…]
During model development, we used 39 years (1979–2017) of historical data from ECMWF’s ERA5 (21) reanalysis archive.
[…]
Of the 227 variable and level combinations predicted by GraphCast at each grid point, we evaluated its skill versus HRES on 69 of them, corresponding to the 13 levels of WeatherBench (8) and variables (23) from the ECMWF Scorecard (24)
[…]
We find that GraphCast has greater weather forecasting skill than HRES when evaluated on 10-day forecasts at a horizontal resolution of 0.25° for latitude/longitude and at 13 vertical levels.
[NOTE HRES has a resolution of 0.1°]
[…]
We also compared GraphCast’s performance to the top competing ML-based weather model, Pangu-Weather (16), and found GraphCast outperformed it on 99.2% of the 252 targets they presented (see supplementary materials section 6 for details).
[…]
GraphCast’s forecast skill and efficiency compared to HRES shows MLWP methods are now competitive with traditional weather forecasting methods
[…]
With 36.7 million parameters, GraphCast is a relatively small model by modern ML standards, chosen to keep the memory footprint tractable. And while HRES is released on 0.1° resolution, 137 levels, and up to 1 hour time steps, GraphCast operated on 0.25° latitude-longitude resolution, 37 vertical levels, and 6 hour time steps, because of the ERA5 training data’s native 0.25° resolution, and engineering challenges in fitting higher resolution data on hardware.
[…]
Our approach should not be regarded as a replacement for traditional weather forecasting methods, which have been developed for decades, rigorously tested in many real-world contexts, and offer many features we have not yet explored. Rather our work should be interpreted as evidence that MLWP is able to meet the challenges of real-world forecasting problems and has potential to complement and improve the current best methods.
[…]

Source: Learning skillful medium-range global weather forecasting | Science

Brave rivals Bing and ChatGPT with new privacy-focused AI chatbot

Brave, the privacy-focused browser that automatically blocks unwanted ads and trackers, is rolling out Leo — a native AI assistant that the company claims provides “unparalleled privacy” compared to some other AI chatbot services. Following several months of testing, Leo is now available to use for free by all Brave desktop users running version 1.60 of the web browser. Leo is rolling out “in phases over the next few days” and will be available on Android and iOS “in the coming months.”

The core features of Leo aren’t too dissimilar from other AI chatbots like Bing Chat and Google Bard: it can translate, answer questions, summarize webpages, and generate new content. Brave says the benefits of Leo over those offerings are that it aligns with the company’s focus on privacy — conversations with the chatbot are not recorded or used to train AI models, and no login information is required to use it. As with other AI chatbots, however, Brave claims Leo’s outputs should be “treated with care for potential inaccuracies or errors.”

[…]

Source: Brave rivals Bing and ChatGPT with new privacy-focused AI chatbot – The Verge

EU Parliament Fails To Understand That The Right To Read Is The Right To Train. Understands the copyright lobby has money though.

Walled Culture recently wrote about an unrealistic French legislative proposal that would require the listing of all the authors of material used for training generative AI systems. Unfortunately, the European Parliament has inserted a similarly impossible idea in its text for the upcoming Artificial Intelligence (AI) Act. The DisCo blog explains that MEPs added new copyright requirements to the Commission’s original proposal:

These requirements would oblige AI developers to disclose a summary of all copyrighted material used to train their AI systems. Burdensome and impractical are the right words to describe the proposed rules.

In some cases it would basically come down to providing a summary of half the internet.

Leaving aside the impossibly large volume of material that might need to be summarized, another issue is that it is by no means clear when something is under copyright, making compliance even more infeasible. In any case, as the DisCo post rightly points out, the EU Copyright Directive already provides a legal framework that addresses the issue of training AI systems:

The existing European copyright rules are very simple: developers can copy and analyse vast quantities of data from the internet, as long as the data is publicly available and rights holders do not object to this kind of use. So, rights holders already have the power to decide whether AI developers can use their content or not.

This is a classic case of the copyright industry always wanting more, no matter how much it gets. When the EU Copyright Directive was under discussion, many argued that an EU-wide copyright exception for text and data mining (TDM) and AI in the form of machine learning would be hugely beneficial for the economy and society. But as usual, the copyright world insisted on its right to double dip, and to be paid again if copyright materials were used for mining or machine learning, even if a license had already been obtained to access the material.

As I wrote in a column five years ago, that’s ridiculous, because the right to read is the right to mine. Updated for our AI world, that can be rephrased as “the right to read is the right to train”. By failing to recognize that, the European Parliament has sabotaged its own AI Act. Its amendment to the text will make it far harder for AI companies to thrive in the EU, which will inevitably encourage them to set up shop elsewhere.

If the final text of the AI Act still has this requirement to provide a summary of all copyright material that is used for training, I predict that the EU will become a backwater for AI. That would be a huge loss for the region, because generative AI is widely expected to be one of the most dynamic and important new tech sectors. If that happens, backward-looking copyright dogma will once again have throttled a promising digital future, just as it has done so often in the recent past.

Source: EU Parliament Fails To Understand That The Right To Read Is The Right To Train | Techdirt

Judge dismisses most of artists’ AI copyright lawsuits against Midjourney, Stability AI

A judge in California federal court on Monday trimmed a lawsuit by visual artists who accuse Stability AI, Midjourney and DeviantArt of misusing their copyrighted work in connection with the companies’ generative artificial intelligence systems.

U.S. District Judge William Orrick dismissed some claims from the proposed class action brought by Sarah Andersen, Kelly McKernan and Karla Ortiz, including all of the allegations against Midjourney and DeviantArt. The judge said the artists could file an amended complaint against the two companies, whose systems utilize Stability’s Stable Diffusion text-to-image technology.

Orrick also dismissed McKernan and Ortiz’s copyright infringement claims entirely. The judge allowed Andersen to continue pursuing her key claim that Stability’s alleged use of her work to train Stable Diffusion infringed her copyrights.

The same allegation is at the heart of other lawsuits brought by artists, authors and other copyright owners against generative AI companies.

“Even Stability recognizes that determination of the truth of these allegations – whether copying in violation of the Copyright Act occurred in the context of training Stable Diffusion or occurs when Stable Diffusion is run – cannot be resolved at this juncture,” Orrick said.

The artists’ attorneys Joseph Saveri and Matthew Butterick said in a statement that their “core claim” survived, and that they were confident that they could address the court’s concerns about their other claims in an amended complaint to be filed next month.

A spokesperson for Stability declined to comment on the decision. Representatives for Midjourney and DeviantArt did not immediately respond to requests for comment.

The artists said in their January complaint that Stability used billions of images “scraped” from the internet, including theirs, without permission to teach Stable Diffusion to create its own images.

Orrick agreed with all three companies that the images the systems actually created likely did not infringe the artists’ copyrights. He allowed the claims to be amended but said he was “not convinced” that allegations based on the systems’ output could survive without showing that the images were substantially similar to the artists’ work.

The judge also dismissed other claims from the artists, including that the companies violated their publicity rights and competed with them unfairly, with permission to refile.

Orrick dismissed McKernan and Ortiz’s copyright claims because they had not registered their images with the U.S. Copyright Office, a requirement for bringing a copyright lawsuit.

The case is Andersen v. Stability AI Ltd, U.S. District Court for the Northern District of California, No. 3:23-cv-00201.

For the artists: Joseph Saveri of Joseph Saveri Law Firm; and Matthew Butterick

For Stability: Paul Schoenhard of Fried Frank Harris Shriver & Jacobson

For Midjourney: Angela Dunning of Cleary Gottlieb Steen & Hamilton

For DeviantArt: Andy Gass of Latham & Watkins

Read more:

Lawsuits accuse AI content creators of misusing copyrighted work

AI companies ask U.S. court to dismiss artists’ copyright lawsuit

US judge finds flaws in artists’ lawsuit against AI companies

Source: Judge pares down artists’ AI copyright lawsuit against Midjourney, Stability AI | Reuters

These suits are absolute nonsense. It’s like suing a person for having seen some art and made something a bit like it. It’s not very surprising that this has been wiped off the table.

AI Risks – doomsayers, warriors, reformers

There is no shortage of researchers and industry titans willing to warn us about the potential destructive power of artificial intelligence. Reading the headlines, one would hope that the rapid gains in AI technology have also brought forth a unifying realization of the risks—and the steps we need to take to mitigate them.

The reality, unfortunately, is quite different. Beneath almost all of the testimony, the manifestoes, the blog posts, and the public declarations issued about AI are battles among deeply divided factions. Some are concerned about far-future risks that sound like science fiction. Some are genuinely alarmed by the practical problems that chatbots and deepfake video generators are creating right now. Some are motivated by potential business revenue, others by national security concerns.

The result is a cacophony of coded language, contradictory views, and provocative policy demands that are undermining our ability to grapple with a technology destined to drive the future of politics, our economy, and even our daily lives.

These factions are in dialogue not only with the public but also with one another. Sometimes, they trade letters, opinion essays, or social threads outlining their positions and attacking others’ in public view. More often, they tout their viewpoints without acknowledging alternatives, leaving the impression that their enlightened perspective is the inevitable lens through which to view AI. But if lawmakers and the public fail to recognize the subtext of their arguments, they risk missing the real consequences of our possible regulatory and cultural paths forward.

To understand the fight and the impact it may have on our shared future, look past the immediate claims and actions of the players to the greater implications of their points of view. When you do, you’ll realize this isn’t really a debate only about AI. It’s also a contest about control and power, about how resources should be distributed and who should be held accountable.

Beneath this roiling discord is a true fight over the future of society. Should we focus on avoiding the dystopia of mass unemployment, a world where China is the dominant superpower or a society where the worst prejudices of humanity are embodied in opaque algorithms that control our lives? Should we listen to wealthy futurists who discount the importance of climate change because they’re already thinking ahead to colonies on Mars? It is critical that we begin to recognize the ideologies driving what we are being told. Resolving the fracas requires us to see through the specter of AI to stay true to the humanity of our values.

One way to decode the motives behind the various declarations is through their language. Because language itself is part of their battleground, the different AI camps tend not to use the same words to describe their positions. One faction describes the dangers posed by AI through the framework of safety, another through ethics or integrity, yet another through security, and others through economics. By decoding who is speaking and how AI is being described, we can explore where these groups differ and what drives their views.

The Doomsayers

The loudest perspective is a frightening, dystopian vision in which AI poses an existential risk to humankind, capable of wiping out all life on Earth. AI, in this vision, emerges as a godlike, superintelligent, ungovernable entity capable of controlling everything. AI could destroy humanity or pose a risk on par with nukes. If we’re not careful, it could kill everyone or enslave humanity. It’s likened to monsters like the Lovecraftian shoggoths, artificial servants that rebelled against their creators, or paper clip maximizers that consume all of Earth’s resources in a single-minded pursuit of their programmed goal. It sounds like science fiction, but these people are serious, and they mean the words they use.

These are the AI safety people, and their ranks include the “Godfathers of AI,” Geoff Hinton and Yoshua Bengio. For many years, these leading lights battled critics who doubted that a computer could ever mimic the capabilities of the human mind. Having steamrollered the public conversation by creating large language models like ChatGPT and other AI tools capable of increasingly impressive feats, they appear deeply invested in the idea that there is no limit to what their creations will be able to accomplish.

This doomsaying is boosted by a class of tech elite that has enormous power to shape the conversation. And some in this group are animated by the radical effective altruism movement and the associated cause of long-term-ism, which tend to focus on the most extreme catastrophic risks and emphasize the far-future consequences of our actions. These philosophies are hot among the cryptocurrency crowd, like the disgraced former billionaire Sam Bankman-Fried, who at one time possessed sudden wealth in search of a cause.

Reasonable sounding on their face, these ideas can become dangerous if stretched to their logical extremes. A dogmatic long-termer would willingly sacrifice the well-being of people today to stave off a prophesied extinction event like AI enslavement.

Many doomsayers say they are acting rationally, but their hype about hypothetical existential risks amounts to making a misguided bet with our future. In the name of long-term-ism, Elon Musk reportedly believes that our society needs to encourage reproduction among those with the greatest culture and intelligence (namely, his ultrarich buddies). And he wants to go further, such as limiting the right to vote to parents and even populating Mars. It’s widely believed that Jaan Tallinn, the wealthy long-termer who co-founded the most prominent centers for the study of AI safety, has made dismissive noises about climate change because he thinks that it pales in comparison with far-future unknown unknowns like risks from AI. The technology historian David C. Brock calls these fears “wishful worries”—that is, “problems that it would be nice to have, in contrast to the actual agonies of the present.”

More practically, many of the researchers in this group are proceeding full steam ahead in developing AI, demonstrating how unrealistic it is to simply hit pause on technological development. But the roboticist Rodney Brooks has pointed out that we will see the existential risks coming—the dangers will not be sudden and we will have time to change course. While we shouldn’t dismiss the Hollywood nightmare scenarios out of hand, we must balance them with the potential benefits of AI and, most important, not allow them to strategically distract from more immediate concerns. Let’s not let apocalyptic prognostications overwhelm us and smother the momentum we need to develop critical guardrails.

The Reformers

While the doomsayer faction focuses on the far-off future, its most prominent opponents are focused on the here and now. We agree with this group that there’s plenty already happening to cause concern: Racist policing and legal systems that disproportionately arrest and punish people of color. Sexist labor systems that rate feminine-coded résumés lower. Superpower nations automating military interventions as tools of imperialism and, someday, killer robots.

The alternative to the end-of-the-world, existential risk narrative is a distressingly familiar vision of dystopia: a society in which humanity’s worst instincts are encoded into and enforced by machines. The doomsayers think AI enslavement looks like The Matrix; the reformers point to modern-day contractors doing traumatic work at low pay for OpenAI in Kenya.

Propagators of these AI ethics concerns—like Meredith Broussard, Safiya Umoja Noble, Rumman Chowdhury, and Cathy O’Neil—have been raising the alarm on inequities coded into AI for years. Although we don’t have a census, it’s noticeable that many leaders in this cohort are people of color, women, and people who identify as LGBTQ. They are often motivated by insight into what it feels like to be on the wrong end of algorithmic oppression and by a connection to the communities most vulnerable to the misuse of new technology. Many in this group take an explicitly social perspective: When Joy Buolamwini founded an organization to fight for equitable AI, she called it the Algorithmic Justice League. Ruha Benjamin called her organization the Ida B. Wells Just Data Lab.

Others frame efforts to reform AI in terms of integrity, calling for Big Tech to adhere to an oath to consider the benefit of the broader public alongside—or even above—their self-interest. They point to social media companies’ failure to control hate speech or how online misinformation can undermine democratic elections. Adding urgency for this group is that the very companies driving the AI revolution have, at times, been eliminating safeguards. A signal moment came when Timnit Gebru, a co-leader of Google’s AI ethics team, was dismissed for pointing out the risks of developing ever-larger AI language models.

While doomsayers and reformers share the concern that AI must align with human interests, reformers tend to push back hard against the doomsayers’ focus on the distant future. They want to wrestle the attention of regulators and advocates back toward present-day harms that are exacerbated by AI misinformation, surveillance, and inequity. Integrity experts call for the development of responsible AI, for civic education to ensure AI literacy and for keeping humans front and center in AI systems.

This group’s concerns are well documented and urgent—and far older than modern AI technologies. Surely, we are a civilization big enough to tackle more than one problem at a time; even those worried that AI might kill us in the future should still demand that it not profile and exploit us in the present.

The Warriors

Other groups of prognosticators cast the rise of AI through the language of competitiveness and national security. One version has a post-9/11 ring to it—a world where terrorists, criminals, and psychopaths have unfettered access to technologies of mass destruction. Another version is a Cold War narrative of the United States losing an AI arms race with China and its surveillance-rich society.

Some arguing from this perspective are acting on genuine national security concerns, and others have a simple motivation: money. These perspectives serve the interests of American tech tycoons as well as the government agencies and defense contractors they are intertwined with.

OpenAI’s Sam Altman and Meta’s Mark Zuckerberg, both of whom lead dominant AI companies, are pushing for AI regulations that they say will protect us from criminals and terrorists. Such regulations would be expensive to comply with and are likely to preserve the market position of leading AI companies while restricting competition from start-ups. In the lobbying battles over Europe’s trailblazing AI regulatory framework, US megacompanies pleaded to exempt their general-purpose AI from the tightest regulations, and whether and how to apply high-risk compliance expectations on noncorporate open-source models emerged as a key point of debate. All the while, some of the moguls investing in upstart companies are fighting the regulatory tide. The Inflection AI co-founder Reid Hoffman argued, “The answer to our challenges is not to slow down technology but to accelerate it.”

Any technology critical to national defense usually has an easier time avoiding oversight, regulation, and limitations on profit. Any readiness gap in our military demands urgent budget increases and funds distributed to the military branches and their contractors, because we may soon be called upon to fight. Tech moguls like Google’s former chief executive Eric Schmidt, who has the ear of many lawmakers, warn American policymakers about the Chinese threat even as they invest in US national security ventures.

The warriors’ narrative overlooks how much science and engineering have changed since the mid-twentieth century. AI research is fundamentally international; no one country will win a monopoly. And while national security is important to consider, we must also be mindful of the self-interest of those positioned to benefit financially.


As the science-fiction author Ted Chiang has said, fears about the existential risks of AI are really fears about the threat of uncontrolled capitalism, and dystopias like the paper clip maximizer are just caricatures of every start-up’s business plan. Cosma Shalizi and Henry Farrell further argue that “we’ve lived among shoggoths for centuries, tending to them as though they were our masters” as monopolistic platforms devour and exploit the totality of humanity’s labor and ingenuity for their own interests. This dread applies as much to our future with AI as it does to our past and present with corporations.

Regulatory solutions do not need to reinvent the wheel. Instead, we need to double down on the rules that we know limit corporate power. We need to get more serious about establishing good and effective governance on all the issues we lost track of while we were becoming obsessed with AI, China, and the fights picked among robber barons.

By analogy to the healthcare sector, we need an AI public option to truly keep AI companies in check. A publicly directed AI development project would serve to counterbalance for-profit corporate AI and help ensure an even playing field for access to the twenty-first century’s key technology while offering a platform for the ethical development and use of AI.

Also, we should embrace the humanity behind AI. We can hold founders and corporations accountable by mandating greater AI transparency in the development stage, in addition to applying legal standards for actions associated with AI. Remarkably, this is something that both the left and the right can agree on.

Ultimately, we need to make sure the network of laws and regulations that govern our collective behavior is knit more strongly, with fewer gaps and greater ability to hold the powerful accountable, particularly in those areas most sensitive to our democracy and environment. As those with power and privilege seem poised to harness AI to accumulate much more or pursue extreme ideologies, let’s think about how we can constrain their influence in the public square rather than cede our attention to their most bombastic nightmare visions for the future.

This essay was written with Nathan Sanders, and previously appeared in the New York Times.

Source: AI Risks – Schneier on Security

Universal Music sues AI start-up Anthropic for scraping song lyrics – will they come after you for having read the lyrics or memorised the song next?

Universal Music has filed a copyright infringement lawsuit against artificial intelligence start-up Anthropic, as the world’s largest music group battles against chatbots that churn out its artists’ lyrics.

Universal and two other music companies allege that Anthropic scrapes their songs without permission and uses them to generate “identical or nearly identical copies of those lyrics” via Claude, its rival to ChatGPT.

When Claude is asked for lyrics to the song “I Will Survive” by Gloria Gaynor, for example, it responds with “a nearly word-for-word copy of those lyrics,” Universal, Concord, and ABKCO said in a filing with a US court in Nashville, Tennessee.

“This copyrighted material is not free for the taking simply because it can be found on the Internet,” the music companies said, while claiming that Anthropic had “never even attempted” to license their copyrighted work.

[…]

Universal earlier this year asked Spotify and other streaming services to cut off access to its music catalogue for developers using it to train AI technology.

Source: Universal Music sues AI start-up Anthropic for scraping song lyrics | Ars Technica

So don’t think about memorising or even listening to copyrighted material from them because apparently they will come after you with the mighty and crazy arm of the law!

IBM chip speeds up AI by combining processing and memory in the core


IBM’s massive NorthPole processor chip eliminates the need to frequently access external memory, and so performs tasks such as image recognition faster than existing architectures do — while consuming vastly less power.

“Its energy efficiency is just mind-blowing,” says Damien Querlioz, a nanoelectronics researcher at the University of Paris-Saclay in Palaiseau. The work, published in Science, shows that computing and memory can be integrated on a large scale, he says. “I feel the paper will shake the common thinking in computer architecture.”

NorthPole runs neural networks: multi-layered arrays of simple computational units programmed to recognize patterns in data. A bottom layer takes in data, such as the pixels in an image; each successive layer detects patterns of increasing complexity and passes information on to the next layer. The top layer produces an output that, for example, can express how likely an image is to contain a cat, a car or other objects.
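The layered pattern-matching described above can be illustrated with a minimal forward pass. This is a generic toy network in NumPy with random, untrained weights — purely a sketch of the layer-by-layer flow, not NorthPole’s actual programming model:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Toy 3-layer network: pixels -> simple features -> richer features -> classes.
# Weights are random here; a trained network would have learned them from data.
W1 = rng.normal(size=(16, 64))   # bottom layer: 64 "pixels" -> 16 simple features
W2 = rng.normal(size=(8, 16))    # middle layer: 16 -> 8 more complex features
W3 = rng.normal(size=(3, 8))     # top layer: 8 features -> 3 class scores

def forward(pixels):
    h1 = relu(W1 @ pixels)       # bottom layer takes in raw data
    h2 = relu(W2 @ h1)           # each successive layer detects richer patterns
    return softmax(W3 @ h2)      # output: probabilities for e.g. cat/car/other

probs = forward(rng.random(64))
print(probs)
```

Each `@` is a dense matrix multiply — exactly the memory-hungry operation that NorthPole accelerates by keeping weights next to the compute units.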

[…]

NorthPole is made of 256 computing units, or cores, each of which contains its own memory. “You’re mitigating the Von Neumann bottleneck within a core,” says Modha, who is IBM’s chief scientist for brain-inspired computing at the company’s Almaden research centre in San Jose.

The cores are wired together in a network inspired by the white-matter connections between parts of the human cerebral cortex, Modha says. This and other design principles — most of which existed before but had never been combined in one chip — enable NorthPole to beat existing AI machines by a substantial margin in standard benchmark tests of image recognition. It also uses one-fifth of the energy of state-of-the-art AI chips, despite not using the most recent and most miniaturized manufacturing processes. If the NorthPole design were implemented with the most up-to-date manufacturing process, its efficiency would be 25 times better than that of current designs, the authors estimate.

[…]

NorthPole brings memory units as physically close as possible to the computing elements in the core. Elsewhere, researchers have been developing more-radical innovations using new materials and manufacturing processes. These enable the memory units themselves to perform calculations, which in principle could boost both speed and efficiency even further.

Another chip, described last month, does in-memory calculations using memristors, circuit elements able to switch between being a resistor and a conductor. “Both approaches, IBM’s and ours, hold promise in mitigating latency and reducing the energy costs associated with data transfers.”

[…]

Another approach, developed by several teams — including one at a separate IBM lab in Zurich, Switzerland — stores information by changing a circuit element’s crystal structure. It remains to be seen whether these newer approaches can be scaled up economically.

Source: ‘Mind-blowing’ IBM chip speeds up AI

Google’s AI stoplight program leads to fewer stops and lower emissions

It’s been two years since Google first debuted Project Green Light, a novel means of addressing the street-level pollution caused by vehicles idling at stop lights.

[…]

Green Light uses machine learning systems to comb through Maps data to calculate the amount of traffic congestion present at a given light, as well as the average wait times of vehicles stopped there.
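Google has not published Green Light’s internals, but the two per-intersection statistics the article names can be sketched from hypothetical per-vehicle stop records. The timestamps and the 30-second congestion threshold below are invented for illustration:

```python
# Hypothetical records at one signal: (arrival_s, departure_s) per stopped vehicle.
# Green Light's real inputs are aggregated Maps driving trends; this sketch only
# illustrates the two statistics named in the article.
stops = [(0.0, 35.0), (10.0, 38.0), (60.0, 61.0), (90.0, 130.0)]

def wait_stats(stops):
    waits = [dep - arr for arr, dep in stops]
    avg_wait = sum(waits) / len(waits)
    # Crude congestion proxy: share of vehicles delayed more than 30 seconds.
    congestion = sum(w > 30 for w in waits) / len(waits)
    return avg_wait, congestion

avg_wait, congestion = wait_stats(stops)
print(avg_wait, congestion)  # 26.0 0.5
```

A timing recommendation would then try to shift green phases so that both numbers fall across the whole corridor, not just at one light.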

[…]

When the program was first announced in 2021, it had been pilot-tested at only four intersections in Israel, in partnership with the Israel National Roads Company, but Google reportedly observed a “10 to 20 percent reduction in fuel and intersection delay time” during those tests. The pilot program has grown since then, spreading to a dozen partner cities around the world, including Rio de Janeiro, Brazil; Manchester, England; and Jakarta, Indonesia.

“Today we’re happy to share that… we plan to scale to more cities in 2024,” Yael Maguire, Google VP of Geo Sustainability, told reporters during a pre-brief event last week. “Early numbers indicate a potential for us to see a 30 percent reduction in stops.”

[…]

“Our AI recommendations work with existing infrastructure and traffic systems,” Maguire continued. “City engineers are able to monitor the impact and see results within weeks.” Maguire also noted that the Manchester test reportedly saw emissions fall and air quality improve by as much as 18 percent. The company also touted the efficacy of its Maps routing in reducing emissions, with Maguire pointing out that it had “helped prevent more than 2.4 million metric tons of carbon emissions — the equivalent of taking about 500,000 fuel-based cars off the road for an entire year.”
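The cars-off-the-road equivalence is easy to sanity-check: it implies 4.8 metric tons of CO2 per car per year, in the same ballpark as the EPA’s commonly cited figure of roughly 4.6 metric tons for a typical passenger vehicle:

```python
avoided_tonnes = 2_400_000   # metric tons of CO2, per the article
cars_equivalent = 500_000    # fuel-based cars off the road for a year

tonnes_per_car = avoided_tonnes / cars_equivalent
print(tonnes_per_car)  # 4.8
```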

Source: Google’s AI stoplight program is now calming traffic in a dozen cities worldwide

Adobe previews AI upscaling to make blurry videos and GIFs look fresh

Adobe has developed an experimental AI-powered upscaling tool that greatly improves the quality of low-resolution GIFs and video footage. This isn’t a fully fledged app or feature yet, and it’s not yet available for beta testing, but if the demonstrations seen by The Verge are anything to go by, then it has some serious potential.

Adobe’s “Project Res-Up” uses diffusion-based upsampling technology (a class of generative AI that generates new data based on the data it’s trained on) to increase video resolution while simultaneously improving sharpness and detail.

In a side-by-side comparison that shows how the tool can upscale video resolution, Adobe took a clip from The Red House (1947) and upscaled it from 480 x 360 to 1280 x 960, increasing the total pixel count more than sevenfold. The resulting footage was much sharper, with the AI removing most of the blurriness and even adding in new details like hair strands and highlights. The results still carried a slightly unnatural look (as many AI videos and images do), but given the low initial video quality, it’s still an impressive leap compared to the upscaling on Nvidia’s Shield TV or Microsoft’s Video Super Resolution.
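The resolution jump can be checked with quick arithmetic:

```python
src_w, src_h = 480, 360      # original clip resolution
dst_w, dst_h = 1280, 960     # upscaled resolution

ratio = (dst_w * dst_h) / (src_w * src_h)
increase_pct = (ratio - 1) * 100
print(round(ratio, 2), round(increase_pct, 1))  # 7.11 611.1
```

So the upscaler is inventing roughly six new pixels for every original one, which is why plausible-but-artificial detail like hair strands appears.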

The footage below provided by Adobe matches what I saw in the live demonstration:

[Left: original, Right: upscaled] Running this clip from The Red House (1947) through Project Res-Up removes most of the blur and makes details like the character’s hair and eyes much sharper. Image: The Red House (1947) / United Artists / Adobe

Another demonstration showed a video being cropped to focus on a baby elephant, with the upscaling tool similarly boosting the low-resolution crop and eradicating most of the blur while also adding little details like skin wrinkles. It really does look as though the tool is sharpening low-contrast details that can’t be seen in the original footage. Impressively, the artificial wrinkles move naturally with the animal without looking overly artificial. Adobe also showed Project Res-Up upscaling GIFs to breathe some new life into memes you haven’t used since the days of MySpace.

[Left: original, Right: upscaled] Additional texture has been applied to this baby elephant to make the upscaled footage appear more natural and lifelike. Image: Adobe

The project will be revealed during the “Sneaks” section of the Adobe Max event later today, which the creative software giant uses to showcase future technologies and ideas that could potentially join Adobe’s product lineup. That means you won’t be able to try out Project Res-Up on your old family videos (yet), but its capabilities could eventually make their way into popular editing apps like Adobe Premiere Pro or Express. Previous Adobe Sneaks have since been released as apps and features, like Adobe Fresco and Photoshop’s Content-Aware Fill tool.

Source: Adobe previews AI upscaling to make blurry videos and GIFs look fresh – The Verge

New Fairy Circles Identified at Hundreds of Sites Worldwide

Round discs of dirt known as “fairy circles” mysteriously appear like polka dots on the ground and can spread out for miles. The origin of this phenomenon has intrigued scientists for decades, and recent research indicates that fairy circles may be more widespread than previously thought.

Fairy circles in NamibRand Nature Reserve in Namibia; Photo: N. Juergens/AAAS/Science

Fairy circles have previously been sighted only in southern Africa’s Namib Desert and the outback of Western Australia. A recently published study used artificial intelligence to identify vegetation patterns resembling fairy circles in hundreds of new locations across 15 countries on three continents.

Published in the journal Proceedings of the National Academy of Sciences, the new survey analyzed datasets containing high-resolution satellite images of drylands and arid ecosystems with scant rainfall from around the world.

Examining the new findings may help scientists understand fairy circles and the origins of their formation on a global scale. The researchers searched for patterns resembling fairy circles using a neural network, a type of AI that processes information in a manner similar to the human brain.

“The use of artificial intelligence based models on satellite imagery is the first time it has been done on a large scale to detect fairy-circle like patterns,” said lead study author Dr. Emilio Guirado, a data scientist with the Multidisciplinary Institute for Environmental Studies at the University of Alicante in Spain.

Drone flies over the NamibRand Nature Reserve; Photo: Dr. Stephan Getzin

The scientists first trained the neural network to recognize fairy circles by inputting more than 15,000 satellite images taken over Namibia and Australia. Then they fed the trained model satellite views of nearly 575,000 plots of land worldwide, each measuring approximately 2.5 acres.

The neural network scanned vegetation in those images and identified repeating circular patterns that resembled fairy circles, evaluating the circles’ shapes, sizes, locations, pattern densities, and distribution. The output was then reviewed by humans to double-check the work of the neural network.
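The study’s actual detector is a trained neural network whose architecture the article doesn’t describe. Purely as an illustration of how circular patterns can be located in imagery, here is a much simpler matched-filter sketch in NumPy (the tile, sizes, and radius are all invented): a filled-disc template with an inhibitory surround is slid over a binary image, and the strongest response marks a candidate circle centre.

```python
import numpy as np

def disc_kernel(radius, surround=3):
    """Filled-disc template with an inhibitory surround:
    +1 inside the disc, -1 in a ring just outside it, 0 elsewhere."""
    size = 2 * (radius + surround) + 1
    c = size // 2
    yy, xx = np.mgrid[:size, :size]
    d = np.hypot(yy - c, xx - c)
    return np.where(d <= radius, 1.0, np.where(d <= radius + surround, -1.0, 0.0))

def best_circle_center(image, radius):
    """Slide the template over the image; return the best-matching pixel."""
    k = disc_kernel(radius)
    kh, kw = k.shape
    pad = kh // 2
    padded = np.pad(image, pad)
    scores = np.empty_like(image, dtype=float)
    for y in range(image.shape[0]):
        for x in range(image.shape[1]):
            scores[y, x] = (padded[y:y + kh, x:x + kw] * k).sum()
    return np.unravel_index(scores.argmax(), scores.shape)

# Synthetic 64x64 "satellite tile": one circular bare patch, radius 10, at (40, 22).
yy, xx = np.mgrid[:64, :64]
tile = (np.hypot(yy - 40, xx - 22) <= 10).astype(float)

cy, cx = best_circle_center(tile, radius=10)
print(int(cy), int(cx))  # 40 22
```

The surround ring penalizes off-centre matches, so the score peaks only where a disc of roughly the right size sits; a learned model does something analogous but with filters fitted to real fairy-circle imagery rather than a hand-drawn template.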

“We had to manually discard some artificial and natural structures that were not fairy circles based on photo-interpretation and the context of the area,” Guirado explained.

The results of the study showed 263 dryland locations that contained circular patterns similar to the fairy circles in Namibia and Australia. The spots were located in Africa, Madagascar, Midwestern Asia, and both central and Southwest Australia.

New fairy circles identified around the world; Photo: Thomas Dressler/imageBROKER/Shutterstock

The authors of the study also collected environmental data from the sites where the new circles were identified, in hopes that it may indicate what causes them to form. They determined that fairy circle-like patterns were most likely to occur in dry, sandy, highly alkaline soils low in nitrogen. They also found that these patterns helped stabilize ecosystems, increasing an area’s resistance to disturbances such as extreme droughts and floods.

There are many theories among experts about how fairy circles form: they may be caused by particular climate conditions, self-organization in plants, or insect activity, among other possibilities. The authors of the new study are optimistic that the new findings will help unlock the mysteries of this unique phenomenon.

Source: New Fairy Circles Identified at Hundreds of Sites Worldwide – TOMORROW’S WORLD TODAY®