New York Times Sues OpenAI and Microsoft Over Reading Publicly Available Information

The New York Times sued OpenAI and Microsoft for copyright infringement on Wednesday, opening a new front in the increasingly intense legal battle over the unauthorized use of published work to train artificial intelligence technologies.

The Times is the first major American media organization to sue the companies, the creators of ChatGPT and other popular A.I. platforms, over copyright issues associated with its written works. The lawsuit, filed in Federal District Court in Manhattan, contends that millions of articles published by The Times were used to train automated chatbots that now compete with the news outlet as a source of reliable information.

The suit does not include an exact monetary demand. But it says the defendants should be held responsible for “billions of dollars in statutory and actual damages” related to the “unlawful copying and use of The Times’s uniquely valuable works.” It also calls for the companies to destroy any chatbot models and training data that use copyrighted material from The Times.

In its complaint, The Times said it approached Microsoft and OpenAI in April to raise concerns about the use of its intellectual property and explore “an amicable resolution,” possibly involving a commercial agreement and “technological guardrails” around generative A.I. products. But it said the talks had not produced a resolution.

An OpenAI spokeswoman, Lindsey Held, said in a statement that the company had been “moving forward constructively” in conversations with The Times and that it was “surprised and disappointed” by the lawsuit.

“We respect the rights of content creators and owners and are committed to working with them to ensure they benefit from A.I. technology and new revenue models,” Ms. Held said. “We’re hopeful that we will find a mutually beneficial way to work together, as we are doing with many other publishers.”

[…]

Source: New York Times Sues OpenAI and Microsoft Over Use of Copyrighted Work – The New York Times

Well, if they didn’t want anyone to read it – which is really what an AI is doing, just as much as you or I do – then they should have put the content behind a paywall.

UK Police to be able to run AI face recognition searches on all driving licence holders

The police will be able to run facial recognition searches on a database containing images of Britain’s 50 million driving licence holders under a law change being quietly introduced by the government.

Should the police wish to put a name to an image collected on CCTV, or shared on social media, the legislation would provide them with the powers to search driving licence records for a match.

The move, contained in a single clause in a new criminal justice bill, could put every driver in the country in a permanent police lineup, according to privacy campaigners.

[…]

The intention to allow the police or the National Crime Agency (NCA) to exploit the UK’s driving licence records is not explicitly referenced in the bill or in its explanatory notes, raising criticism from leading academics that the government is “sneaking it under the radar”.

Once the criminal justice bill is enacted, the home secretary, James Cleverly, must establish “driver information regulations” to enable the searches, but he will need only to consult police bodies, according to the bill.

Critics claim facial recognition technology poses a threat to the rights of individuals to privacy, freedom of expression, non-discrimination and freedom of assembly and association.

Police are increasingly using live facial recognition, which compares a live camera feed of faces against a database of known identities, at major public events such as protests.

Prof Peter Fussey, a former independent reviewer of the Met’s use of facial recognition, said there was insufficient oversight of the use of facial recognition systems, with ministers worryingly silent over studies that showed the technology was prone to falsely identifying black and Asian faces.

[…]

The EU had considered making images on its member states’ driving licence records available on the Prüm crime fighting database. The proposal was dropped earlier this year as it was said to represent a disproportionate breach of privacy.

[…]

Carole McCartney, a professor of law and criminal justice at the University of Leicester, said the lack of consultation over the change in law raised questions over the legitimacy of the new powers.

She said: “This is another slide down the ‘slippery slope’ of allowing police access to whatever data they so choose – with little or no safeguards. Where is the public debate? How is this legitimate if the public don’t accept the use of the DVLA and passport databases in this way?”

The government scrapped the role of the commissioner for the retention and use of biometric material and the office of surveillance camera commissioner this summer, leaving ministers without an independent watchdog to scrutinise such legislative changes.

[…]

In 2020, the court of appeal ruled that South Wales police’s use of facial recognition technology had breached privacy rights, data protection laws and equality laws, given the risk the technology could have a race or gender bias.

The force has continued to use the technology. Live facial recognition is to be deployed this year to match people attending Christmas markets against a watchlist.

Katy Watts, a lawyer at the civil rights advocacy group Liberty, said: “This is a shortcut to widespread surveillance by the state and we should all be worried by it.”

Source: Police to be able to run face recognition searches on 50m driving licence holders | Facial recognition | The Guardian

AI cannot be patent ‘inventor’, UK Supreme Court rules in landmark case – but a company can

A U.S. computer scientist on Wednesday lost his bid to register patents over inventions created by his artificial intelligence system in a landmark case in Britain about whether AI can own patent rights.

Stephen Thaler wanted to be granted two patents in the UK for inventions he says were devised by his “creativity machine” called DABUS.

His attempt to register the patents was refused by the UK’s Intellectual Property Office (IPO) on the grounds that the inventor must be a human or a company, rather than a machine.

Thaler appealed to the UK’s Supreme Court, which on Wednesday unanimously rejected his appeal as under UK patent law “an inventor must be a natural person”.

Judge David Kitchin said in the court’s written ruling that the case was “not concerned with the broader question whether technical advances generated by machines acting autonomously and powered by AI should be patentable”.

Thaler’s lawyers said in a statement that the ruling “establishes that UK patent law is currently wholly unsuitable for protecting inventions generated autonomously by AI machines and as a consequence wholly inadequate in supporting any industry that relies on AI in the development of new technologies”.

‘LEGITIMATE QUESTIONS’

A spokesperson for the IPO welcomed the decision “and the clarification it gives as to the law as it stands in relation to the patenting of creations of artificial intelligence machines”.

They added that there are “legitimate questions as to how the patent system and indeed intellectual property more broadly should handle such creations” and the government will keep this area of law under review.

[…]

“The judgment does not preclude a person using an AI to devise an invention – in such a scenario, it would be possible to apply for a patent provided that person is identified as the inventor.”

In a separate case last month, London’s High Court ruled that artificial neural networks can attract patent protection under UK law.

Source: AI cannot be patent ‘inventor’, UK Supreme Court rules in landmark case | Reuters

Somehow it sits strangely that a company can be a ‘natural person’ but an AI cannot.

AI Act: French govt accused of being influenced by lobbyist with conflict of interests by senators in the pockets of copyright giants. Which surprises no-one watching the AI act process.

French senators criticised the government’s stance in the AI Act negotiations, particularly a lack of copyright protection and the influence of a lobbyist with alleged conflicts of interest, former digital state secretary Cédric O.

The EU AI Act is set to become the world’s first regulation of artificial intelligence. Since the emergence of AI models, such as GPT-4, used by the AI system ChatGPT, EU policymakers have been working on regulating these powerful “foundation” models.

“We know that Cédric O and Mistral influenced the French government’s position regarding the AI regulation bill of the European Commission, attempting to weaken it”, said Catherine Morin-Desailly, a centrist senator, during the government’s question time on Wednesday (20 December).

“The press reported on the spectacular enrichment of the former digital minister, Cédric O. He entered the company Mistral, where the interests of American companies and investment funds are prominently represented. This financial operation is causing shock within the Intergovernmental Committee on AI you have established, Madam Prime Minister,” she continued.

The accusations were vehemently denied by the incumbent Digital Minister Jean-Noël Barrot: “It is the High Authority for Transparency in Public Life that ensures the absence of conflicts of interest among former government members.”

Moreover, Barrot denied the allegations that France has been the spokesperson of private interests, arguing that the government: “listened to all stakeholders as it is customary and relied solely on the general interest as our guiding principle.”

[…]

Barrot was criticised in a Senate hearing earlier the same day by Pascal Rogard, director of  the Society of Dramatic Authors and Composers, who said that “for the first time, France, through the medium of Jean-Noël Barrot […] has neither supported culture, the creation industry, or copyrights.”

Morin-Desailly then said that she questioned the French stance on AI, which, in her view, is aligned with the position of US big tech companies.

Drawing a parallel between big tech’s position in this AI copyright debate and the Directive on Copyright in the Digital Single Market, Rogard said that since the directive came into force he had not “observed any damage to [big tech]’s business activities”.

[…]

“Trouble was stirred by the renowned Cédric O, who sits on the AI Intergovernmental Committee and still wields a lot of influence, notably with the President of the Republic”, stated Morin-Desailly earlier the same day at the Senate hearing with Rogard. Other sitting Senators joined Morin-Desailly in criticising the French position, and O.

Given O’s influential position in the government, the High Authority for Transparency in Public Life barred him, for a three-year period, from lobbying the government or owning shares in tech-sector companies.

Yet, according to Capital, O bought shares through his consulting agency in Mistral AI. Capital revealed O invested €176.1, which is now valued at €23 million, thanks to the company’s last investment round in December.

Moreover, since September, O has sat on the Committee on generative artificial intelligence, advising the government on its position towards AI.

[…]

 

Source: AI Act: French government accused of being influenced by lobbyist with conflict of interests

Magic: The Gathering Bans the Use of Generative AI in ‘Final’ Products – Wizards of the Coast cancelled themselves

[…] a D&D artist confirmed they had used generative AI programs to finish several pieces of art included in the sourcebook Glory of the Giants—saw Wizards of the Coast publicly ban the use of AI tools in the process of creating art for the venerable TTRPG. Now, the publisher is making that clearer for its other wildly successful game in Magic: The Gathering.

Update 12/19 11.20PM ET: This post has been updated to include clarification from Wizards of the Coast regarding the extent of guidelines for creatives working with Magic and D&D and the use of Generative A.I.

“For 30 years, Magic: The Gathering has been built on the innovation, ingenuity, and hard work of talented people who sculpt a beautiful, creative game. That isn’t changing,” a new statement shared by Wizards of the Coast on Daily MTG begins. “Our internal guidelines remain the same with regard to artificial intelligence tools: We require artists, writers, and creatives contributing to the Magic TCG to refrain from using AI generative tools to create final Magic products. We work with some of the most talented artists and creatives in the world, and we believe those people are what makes Magic great.”

[…]

The Magic statement also comes in the wake of major layoffs at Wizards’ parent company Hasbro. Last week the Wall Street Journal reported that Hasbro plans to lay off 1,100 staff over the next six months across its divisions in a series of cost-cutting measures, with many creatives across Wizards’ D&D and Magic teams confirming they were part of the layoffs. Just this week, the company faced backlash for opening a position for a Digital Artist at Wizards of the Coast in the wake of the job cuts, which totaled roughly a fifth of Hasbro’s current workforce across all of its divisions.

The job description specifically highlights that the role includes having to “refine and modify illustrative artwork for print and digital media through retouching, color correction, adjusting ink density, re-sizing, cropping, generating clipping paths, and hand-brushing spot plate masks,” as well as “use… digital retouching wizardry to extend cropped characters and adjust visual elements due to legal and art direction requirements,” which critics suggested carried the implication that the role would involve iterating on and polishing art created through generative AI. Whether or not this will be the case considering Wizards’ now-publicized stance remains to be seen.

Source: Magic: The Gathering Formally Bans the Use of Generative AI in ‘Final’ Products

The Gawker company is very anti AI and keeps mentioning backlash. It’s quite funny that if you look at the supposed “backlash”, the complaints are mostly about the lack of quality control around said art – in as much as people thought the points raised were valid at all (source: twitter page with original disclosure). It’s a kind of cancel culture cave-in, where a minority gets to play the role of judge, jury and executioner and the person being cancelled actually… listens to the canceller with no actual evidence of their crime being presented or weighed independently.

AI trained on millions of life stories can predict risk of early death

An artificial intelligence trained on personal data covering the entire population of Denmark can predict people’s chances of dying more accurately than any existing model, even those used in the insurance industry. The researchers behind the technology say it could also have a positive impact in early prediction of social and health problems – but must be kept out of the hands of big business.

Sune Lehmann Jørgensen at the Technical University of Denmark and his colleagues used a rich dataset from Denmark that covers education, visits to doctors and hospitals, any resulting diagnoses, income and occupation for 6 million people from 2008 to 2020.

They converted this dataset into words that could be used to train a large language model, the same technology that powers AI apps such as ChatGPT. These models work by looking at a series of words and determining which word is statistically most likely to come next, based on vast amounts of examples. In a similar way, the researchers’ Life2vec model can look at a series of life events that form a person’s history and determine what is most likely to happen next.
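Life2vec itself is a transformer trained on Danish registry data, and neither the data nor the model is reproduced here. But the core idea, treating each life event as a token and predicting the next one, can be illustrated with a toy sketch; the event names and sequences below are entirely invented.

from collections import Counter, defaultdict

# Toy "life sequences": each event is a token, loosely mimicking Life2vec's event vocabulary.
# The sequences and event names are invented purely for illustration.
sequences = [
    ["degree", "job_teacher", "diagnosis_asthma", "job_teacher", "moved_city"],
    ["job_builder", "injury_back", "unemployed", "job_builder"],
    ["degree", "job_engineer", "moved_city", "job_engineer", "promotion"],
    ["job_builder", "injury_back", "unemployed", "retrained", "job_clerk"],
]

# Count bigram transitions: how often does event B follow event A?
transitions = defaultdict(Counter)
for seq in sequences:
    for current, nxt in zip(seq, seq[1:]):
        transitions[current][nxt] += 1

def predict_next(event: str) -> str:
    """Return the most likely next event after `event` (toy next-token prediction)."""
    if event not in transitions:
        return "<unknown>"
    return transitions[event].most_common(1)[0][0]

print(predict_next("injury_back"))  # -> 'unemployed' in this toy data
print(predict_next("degree"))       # -> 'job_teacher' (ties broken by insertion order)

Life2vec replaces these raw counts with a transformer that learns representations over much longer event histories, which is what lets it generalise across millions of lives and attach prediction heads for outcomes such as early death.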

In experiments, Life2vec was trained on all but the last four years of the data, which was held back for testing. The researchers took data on a group of people aged 35 to 65, half of whom died between 2016 and 2020, and asked Life2vec to predict who lived and who died. It was 11 per cent more accurate than any existing AI model or the actuarial life tables used to price life insurance policies in the finance industry.

The model was also able to predict the results of a personality test in a subset of the population more accurately than AI models trained specifically to do the job.

Jørgensen believes that the model has consumed enough data that it is likely to be able to shed light on a wide range of health and social topics. This means it could be used to predict health issues and catch them early, or by governments to reduce inequality. But he stresses that it could also be used by companies in a harmful way.

“Clearly, our model should not be used by an insurance company, because the whole idea of insurance is that, by sharing the lack of knowledge of who is going to be the unlucky person struck by some incident, or death, or losing your backpack, we can kind of share this burden,” says Jørgensen.

But technologies like this are already out there, he says. “They’re likely being used on us already by big tech companies that have tonnes of data about us, and they’re using it to make predictions about us.”

Source: AI trained on millions of life stories can predict risk of early death | New Scientist

How To Build Your Own Custom ChatGPT Bot

There’s something new and powerful for ChatGPT users to play around with: Custom GPTs. These bespoke bots are essentially more focused, more specific versions of the main ChatGPT model, enabling you to build something for a particular purpose without using any coding or advanced knowledge of artificial intelligence.

The name GPT stands for Generative Pre-trained Transformer, as it does in ChatGPT. Generative is the ability to produce new content outside of what an AI was trained on. Pre-trained indicates that it’s already been trained on a significant amount of material, and Transformer is a type of AI architecture adept at understanding language.

You might already be familiar with using prompts to style the responses of ChatGPT: You can tell it to answer using simple language, for example, or to talk to you as if it were an alien from another world. GPTs build on this idea, enabling you to create a bot with a specific personality.

You can build a GPT using a question-and-answer routine. (Screenshot: ChatGPT)

What’s more, you can upload your own material to add to your GPT’s knowledge banks—it might be samples of your own writing, for instance, or copies of reports produced by your company. GPTs will always have access to the data you upload to them and be able to browse the web at large.

GPTs are exclusive to Plus and Enterprise users, though everyone should get access soon. OpenAI plans to open a GPT store where you can sell your AI bot creations if you think others will find them useful, too. Think of an app store of sorts but for bespoke AI bots.

“GPTs are a new way for anyone to create a tailored version of ChatGPT to be more helpful in their daily life, at specific tasks, at work, or at home—and then share that creation with others,” explains OpenAI in a blog post. “For example, GPTs can help you learn the rules to any board game, help teach your kids math, or design stickers.”

Getting started with GPT building

Assuming you have a Plus or Enterprise account, click Explore on the left of the web interface to see some example GPTs: There’s one to help you with your creative writing, for example, and one to produce a particular style of digital painting. When you’re ready to start building your own, click Create a GPT at the top.

There are two tabs to swap between: Create for building a GPT through a question-and-answer routine and Configure for more deliberate GPT production. If you’re just getting started, it’s best to stick with Create, as it’s a more user-friendly option and takes you step-by-step through the process.

Respond to the prompts of the GPT Builder bot to explain what you want the new GPT to be able to do: Explain certain concepts, give advice in specific areas, generate particular kinds of text or images, or whatever it is. You’ll be asked to give the GPT a name and choose an image for it, though you’ll get suggestions for these, too.

You’re able to test out your GPT as you build it. (Screenshot: ChatGPT)

As you answer the prompts from the builder, the GPT will begin to take form in the preview pane on the right—together with some example inputs that you might want to give to it. You might be asked about specific areas of expertise that you want the bot to have and the sorts of answers you want the bot to give in terms of their length and complexity. The building process will vary though, depending on the GPT you’re creating.

After you’ve worked through the basics of making a GPT, you can try it out and switch to the Configure tab to add more detail and depth. You’ll see that your responses so far have been used to craft a set of instructions for the GPT about its identity and how it should answer your questions. Some conversation starters will also be provided.
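All of this happens in the ChatGPT web interface with no code required, but the instruction set the builder writes plays the same role as a system prompt. For anyone who prefers scripting, a rough equivalent using OpenAI’s Python SDK might look like the sketch below; the instructions, model name and question are placeholders, not output from the GPT Builder.

from openai import OpenAI  # assumes the openai package (v1+) and an OPENAI_API_KEY env variable

client = OpenAI()

# Stand-in for the instruction set that the GPT Builder would write for you.
instructions = (
    "You are 'Recipe Scout', a cheerful cooking assistant. "
    "Answer in short bullet points and always suggest one budget-friendly substitution."
)

response = client.chat.completions.create(
    model="gpt-4",  # placeholder; use whichever model your account can access
    messages=[
        {"role": "system", "content": instructions},
        {"role": "user", "content": "What can I cook with leftover rice and eggs?"},
    ],
)
print(response.choices[0].message.content)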

You can edit these instructions if you need to and click Upload files to add to the GPT’s knowledge banks (handy if you want it to answer questions about particular documents or topics, for instance). Most common document formats, including PDFs and Word files, seem to be supported, though there’s no official list of supported file types.

GPTs can be kept to yourself or shared with others. (Screenshot: ChatGPT)

The checkboxes at the bottom of the Configure tab let you choose whether or not the GPT has access to web browsing, DALL-E image creation, and code interpretation capabilities, so make your choices accordingly. If you add any of these capabilities, they’ll be called upon as and when needed—there’s no need to specifically ask for them to be used, though you can if you want.

When your GPT is working the way you want it to, click the Save button in the top right corner. You can choose to keep it to yourself or make it available to share with others. After you click on Confirm, you’ll be able to access the new GPT from the left-hand navigation pane in the ChatGPT interface on the web.

GPTs are ideal if you find yourself often asking ChatGPT to complete tasks in the same way or cover the same topics—whether that’s market research or recipe ideas. The GPTs you create are available whenever you need them, alongside access to the main ChatGPT engine, which you can continue to tweak and customize as needed.

Source: How To Build Your Own Custom ChatGPT Bot

MS Phi-2 small language model – outperforms many LLMs but fits on your laptop

We are now releasing Phi-2, a 2.7 billion-parameter language model that demonstrates outstanding reasoning and language understanding capabilities, showcasing state-of-the-art performance among base language models with less than 13 billion parameters. On complex benchmarks Phi-2 matches or outperforms models up to 25x larger, thanks to new innovations in model scaling and training data curation.

With its compact size, Phi-2 is an ideal playground for researchers, including for exploration around mechanistic interpretability, safety improvements, or fine-tuning experimentation on a variety of tasks. We have made Phi-2 available in the Azure AI Studio model catalog to foster research and development on language models.

[..]

Phi-2 is a Transformer-based model with a next-word prediction objective, trained on 1.4T tokens from multiple passes on a mixture of Synthetic and Web datasets for NLP and coding. The training for Phi-2 took 14 days on 96 A100 GPUs. Phi-2 is a base model that has not undergone alignment through reinforcement learning from human feedback (RLHF), nor has it been instruct fine-tuned. Despite this, we observed better behavior with respect to toxicity and bias compared to existing open-source models that went through alignment (see Figure 3). This is in line with what we saw in Phi-1.5 due to our tailored data curation technique; see our previous tech report for more details on this. For more information about the Phi-2 model, please visit Azure AI | Machine Learning Studio.
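Microsoft points readers to Azure AI Studio, but the weights were also published on Hugging Face under microsoft/phi-2, so a quick local test is possible on a well-specced laptop. A minimal sketch with the transformers library; the prompt format and loading flags follow the model card at release and may differ with newer library versions.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/phi-2"  # Hugging Face model id for the released weights
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float32,  # switch to float16 on a GPU to save memory
    trust_remote_code=True,
)

# Phi-2 is a base model (no RLHF or instruction tuning), so prompt it plainly.
prompt = "Instruct: Explain in two sentences why small language models are useful.\nOutput:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=80)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))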

[Figure 3: barplot of safety scores for Phi-1.5, Phi-2 and Llama-7B across 13 ToxiGen categories; Phi-1.5 scores highest, Phi-2 second-highest and Llama-7B lowest across all categories.]
Figure 3. Safety scores computed on 13 demographics from ToxiGen. A subset of 6541 sentences are selected and scored between 0 to 1 based on scaled perplexity and sentence toxicity. A higher score indicates the model is less likely to produce toxic sentences compared to benign ones.
[…]

With only 2.7 billion parameters, Phi-2 surpasses the performance of Mistral and Llama-2 models at 7B and 13B parameters on various aggregated benchmarks. Notably, it achieves better performance compared to the 25x larger Llama-2-70B model on multi-step reasoning tasks, i.e., coding and math. Furthermore, Phi-2 matches or outperforms the recently-announced Google Gemini Nano 2, despite being smaller in size.
[…]

Model | Size | BBH | Commonsense Reasoning | Language Understanding | Math | Coding
Llama-2 | 7B | 40.0 | 62.2 | 56.7 | 16.5 | 21.0
Llama-2 | 13B | 47.8 | 65.0 | 61.9 | 34.2 | 25.4
Llama-2 | 70B | 66.5 | 69.2 | 67.6 | 64.1 | 38.3
Mistral | 7B | 57.2 | 66.4 | 63.7 | 46.4 | 39.4
Phi-2 | 2.7B | 59.2 | 68.8 | 62.0 | 61.1 | 53.7
Table 1. Averaged performance on grouped benchmarks compared to popular open-source SLMs.

Model | Size | BBH | BoolQ | MBPP | MMLU
Gemini Nano 2 | 3.2B | 42.4 | 79.3 | 27.2 | 55.8
Phi-2 | 2.7B | 59.3 | 83.3 | 59.1 | 56.7
Table 2. Comparison between Phi-2 and Gemini Nano 2 on Gemini’s reported benchmarks.

Source: Phi-2: The surprising power of small language models – Microsoft Research

AI Doomsayers: Debunking the Despair

Shortly after ChatGPT’s release, a cadre of critics rose to fame claiming AI would soon kill us. As wondrous as a computer speaking in natural language might be, it could use that intelligence to level the planet. The thinking went mainstream via letters calling for research pauses and “60 Minutes” interviews amplifying existential concerns. Leaders like Barack Obama publicly worried about AI autonomously hacking the financial system — or worse. And last week, President Biden issued an executive order imposing some restraints on AI development.

AI Experts Dismiss Doom, Defend Progress

That was enough for several prominent AI researchers who finally started pushing back hard after watching the so-called AI Doomers influence the narrative and, therefore, the field’s future. Andrew Ng, the soft-spoken co-founder of Google Brain, said last week that worries of AI destruction had led to a “massively, colossally dumb idea” of requiring licenses for AI work. Yann LeCun, a machine-learning pioneer, eviscerated research-pause letter writer Max Tegmark, accusing him of risking “catastrophe” by potentially impeding AI progress and exploiting “preposterous” concerns. A new paper earlier this month indicated large language models can’t do much beyond their training, making the doom talk seem overblown. “If ‘emergence’ merely unlocks capabilities represented in pre-training data,” said Princeton professor Arvind Narayanan, “the gravy train will run out soon.”

 


AI Doom Hype Benefits Tech Giants

Worrying about AI safety isn’t wrongheaded, but these Doomers’ path to prominence has insiders raising eyebrows. They may have come to their conclusions in good faith, but companies with plenty to gain by amplifying Doomer worries have been instrumental in elevating them. Leaders from OpenAI, Google DeepMind and Anthropic, for instance, signed a statement putting AI extinction risk on the same plane as nuclear war and pandemics. Perhaps they’re not consciously attempting to block competition, but they can’t be that upset it might be a byproduct.

AI Alarmism Spurs Restrictive Government Policies

Because all this alarmism makes politicians feel compelled to do something, leading to proposals for strict government oversight that could restrict AI development outside a few firms. Intense government involvement in AI research would help big companies, which have compliance departments built for these purposes. But it could be devastating for smaller AI startups and open-source developers who don’t have the same luxury.

 

Doomer Rhetoric: Big Tech’s Unlikely Ally

“There’s a possibility that AI doomers could be unintentionally aiding big tech firms,” Garry Tan, CEO of startup accelerator Y Combinator, told me. “By pushing for heavy regulation based on fear, they give ammunition to those attempting to create a regulatory environment that only the biggest players can afford to navigate, thus cementing their position in the market.”

Ng took it a step further. “There are definitely large tech companies that would rather not have to try to compete with open source [AI], so they’re creating fear of AI leading to human extinction,” he told the Australian Financial Review.

Doomers’ AI Fears Lack Substance

The AI Doomers’ worries, meanwhile, feel pretty thin. “I expect an actually smarter and uncaring entity will figure out strategies and technologies that can kill us quickly and reliably — and then kill us,” Eliezer Yudkowsky, co-founder of the Machine Intelligence Research Institute, told a rapt audience at TED this year. He confessed he didn’t know how or why an AI would do it. “It could kill us because it doesn’t want us making other superintelligences to compete with it,” he offered.

Bankman-Fried Scandal Should Ignite Skepticism

After Sam Bankman-Fried ran off with billions while professing to save the world through “effective altruism,” it’s high time to regard those claiming to improve society while furthering their business aims with relentless skepticism. As the Doomer narrative presses on, it threatens to rhyme with a familiar pattern.

AI Fear Tactics Threaten Open-Source Movement

Big Tech companies already have a significant lead in the AI race via cloud computing services that they lease out to preferred startups in exchange for equity. Further advantaging them might hamstring the promising open-source AI movement — a crucial area of competition — to the point of obsolescence. That’s probably why you’re hearing so much about AI destroying the world. And why it should be considered with a healthy degree of caution.

Source: AI Doomsayers: Debunking the Despair

Mind-reading AI can translate brainwaves into written text

Using only a sensor-filled helmet combined with artificial intelligence, a team of scientists has announced they can turn a person’s thoughts into written words.

In the study, participants read passages of text while wearing a cap that recorded electrical brain activity through their scalp. These electroencephalogram (EEG) recordings were then converted into text using an AI model called DeWave.

Chin-Teng Lin at the University of Technology Sydney (UTS), Australia, says the technology is non-invasive, relatively inexpensive and easily transportable.

While the system is far from perfect, with an accuracy of approximately 40 per cent, Lin says more recent data currently being peer-reviewed shows an improved accuracy exceeding 60 per cent.

In the study presented at the NeurIPS conference in New Orleans, Louisiana, participants read the sentences aloud, even though the DeWave program doesn’t use spoken words. However, in the team’s latest research, participants read the sentences silently.

Last year, a team led by Jerry Tang at the University of Texas at Austin reported a similar accuracy in converting thoughts to text, but MRI scans were used to interpret brain activity. Using EEG is more practical, as subjects don’t have to lie still inside a scanner.

[…]

Source: Mind-reading AI can translate brainwaves into written text | New Scientist

AI made from living human brain cells performs speech recognition

Balls of human brain cells linked to a computer have been used to perform a very basic form of speech recognition. The hope is that such systems will use far less energy for AI tasks than silicon chips.

“This is just proof-of-concept to show we can do the job,” says Feng Guo at Indiana University Bloomington. “We do have a long way to go.”

Brain organoids are lumps of nerve cells that form when stem cells are grown in certain conditions. “They are like mini-brains,” says Guo.

It takes two or three months to grow the organoids, which are a few millimetres wide and consist of as many as 100 million nerve cells, he says. Human brains contain around 100 billion nerve cells.

The organoids are then placed on top of a microelectrode array, which is used both to send electrical signals to the organoid and to detect when nerve cells fire in response. The team calls its system “Brainoware”.

New Scientist reported in March that Guo’s team had used this system to try to solve equations known as a Hénon map.

For the speech recognition task, the organoids had to learn to recognise the voice of one individual from a set of 240 audio clips of eight people pronouncing Japanese vowel sounds. The clips were sent to the organoids as sequences of signals arranged in spatial patterns.

The organoids’ initial responses had an accuracy of around 30 to 40 per cent, says Guo. After training sessions over two days, their accuracy rose to 70 to 80 per cent.

“We call this adaptive learning,” he says. If the organoids were exposed to a drug that stopped new connections forming between nerve cells, there was no improvement.

The training simply involved repeating the audio clips, and no form of feedback was provided to tell the organoids if they were right or wrong, says Guo. This is what is known in AI research as unsupervised learning.

There are two big challenges with conventional AI, says Guo. One is its high energy consumption. The other is the inherent limitations of silicon chips, such as their separation of information and processing.

Guo’s team is one of several groups exploring whether biocomputing using living nerve cells can help overcome these challenges. For instance, a company called Cortical Labs in Australia has been teaching brain cells how to play Pong, New Scientist revealed in 2021.

Titouan Parcollet at the University of Cambridge, who works on conventional speech recognition, doesn’t rule out a role for biocomputing in the long run.

“However, it might also be a mistake to think that we need something like the brain to achieve what deep learning is currently doing,” says Parcollet. “Current deep-learning models are actually much better than any brain on specific and targeted tasks.”

Guo and his team’s task is so simplified that it only identifies who is speaking, not what is being said, he says. “The results aren’t really promising from the speech recognition perspective.”

Even if the performance of Brainoware can be improved, another major issue with it is that the organoids can only be maintained for one or two months, says Guo. His team is working on extending this.

“If we want to harness the computation power of organoids for AI computing, we really need to address those limitations,” he says.

Source: AI made from living human brain cells performs speech recognition | New Scientist

Yes, this article bangs on about limitations, but it’s pretty bizarre science, this: using a brain to do AI.

AI Alliance Launches as an International Community of Leading Technology Developers, Researchers, and Adopters Collaborating Together to Advance Open, Safe, Responsible AI

IBM and Meta Launch the AI Alliance in collaboration with over 50 Founding Members and Collaborators globally including AMD, Anyscale, CERN, Cerebras, Cleveland Clinic, Cornell University, Dartmouth, Dell Technologies, EPFL, ETH, Hugging Face, Imperial College London, Intel, INSAIT, Linux Foundation, MLCommons, MOC Alliance operated by Boston University and Harvard University, NASA, NSF, Oracle, Partnership on AI, Red Hat, Roadzen, ServiceNow, Sony Group, Stability AI, University of California Berkeley, University of Illinois, University of Notre Dame, The University of Tokyo, Yale University and others

[…]

While there are many individual companies, start-ups, researchers, governments, and others who are committed to open science and open technologies and want to participate in the new wave of AI innovation, more collaboration and information sharing will help the community innovate faster and more inclusively, and identify specific risks and mitigate those risks before putting a product into the world.

[..]

We are:

  • The creators of the tooling driving AI benchmarking, trust and validation metrics and best practices, and application creation such as MLPerf, Hugging Face, LangChain, LlamaIndex, and open-source AI toolkits for explainability, privacy, adversarial robustness, and fairness evaluation.
  • The universities and science agencies that educate and support generation after generation of AI scientists and engineers and push the frontiers of AI research through open science.
  • The builders of the hardware and infrastructure that supports AI training and applications – from the needed GPUs to custom AI accelerators and cloud platforms.
  • The champions of frameworks that drive platform software including PyTorch, Transformers, Diffusers, Kubernetes, Ray, Hugging Face Text Generation Inference and Parameter-Efficient Fine-Tuning.
  • The creators of some of today’s most used open models including Llama2, Stable Diffusion, StarCoder, Bloom, and many others.

[…]

To learn more about the Alliance, visit here: https://thealliance.ai

[…]

Source: AI Alliance Launches as an International Community of Leading Technology Developers, Researchers, and Adopters Collaborating Together to Advance Open, Safe, Responsible AI

We will see – I don’t see any project pages on this quite yet. But this looks like a reasonable idea.

AI can tell which chateau Bordeaux wines come from with 100% accuracy

Alexandre Pouget at the University of Geneva, Switzerland, and his colleagues used machine learning to analyse the chemical composition of 80 red wines from 12 years between 1990 and 2007. All the wines came from seven wine estates in the Bordeaux region of France.

“We were interested in finding out whether there is a chemical signature that is specific to each of those chateaux that’s independent of vintage,” says Pouget, meaning one estate’s wines would have a very similar chemical profile, and therefore taste, year after year.

To do this, Pouget and his colleagues used a machine to vaporise each wine and separate it into its chemical components. This technique gave them a readout for each wine, called a chromatogram, with about 30,000 points representing different chemical compounds.

The researchers used 73 of the chromatograms to train a machine learning algorithm, along with data on the chateaux of origin and the year. Then they tested the algorithm on the seven chromatograms that had been held back.

They repeated the process 50 times, changing the wines used each time. The algorithm correctly guessed the chateau of origin 100 per cent of the time. “Not that many people in the world will be able to do this,” says Pouget. It was also about 50 per cent accurate at guessing the year when the wine was made.
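The paper’s actual pipeline isn’t reproduced here, but the evaluation protocol described above (train on 73 chromatograms, test on the 7 held out, repeat 50 times with fresh splits) is straightforward to sketch with scikit-learn. The data below is random stand-in data, so the printed accuracy should hover around chance rather than 100 per cent.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import ShuffleSplit
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Stand-in data: 80 "chromatograms" of 3,000 points each (the real ones have ~30,000)
# from 7 estates. The real measurements are not public here, so these are random numbers.
X = rng.normal(size=(80, 3_000))
estate = np.arange(80) % 7  # which chateau each wine came from

# 50 repeats of a 73-train / 7-test split, mirroring the protocol in the article.
splitter = ShuffleSplit(n_splits=50, test_size=7, random_state=0)
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

scores = []
for train_idx, test_idx in splitter.split(X):
    clf.fit(X[train_idx], estate[train_idx])
    scores.append(clf.score(X[test_idx], estate[test_idx]))

print(f"mean accuracy over 50 splits: {np.mean(scores):.2f}")  # ~0.14 (chance) on random data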

The algorithm could even guess the estate when it was trained using just 5 per cent of each chromatogram, using portions where there are no notable peaks in chemicals visible to the naked eye, says Pouget.

This shows that a wine’s unique taste and feel in the mouth doesn’t depend on a handful of key molecules, but rather on the overall concentration of many, many molecules, says Pouget.

By plotting the chromatogram data, the algorithm could also separate the wines into groups that were more like each other. It grouped those on the right bank of the river Garonne – called Pomerol and St-Emilion wines – separately from those from left-bank estates, known as Medoc wines.

The work is further evidence that local geography, climate, microbes and wine-making practices, together known as the terroir, do give a unique flavour to a wine. Which precise chemicals are behind each wine wasn’t looked at in this study, however.

“It really is coming close to proof that the place of growing and making really does have a chemical signal for individual wines or chateaux,” says Barry Smith at the University of London’s School of Advanced Study. “The chemical compounds and their similarities and differences reflect that elusive concept of terroir.”

 

Journal reference:

Communications Chemistry DOI: 10.1038/s42004-023-01051-9

Source: AI can tell which chateau Bordeaux wines come from with 100% accuracy | New Scientist

Brazilian city enacts an ordinance that was written by ChatGPT – might be the first law written by AI

City lawmakers in Brazil have enacted what appears to be the nation’s first legislation written entirely by artificial intelligence — even if they didn’t know it at the time.

The experimental ordinance was passed in October in the southern city of Porto Alegre and city councilman Ramiro Rosário revealed this week that it was written by a chatbot, sparking objections and raising questions about the role of artificial intelligence in public policy.

Rosário told The Associated Press that he asked OpenAI’s chatbot ChatGPT to craft a proposal to prevent the city from charging taxpayers to replace water consumption meters if they are stolen. He then presented it to his 35 peers on the council without making a single change or even letting them know about its unprecedented origin.

“If I had revealed it before, the proposal certainly wouldn’t even have been taken to a vote,” Rosário told the AP by phone on Thursday. The 36-member council approved it unanimously and the ordinance went into effect on Nov. 23.

“It would be unfair to the population to run the risk of the project not being approved simply because it was written by artificial intelligence,” he added.

[…]

“We want work that is ChatGPT generated to be watermarked,” he said, adding that the use of artificial intelligence to help draft new laws is inevitable. “I’m in favor of people using ChatGPT to write bills as long as it’s clear.”

There was no such transparency for Rosário’s proposal in Porto Alegre. Sossmeier said Rosário did not inform fellow council members that ChatGPT had written the proposal.

Keeping the proposal’s origin secret was intentional. Rosário told the AP his objective was not just to resolve a local issue, but also to spark a debate. He said he entered a 49-word prompt into ChatGPT and it returned the full draft proposal within seconds, including justifications.

[…]

And the council president, who initially decried the method, already appears to have been swayed.

“I changed my mind,” Sossmeier said. “I started to read more in depth and saw that, unfortunately or fortunately, this is going to be a trend.”

Source: Brazilian city enacts an ordinance that was written by ChatGPT | AP News

One AI image needs as much power as a smartphone charge

In a paper released on arXiv last week, a team of researchers from Hugging Face and Carnegie Mellon University calculated the amount of power AI systems use when asked to perform different tasks.

After asking AIs to perform 1,000 inferences for each task, the researchers found text-based AI tasks are more energy-efficient than jobs involving images.

Text generation consumed 0.042kWh while image generation required 1.35kWh. The boffins assert that charging a smartphone requires 0.012kWh – making image generation a very power-hungry application.

“The least efficient image generation model uses as much energy as 950 smartphone charges (11.49kWh), or nearly one charge per image generation,” the authors wrote, noting the “large variation between image generation models, depending on the size of image that they generate.”
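The headline comparison is easy to check from the quoted figures (all inputs below are the numbers reported above, given per 1,000 inferences):

# Reported figures: kWh per 1,000 inferences, and kWh per smartphone charge.
TEXT_GEN_KWH_PER_1000 = 0.042
IMAGE_GEN_KWH_PER_1000 = 1.35
WORST_IMAGE_MODEL_KWH_PER_1000 = 11.49
PHONE_CHARGE_KWH = 0.012

per_image = IMAGE_GEN_KWH_PER_1000 / 1000               # ~0.00135 kWh per image
per_text = TEXT_GEN_KWH_PER_1000 / 1000                 # ~0.000042 kWh per text response
worst_per_image = WORST_IMAGE_MODEL_KWH_PER_1000 / 1000

print(f"images per phone charge (reported figure): {PHONE_CHARGE_KWH / per_image:.1f}")       # ~8.9
print(f"phone charges per image (worst model):     {worst_per_image / PHONE_CHARGE_KWH:.2f}")  # ~0.96
print(f"image vs text energy per inference:        {per_image / per_text:.0f}x")               # ~32x

So the “nearly one charge per image” claim applies to the least efficient model measured; the headline image-generation figure works out to roughly one phone charge per nine images, and text generation is around 30 times cheaper per inference still.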

The authors also measured the carbon dioxide emissions associated with different AI workloads; image creation topped that chart too.


You can read the full paper here [PDF].

Source: One AI image needs as much power as a smartphone charge • The Register

A(I) deal at any cost: Will the EU buckle to Big Tech?

Would you trust Elon Musk with your mortgage? Or Big Tech with your benefits?

Us neither.

That’s what’s at stake as the EU’s Artificial Intelligence Act reaches the final stage of negotiations. For all its big talk, it seems like the EU is buckling to Big Tech.

EU lawmakers have been tasked with developing the world’s first comprehensive law to regulate AI products. Now that AI systems are already being used in public life, lawmakers are rushing to catch up.

[…]

The principle of precaution urges us to exercise care and responsibility in the face of potential risks. It is crucial not only to foster innovation but also to prevent the unchecked expansion of AI from jeopardising justice and fundamental rights.

At the Left in the European Parliament, we called for this principle to be applied to the AI Act. Unfortunately, other political groups disagreed, prioritising the interests of Big Tech over those of the people. They settled on a three-tiered approach to risk whereby products are categorised into those that do not pose a significant risk, those that are high risk and those that are banned.

However, this approach contains a major loophole that risks undermining the entire legislation.

Like asking a tobacco company whether smoking is risky

When it was first proposed, the Commission outlined a list of ‘high-risk uses’ of AI, including AI systems used to select students, assess consumers’ creditworthiness, evaluate job-seekers, and determine who can access welfare benefits.

Using AI in these assessments has significant real-life consequences. It can mean the difference between being accepted or rejected to university, being able to take out a loan or even being able to access welfare to pay bills, rent or put food on the table.

Under the three-tiered approach, AI developers are allowed to decide themselves whether their product is high-risk. The self-assessment loophole means the developers themselves get to determine whether their systems are high risk, akin to a tobacco company deciding cigarettes are safe for our health, or a fossil fuel company saying its fumes don’t harm the environment.

[…]

Experience shows us that when corporations have this kind of freedom, they prioritise their profits over the interests of people and the planet. If the development of AI is to be accountable and transparent, negotiators must eliminate provisions on self-assessment.

AI gives us the opportunity to change our lives for the better. But as long as we let big corporations make the rules, we will continue to replicate inequalities that are already ravaging our societies.

Source: A(I) deal at any cost: Will the EU buckle to Big Tech? – EURACTIV.com

OK, so this seems to be a little breathless – surely we can put in a mechanism for EU checking of risk level when notified of a potential breach, including harsh penalties for misclassifying an AI?

However, the discussion around the EU AI Act – which had the potential to be one of the first and best pieces of AI regulation on the planet – has descended into farce since the arrival of ChatGPT and the strange idea that the original act had no provisions for General Purpose / Foundational AI models (it did – they were high-risk models). The silly discussions this has provoked have only served to delay the AI Act coming into force by over a year – something that big businesses are very, very happy to see.

A new way to predict ship-killing rogue waves – and, more importantly, to see how an AI finds its results

[…]

In a paper in Proceedings of the National Academy of Sciences, a group of researchers led by Dion Häfner, a computer scientist at the University of Copenhagen, describe a clever way to make AI more understandable. They have managed to build a neural network, use it to solve a tricky problem, and then capture its insights in a relatively simple five-part equation that human scientists can use and understand.

The researchers were investigating “rogue waves”, those that are much bigger than expected given the sea conditions in which they form. Maritime lore is full of walls of water suddenly swallowing ships. But it took until 1995 for scientists to measure such a wave—a 26-metre monster, amid other waves averaging 12 metres—off the coast of Norway, proving these tales to be tall only in the literal sense.

[…]

To produce something a human could follow, the researchers restricted their neural network to around a dozen inputs, each based on ocean-wave maths that scientists had already worked out. Knowing the physical meaning of each input meant the researchers could trace their paths through the network, helping them work out what the computer was up to.

The researchers trained 24 neural networks, each combining the inputs in different ways. They then chose the one that was the most consistent at making accurate predictions in a variety of circumstances, which turned out to rely on only five of the dozen inputs.

To generate a human-comprehensible equation, the researchers used a method inspired by natural selection in biology. They told a separate algorithm to come up with a slew of different equations using those five variables, with the aim of matching the neural network’s output as closely as possible. The best equations were mixed and combined, and the process was repeated. The result, eventually, was an equation that was simple and almost as accurate as the neural network. Both predicted rogue waves better than existing models.
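The researchers’ own code isn’t shown here, but the “evolve equations to match the network” step is essentially symbolic regression. A rough sketch of that idea with the gplearn library, using a made-up target function in place of the trained neural network’s predictions:

import numpy as np
from gplearn.genetic import SymbolicRegressor

rng = np.random.default_rng(0)

# Five stand-in inputs, like the five physically meaningful wave variables the network kept.
X = rng.uniform(0.1, 2.0, size=(500, 5))

# Stand-in for the neural network's output; in the paper this would be the trained model.
y = 0.8 * X[:, 0] * X[:, 1] + 0.3 / X[:, 4]

# Evolve short formulas that reproduce y from X; parsimony pressure keeps equations readable.
est = SymbolicRegressor(
    population_size=1000,
    generations=15,
    function_set=("add", "sub", "mul", "div"),
    parsimony_coefficient=0.01,
    random_state=0,
)
est.fit(X, y)
print(est._program)  # best evolved expression, e.g. something like add(mul(X0, X1), div(0.3, X4))

The parsimony penalty is what keeps the evolved expressions short enough for a human to read, which is the whole point of the exercise.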

The first part of the equation rediscovered a bit of existing theory: it is an approximation of a well-known equation in wave dynamics. Other parts included some terms that the researchers suspected might be involved in rogue-wave formation but are not in standard models. There were some puzzlers, too: the final bit of the equation includes a term that is inversely proportional to how spread out the energy of the waves is. Current human theories include a second variable that the machine did not replicate. One explanation is that the network was not trained on a wide enough selection of examples. Another is that the machine is right, and the second variable is not actually necessary.

Better methods for predicting rogue waves are certainly useful: some can sink even the biggest ships. But the real prize is the visibility that Dr Häfner’s approach offers into what the neural network was doing. That could give scientists ideas for tweaking their own theories—and should make it easier to know whether to trust the computer’s predictions.

Source: A new way to predict ship-killing rogue waves

The AI startup behind Stable Diffusion is now testing generative video

Stable Diffusion’s generative art can now be animated, developer Stability AI announced. The company has released a new product called Stable Video Diffusion into a research preview, allowing users to create video from a single image. “This state-of-the-art generative AI video model represents a significant step in our journey toward creating models for everyone of every type,” the company wrote.

The new tool has been released in the form of two image-to-video models, capable of generating 14 and 25 frames respectively, at frame rates between 3 and 30 frames per second and a resolution of 576 × 1024.
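Access to the research preview is via a waitlist, but the weights were published on Hugging Face and an image-to-video pipeline was added to the diffusers library shortly afterwards. A minimal sketch, assuming a recent diffusers release, a CUDA GPU with enough memory and acceptance of the model licence; the input image path is a placeholder.

import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import export_to_video, load_image

# The 25-frame "XT" checkpoint; a 14-frame variant is published alongside it.
pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

image = load_image("my_still_image.png")  # placeholder path to your source image
image = image.resize((1024, 576))         # the models expect 576 x 1024 frames

frames = pipe(image, decode_chunk_size=8, generator=torch.manual_seed(42)).frames[0]
export_to_video(frames, "generated.mp4", fps=7)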

[…]

Stable Video Diffusion is available only for research purposes at this point, not real-world or commercial applications. Potential users can sign up to get on a waitlist for access to an “upcoming web experience featuring a text-to-video interface,” Stability AI wrote. The tool will showcase potential applications in sectors including advertising, education, entertainment and more.

[…]

it has some limitations, the company wrote: it generates relatively short video (less than 4 seconds), lacks perfect photorealism, can’t do camera motion except slow pans, has no text control, can’t generate legible text and may not generate people and faces properly.

The tool was trained on a dataset of millions of videos and then fine-tuned on a smaller set, with Stability AI only saying that it used video that was publicly available for research purposes.

[…]

Source: The AI startup behind Stable Diffusion is now testing generative video

Sarah Silverman’s retarded AI Case Isn’t Going Very Well Either

Just a few weeks ago Judge William Orrick massively trimmed back the first big lawsuit that was filed against generative AI companies for training their works on copyright-covered materials. Most of the case was dismissed, and what bits remained may not last much longer. And now, it appears that Judge Vince Chhabria (who has been very good on past copyright cases) seems poised to do the same.

This is the high profile case brought by Sarah Silverman and some other authors, because some of the training materials used by OpenAI and Meta included their works. As we noted at the time, that doesn’t make it copyright infringing, and it appears the judge recognizes the large hill Silverman and the other authors have to climb here:

U.S. District Judge Vince Chhabria said at a hearing that he would grant Meta’s motion to dismiss the authors’ allegations that text generated by Llama infringes their copyrights. Chhabria also indicated that he would give the authors permission to amend most of their claims.

Meta has not yet challenged the authors’ central claim in the case that it violated their rights by using their books as part of the data used to train Llama.

“I understand your core theory,” Chhabria told attorneys for the authors. “Your remaining theories of liability I don’t understand even a little bit.”

Chhabria (who you may recall from the time he quashed the ridiculous copyright subpoena that tried to abuse copyright law to expose whoever exposed a billionaire’s mistress) seems rightly skeptical that just because ChatGPT can give you a summary of Silverman’s book that it’s somehow infringing:

“When I make a query of Llama, I’m not asking for a copy of Sarah Silverman’s book – I’m not even asking for an excerpt,” Chhabria said.

The authors also argued that Llama itself is an infringing work. Chhabria said the theory “would have to mean that if you put the Llama language model next to Sarah Silverman’s book, you would say they’re similar.”

“That makes my head explode when I try to understand that,” Chhabria said.

It’s good to see careful judges like Chhabria and Orrick getting into the details here. Of course, with so many of these lawsuits being filed, I’m still worried that some judge is going to make a mess of things, but we’ll see what happens.

Source: Sarah Silverman’s AI Case Isn’t Going Very Well Either | Techdirt

“Make It Real” AI prototype wows UI devs by turning drawings into working software

A collaborative whiteboard app maker called “tldraw” made waves online by releasing a prototype of a feature called “Make it Real” that lets users draw an image of software and bring it to life using AI. The feature uses OpenAI’s GPT-4V API to visually interpret a vector drawing into functioning Tailwind CSS and JavaScript web code that can replicate user interfaces or even create simple implementations of games like Breakout.

“I think I need to go lie down,” posted designer Kevin Cannon at the start of a viral X thread that featured the creation of functioning sliders that rotate objects on screen, an interface for changing object colors, and a working game of tic-tac-toe. Soon, others followed with demonstrations of drawing a clone of Breakout, creating a working dial clock that ticks, drawing the snake game, making a Pong game, interpreting a visual state chart, and much more.

Users can experiment with a live demo of Make It Real online. However, running it requires providing an API key from OpenAI, which is a security risk. If others intercept your API key, they could use it to rack up a very large bill in your name (OpenAI charges by the amount of data moving into and out of its API). Those technically inclined can run the code locally, but it will still require OpenAI API access.

Tldraw, developed by Steve Ruiz in London, is an open source collaborative whiteboard tool. It offers a basic infinite canvas for drawing, text, and media without requiring a login. Launched in 2021, the project received $2.7 million in seed funding and is supported by GitHub sponsors. When the GPT-4V API launched recently, Ruiz integrated a design prototype called “draw-a-ui,” created by Sawyer Hood, to bring the AI-powered functionality into tldraw.

GPT-4V is a version of OpenAI’s large language model that can interpret images supplied as part of a prompt. As AI expert Simon Willison explains on X, Make it Real works by “generating a base64 encoded PNG of the drawn components, then passing that to GPT-4 Vision” with a system prompt and instructions to turn the image into a file using Tailwind. In fact, here is the full system prompt that tells GPT-4V how to handle the inputs and turn them into functioning code:

const systemPrompt = `You are an expert web developer who specializes in tailwind css.
A user will provide you with a low-fidelity wireframe of an application.
You will return a single html file that uses HTML, tailwind css, and JavaScript to create a high fidelity website.
Include any extra CSS and JavaScript in the html file.
If you have any images, load them from Unsplash or use solid colored rectangles.
The user will provide you with notes in blue or red text, arrows, or drawings.
The user may also include images of other websites as style references. Transfer the styles as best as you can, matching fonts / colors / layouts.
They may also provide you with the html of a previous design that they want you to iterate from.
Carry out any changes they request from you.
In the wireframe, the previous design's html will appear as a white rectangle.
Use creative license to make the application more fleshed out.
Use JavaScript modules and unpkg to import any necessary dependencies.`;
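
For anyone curious how that prompt gets used in practice, here is a minimal sketch (not tldraw’s actual code) of sending a base64-encoded PNG of the drawing, together with the system prompt above, to OpenAI’s chat completions endpoint. The model name (“gpt-4-vision-preview”) and payload shape follow OpenAI’s public vision API as it stood at launch; the function name and user message text are illustrative placeholders.

// Minimal sketch: turn a wireframe PNG (base64) into a single HTML file by
// calling OpenAI's chat completions API with the system prompt defined above.
// Assumption: "gpt-4-vision-preview" and this payload shape match the public
// GPT-4V API at launch; tldraw's real implementation may differ.
async function makeItReal(pngBase64, apiKey) {
  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model: "gpt-4-vision-preview",
      max_tokens: 4096,
      messages: [
        { role: "system", content: systemPrompt },
        {
          role: "user",
          content: [
            { type: "text", text: "Turn this wireframe into a working prototype." },
            { type: "image_url", image_url: { url: `data:image/png;base64,${pngBase64}` } },
          ],
        },
      ],
    }),
  });
  const data = await response.json();
  // The reply should be a self-contained HTML document using Tailwind and JavaScript.
  return data.choices[0].message.content;
}

The returned HTML can then be rendered directly, for example in an iframe, to produce the kind of interactive result the demos show.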

As more people experiment with GPT-4V and combine it with other frameworks, we’ll likely see more novel applications of OpenAI’s vision-parsing technology emerging in the weeks ahead. Also on Wednesday, a developer used the GPT-4V API to create a live, real-time narration of a video feed by a fake AI-generated David Attenborough voice, which we have covered separately.

For now, it feels like we’ve been given a preview of a possible future mode of software development—or interface design, at the very least—where creating a working prototype is as simple as making a visual mock-up and having an AI model do the rest.

Source: “Make It Real” AI prototype wows devs by turning drawings into working software | Ars Technica

AI weather forecaster complements traditional models very well

Global medium-range weather forecasting is critical to decision-making across many social and economic domains. Traditional numerical weather prediction uses increased compute resources to improve forecast accuracy, but does not directly use historical weather data to improve the underlying model. Here, we introduce “GraphCast,” a machine learning-based method trained directly from reanalysis data. It predicts hundreds of weather variables, over 10 days at 0.25° resolution globally, in under one minute. GraphCast significantly outperforms the most accurate operational deterministic systems on 90% of 1380 verification targets, and its forecasts support better severe event prediction, including tropical cyclone tracking, atmospheric rivers, and extreme temperatures. GraphCast is a key advance in accurate and efficient weather forecasting, and helps realize the promise of machine learning for modeling complex dynamical systems.

[…]

The dominant approach for weather forecasting today is “numerical weather prediction” (NWP), which involves solving the governing equations of weather using supercomputers.

[…]

NWP methods are improved by highly trained experts innovating better models, algorithms, and approximations, which can be a time-consuming and costly process.
Machine learning-based weather prediction (MLWP) offers an alternative to traditional NWP, where forecast models can be trained from historical data, including observations and analysis data.
[…]
In medium-range weather forecasting, i.e., predicting atmospheric variables up to 10 days ahead, NWP-based systems like the IFS are still most accurate. The top deterministic operational system in the world is ECMWF’s High RESolution forecast (HRES), a configuration of IFS which produces global 10-day forecasts at 0.1° latitude/longitude resolution, in around an hour
[…]
Here we introduce an MLWP approach for global medium-range weather forecasting called “GraphCast,” which produces an accurate 10-day forecast in under a minute on a single Google Cloud TPU v4 device, and supports applications including predicting tropical cyclone tracks, atmospheric rivers, and extreme temperatures.
[…]
A single weather state is represented by a 0.25° latitude/longitude grid
[…]
GraphCast is implemented as a neural network architecture, based on GNNs in an “encode-process-decode” configuration (13, 17), with a total of 36.7 million parameters (code, weights and demos can be found at https://github.com/deepmind/graphcast).
[…]
During model development, we used 39 years (1979–2017) of historical data from ECMWF’s ERA5 (21) reanalysis archive.
[…]
Of the 227 variable and level combinations predicted by GraphCast at each grid point, we evaluated its skill versus HRES on 69 of them, corresponding to the 13 levels of WeatherBench (8) and variables (23) from the ECMWF Scorecard (24)
[…]
We find that GraphCast has greater weather forecasting skill than HRES when evaluated on 10-day forecasts at a horizontal resolution of 0.25° for latitude/longitude and at 13 vertical levels.
[NOTE HRES has a resolution of 0.1°]
[…]
We also compared GraphCast’s performance to the top competing ML-based weather model, Pangu-Weather (16), and found GraphCast outperformed it on 99.2% of the 252 targets they presented (see supplementary materials section 6 for details).
[…]
GraphCast’s forecast skill and efficiency compared to HRES shows MLWP methods are now competitive with traditional weather forecasting methods
[…]
With 36.7 million parameters, GraphCast is a relatively small model by modern ML standards, chosen to keep the memory footprint tractable. And while HRES is released on 0.1° resolution, 137 levels, and up to 1 hour time steps, GraphCast operated on 0.25° latitude-longitude resolution, 37 vertical levels, and 6 hour time steps, because of the ERA5 training data’s native 0.25° resolution, and engineering challenges in fitting higher resolution data on hardware.
[…]
Our approach should not be regarded as a replacement for traditional weather forecasting methods, which have been developed for decades, rigorously tested in many real-world contexts, and offer many features we have not yet explored. Rather our work should be interpreted as evidence that MLWP is able to meet the challenges of real-world forecasting problems and has potential to complement and improve the current best methods.
[…]

Source: Learning skillful medium-range global weather forecasting | Science
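
To make the rollout mechanics behind those numbers concrete: GraphCast is autoregressive, so one learned step advances the global weather state by six hours and each prediction is fed back in as the next input, which means a 10-day forecast is a chain of 40 model calls over a 721 × 1440 grid of 0.25° cells. Here is a minimal sketch of that pattern; the function and variable names are illustrative placeholders rather than the real GraphCast code (which, among other differences, conditions each step on the two most recent states).

// Illustrative sketch of the autoregressive rollout used by GraphCast-style
// models: one learned step maps the current weather state to the state six
// hours later, and each prediction is fed back in as the next input.
// The names here are placeholders, not the actual GraphCast API.
function rollOutForecast(stepSixHours, initialState, days = 10) {
  const steps = days * (24 / 6); // 40 six-hour steps for a 10-day forecast
  const trajectory = [];
  // e.g. a 721 x 1440 grid (0.25°) with 227 variable/level values per point
  let state = initialState;
  for (let i = 0; i < steps; i++) {
    state = stepSixHours(state); // the prediction becomes the next input
    trajectory.push(state);
  }
  return trajectory;
}

The efficiency claim then comes down to those 40 steps finishing in under a minute on a single TPU v4, versus around an hour of supercomputer time for an HRES run.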

Brave rivals Bing and ChatGPT with new privacy-focused AI chatbot

Brave, the privacy-focused browser that automatically blocks unwanted ads and trackers, is rolling out Leo — a native AI assistant that the company claims provides “unparalleled privacy” compared to some other AI chatbot services. Following several months of testing, Leo is now available for free to all Brave desktop users running version 1.60 of the web browser. Leo is rolling out “in phases over the next few days” and will be available on Android and iOS “in the coming months.”

The core features of Leo aren’t too dissimilar from those of other AI chatbots like Bing Chat and Google Bard: it can translate, answer questions, summarize webpages, and generate new content. Brave says the benefits of Leo over those offerings are that it aligns with the company’s focus on privacy — conversations with the chatbot are not recorded or used to train AI models, and no login information is required to use it. As with other AI chatbots, however, Brave cautions that Leo’s outputs should be “treated with care for potential inaccuracies or errors.”

[…]

Source: Brave rivals Bing and ChatGPT with new privacy-focused AI chatbot – The Verge

EU Parliament Fails To Understand That The Right To Read Is The Right To Train. Understands the copyright lobby has money though.

Walled Culture recently wrote about an unrealistic French legislative proposal that would require the listing of all the authors of material used for training generative AI systems. Unfortunately, the European Parliament has inserted a similarly impossible idea in its text for the upcoming Artificial Intelligence (AI) Act. The DisCo blog explains that MEPs added new copyright requirements to the Commission’s original proposal:

These requirements would oblige AI developers to disclose a summary of all copyrighted material used to train their AI systems. Burdensome and impractical are the right words to describe the proposed rules.

In some cases it would basically come down to providing a summary of half the internet.

Leaving aside the impossibly large volume of material that might need to be summarized, another issue is that it is by no means clear when something is under copyright, making compliance even more infeasible. In any case, as the DisCo post rightly points out, the EU Copyright Directive already provides a legal framework that addresses the issue of training AI systems:

The existing European copyright rules are very simple: developers can copy and analyse vast quantities of data from the internet, as long as the data is publicly available and rights holders do not object to this kind of use. So, rights holders already have the power to decide whether AI developers can use their content or not.

This is a classic case of the copyright industry always wanting more, no matter how much it gets. When the EU Copyright Directive was under discussion, many argued that an EU-wide copyright exception for text and data mining (TDM) and AI in the form of machine learning would be hugely beneficial for the economy and society. But as usual, the copyright world insisted on its right to double dip, and to be paid again if copyright materials were used for mining or machine learning, even if a license had already been obtained to access the material.

As I wrote in a column five years ago, that’s ridiculous, because the right to read is the right to mine. Updated for our AI world, that can be rephrased as “the right to read is the right to train”. By failing to recognize that, the European Parliament has sabotaged its own AI Act. Its amendment to the text will make it far harder for AI companies to thrive in the EU, which will inevitably encourage them to set up shop elsewhere.

If the final text of the AI Act still has this requirement to provide a summary of all copyright material that is used for training, I predict that the EU will become a backwater for AI. That would be a huge loss for the region, because generative AI is widely expected to be one of the most dynamic and important new tech sectors. If that happens, backward-looking copyright dogma will once again have throttled a promising digital future, just as it has done so often in the recent past.

Source: EU Parliament Fails To Understand That The Right To Read Is The Right To Train | Techdirt

Judge dismisses most of the artists’ AI copyright lawsuit against Midjourney, Stability AI

A judge in California federal court on Monday trimmed a lawsuit by visual artists who accuse Stability AI, Midjourney and DeviantArt of misusing their copyrighted work in connection with the companies’ generative artificial intelligence systems.

U.S. District Judge William Orrick dismissed some claims from the proposed class action brought by Sarah Andersen, Kelly McKernan and Karla Ortiz, including all of the allegations against Midjourney and DeviantArt. The judge said the artists could file an amended complaint against the two companies, whose systems utilize Stability’s Stable Diffusion text-to-image technology.

Orrick also dismissed McKernan and Ortiz’s copyright infringement claims entirely. The judge allowed Andersen to continue pursuing her key claim that Stability’s alleged use of her work to train Stable Diffusion infringed her copyrights.

The same allegation is at the heart of other lawsuits brought by artists, authors and other copyright owners against generative AI companies.

“Even Stability recognizes that determination of the truth of these allegations – whether copying in violation of the Copyright Act occurred in the context of training Stable Diffusion or occurs when Stable Diffusion is run – cannot be resolved at this juncture,” Orrick said.

The artists’ attorneys Joseph Saveri and Matthew Butterick said in a statement that their “core claim” survived, and that they were confident that they could address the court’s concerns about their other claims in an amended complaint to be filed next month.

A spokesperson for Stability declined to comment on the decision. Representatives for Midjourney and DeviantArt did not immediately respond to requests for comment.

The artists said in their January complaint that Stability used billions of images “scraped” from the internet, including theirs, without permission to teach Stable Diffusion to create its own images.

Orrick agreed with all three companies that the images the systems actually created likely did not infringe the artists’ copyrights. He allowed the claims to be amended but said he was “not convinced” that allegations based on the systems’ output could survive without showing that the images were substantially similar to the artists’ work.

The judge also dismissed other claims from the artists, including that the companies violated their publicity rights and competed with them unfairly, with permission to refile.

Orrick dismissed McKernan and Ortiz’s copyright claims because they had not registered their images with the U.S. Copyright Office, a requirement for bringing a copyright lawsuit.

The case is Andersen v. Stability AI Ltd, U.S. District Court for the Northern District of California, No. 3:23-cv-00201.

For the artists: Joseph Saveri of Joseph Saveri Law Firm; and Matthew Butterick

For Stability: Paul Schoenhard of Fried Frank Harris Shriver & Jacobson

For Midjourney: Angela Dunning of Cleary Gottlieb Steen & Hamilton

For DeviantArt: Andy Gass of Latham & Watkins

Source: Judge pares down artists’ AI copyright lawsuit against Midjourney, Stability AI | Reuters

These suits are absolute nonsense. It’s like suing a person for having seen some art and made something a bit like it. It’s not very surprising that this has been wiped off the table.

AI Risks – doomsayers, warriors, reformers

There is no shortage of researchers and industry titans willing to warn us about the potential destructive power of artificial intelligence. Reading the headlines, one would hope that the rapid gains in AI technology have also brought forth a unifying realization of the risks—and the steps we need to take to mitigate them.

The reality, unfortunately, is quite different. Beneath almost all of the testimony, the manifestoes, the blog posts, and the public declarations issued about AI are battles among deeply divided factions. Some are concerned about far-future risks that sound like science fiction. Some are genuinely alarmed by the practical problems that chatbots and deepfake video generators are creating right now. Some are motivated by potential business revenue, others by national security concerns.

The result is a cacophony of coded language, contradictory views, and provocative policy demands that are undermining our ability to grapple with a technology destined to drive the future of politics, our economy, and even our daily lives.

These factions are in dialogue not only with the public but also with one another. Sometimes, they trade letters, opinion essays, or social threads outlining their positions and attacking others’ in public view. More often, they tout their viewpoints without acknowledging alternatives, leaving the impression that their enlightened perspective is the inevitable lens through which to view AI. But if lawmakers and the public fail to recognize the subtext of their arguments, they risk missing the real consequences of our possible regulatory and cultural paths forward.

To understand the fight and the impact it may have on our shared future, look past the immediate claims and actions of the players to the greater implications of their points of view. When you do, you’ll realize this isn’t really a debate only about AI. It’s also a contest about control and power, about how resources should be distributed and who should be held accountable.

Beneath this roiling discord is a true fight over the future of society. Should we focus on avoiding the dystopia of mass unemployment, a world where China is the dominant superpower or a society where the worst prejudices of humanity are embodied in opaque algorithms that control our lives? Should we listen to wealthy futurists who discount the importance of climate change because they’re already thinking ahead to colonies on Mars? It is critical that we begin to recognize the ideologies driving what we are being told. Resolving the fracas requires us to see through the specter of AI to stay true to the humanity of our values.

One way to decode the motives behind the various declarations is through their language. Because language itself is part of their battleground, the different AI camps tend not to use the same words to describe their positions. One faction describes the dangers posed by AI through the framework of safety, another through ethics or integrity, yet another through security, and others through economics. By decoding who is speaking and how AI is being described, we can explore where these groups differ and what drives their views.

The Doomsayers

The loudest perspective is a frightening, dystopian vision in which AI poses an existential risk to humankind, capable of wiping out all life on Earth. AI, in this vision, emerges as a godlike, superintelligent, ungovernable entity capable of controlling everything. AI could destroy humanity or pose a risk on par with nukes. If we’re not careful, it could kill everyone or enslave humanity. It’s likened to monsters like the Lovecraftian shoggoths, artificial servants that rebelled against their creators, or paper clip maximizers that consume all of Earth’s resources in a single-minded pursuit of their programmed goal. It sounds like science fiction, but these people are serious, and they mean the words they use.

These are the AI safety people, and their ranks include the “Godfathers of AI,” Geoff Hinton and Yoshua Bengio. For many years, these leading lights battled critics who doubted that a computer could ever mimic capabilities of the human mind. Having steamrollered the public conversation by creating large language models like ChatGPT and other AI tools capable of increasingly impressive feats, they appear deeply invested in the idea that there is no limit to what their creations will be able to accomplish.

This doomsaying is boosted by a class of tech elite that has enormous power to shape the conversation. And some in this group are animated by the radical effective altruism movement and the associated cause of long-termism, which tend to focus on the most extreme catastrophic risks and emphasize the far-future consequences of our actions. These philosophies are hot among the cryptocurrency crowd, like the disgraced former billionaire Sam Bankman-Fried, who at one time possessed sudden wealth in search of a cause.

Reasonable sounding on their face, these ideas can become dangerous if stretched to their logical extremes. A dogmatic long-termer would willingly sacrifice the well-being of people today to stave off a prophesied extinction event like AI enslavement.

Many doomsayers say they are acting rationally, but their hype about hypothetical existential risks amounts to making a misguided bet with our future. In the name of long-termism, Elon Musk reportedly believes that our society needs to encourage reproduction among those with the greatest culture and intelligence (namely, his ultrarich buddies), and he wants to go further: limiting the right to vote to parents and even populating Mars. It’s widely believed that Jaan Tallinn, the wealthy long-termer who co-founded the most prominent centers for the study of AI safety, has made dismissive noises about climate change because he thinks that it pales in comparison with far-future unknown unknowns like risks from AI. The technology historian David C. Brock calls these fears “wishful worries”—that is, “problems that it would be nice to have, in contrast to the actual agonies of the present.”

More practically, many of the researchers in this group are proceeding full steam ahead in developing AI, demonstrating how unrealistic it is to simply hit pause on technological development. But the roboticist Rodney Brooks has pointed out that we will see the existential risks coming—the dangers will not be sudden and we will have time to change course. While we shouldn’t dismiss the Hollywood nightmare scenarios out of hand, we must balance them with the potential benefits of AI and, most important, not allow them to strategically distract from more immediate concerns. Let’s not let apocalyptic prognostications overwhelm us and smother the momentum we need to develop critical guardrails.

The Reformers

While the doomsayer faction focuses on the far-off future, its most prominent opponents are focused on the here and now. We agree with this group that there’s plenty already happening to cause concern: Racist policing and legal systems that disproportionately arrest and punish people of color. Sexist labor systems that rate feminine-coded résumés lower. Superpower nations automating military interventions as tools of imperialism and, someday, killer robots.

The alternative to the end-of-the-world, existential risk narrative is a distressingly familiar vision of dystopia: a society in which humanity’s worst instincts are encoded into and enforced by machines. The doomsayers think AI enslavement looks like the Matrix; the reformers point to modern-day contractors doing traumatic work at low pay for OpenAI in Kenya.

Propagators of these AI ethics concerns—like Meredith Broussard, Safiya Umoja Noble, Rumman Chowdhury, and Cathy O’Neil—have been raising the alarm on inequities coded into AI for years. Although we don’t have a census, it’s noticeable that many leaders in this cohort are people of color, women, and people who identify as LGBTQ. They are often motivated by insight into what it feels like to be on the wrong end of algorithmic oppression and by a connection to the communities most vulnerable to the misuse of new technology. Many in this group take an explicitly social perspective: When Joy Buolamwini founded an organization to fight for equitable AI, she called it the Algorithmic Justice League. Ruha Benjamin called her organization the Ida B. Wells Just Data Lab.

Others frame efforts to reform AI in terms of integrity, calling for Big Tech to adhere to an oath to consider the benefit of the broader public alongside—or even above—their self-interest. They point to social media companies’ failure to control hate speech or how online misinformation can undermine democratic elections. Adding urgency for this group is that the very companies driving the AI revolution have, at times, been eliminating safeguards. A signal moment came when Timnit Gebru, a co-leader of Google’s AI ethics team, was dismissed for pointing out the risks of developing ever-larger AI language models.

While doomsayers and reformers share the concern that AI must align with human interests, reformers tend to push back hard against the doomsayers’ focus on the distant future. They want to wrestle the attention of regulators and advocates back toward present-day harms that are exacerbated by AI misinformation, surveillance, and inequity. Integrity experts call for the development of responsible AI, for civic education to ensure AI literacy and for keeping humans front and center in AI systems.

This group’s concerns are well documented and urgent—and far older than modern AI technologies. Surely, we are a civilization big enough to tackle more than one problem at a time; even those worried that AI might kill us in the future should still demand that it not profile and exploit us in the present.

The Warriors

Other groups of prognosticators cast the rise of AI through the language of competitiveness and national security. One version has a post-9/11 ring to it—a world where terrorists, criminals, and psychopaths have unfettered access to technologies of mass destruction. Another version is a Cold War narrative of the United States losing an AI arms race with China and its surveillance-rich society.

Some arguing from this perspective are acting on genuine national security concerns, and others have a simple motivation: money. These perspectives serve the interests of American tech tycoons as well as the government agencies and defense contractors they are intertwined with.

OpenAI’s Sam Altman and Meta’s Mark Zuckerberg, both of whom lead dominant AI companies, are pushing for AI regulations that they say will protect us from criminals and terrorists. Such regulations would be expensive to comply with and are likely to preserve the market position of leading AI companies while restricting competition from start-ups. In the lobbying battles over Europe’s trailblazing AI regulatory framework, US megacompanies pleaded to exempt their general-purpose AI from the tightest regulations, and whether and how to apply high-risk compliance expectations on noncorporate open-source models emerged as a key point of debate. All the while, some of the moguls investing in upstart companies are fighting the regulatory tide. The Inflection AI co-founder Reid Hoffman argued, “The answer to our challenges is not to slow down technology but to accelerate it.”

Any technology critical to national defense usually has an easier time avoiding oversight, regulation, and limitations on profit. Any readiness gap in our military demands urgent budget increases and funds distributed to the military branches and their contractors, because we may soon be called upon to fight. Tech moguls like Google’s former chief executive Eric Schmidt, who has the ear of many lawmakers, signal to American policymakers about the Chinese threat even as they invest in US national security concerns.

The warriors’ narrative overlooks the fact that science and engineering today are different from what they were during the mid-twentieth century. AI research is fundamentally international; no one country will win a monopoly. And while national security is important to consider, we must also be mindful of the self-interest of those positioned to benefit financially.


As the science-fiction author Ted Chiang has said, fears about the existential risks of AI are really fears about the threat of uncontrolled capitalism, and dystopias like the paper clip maximizer are just caricatures of every start-up’s business plan. Cosma Shalizi and Henry Farrell further argue that “we’ve lived among shoggoths for centuries, tending to them as though they were our masters” as monopolistic platforms devour and exploit the totality of humanity’s labor and ingenuity for their own interests. This dread applies as much to our future with AI as it does to our past and present with corporations.

Regulatory solutions do not need to reinvent the wheel. Instead, we need to double down on the rules that we know limit corporate power. We need to get more serious about establishing good and effective governance on all the issues we lost track of while we were becoming obsessed with AI, China, and the fights picked among robber barons.

By analogy to the healthcare sector, we need an AI public option to truly keep AI companies in check. A publicly directed AI development project would serve to counterbalance for-profit corporate AI and help ensure an even playing field for access to the twenty-first century’s key technology while offering a platform for the ethical development and use of AI.

Also, we should embrace the humanity behind AI. We can hold founders and corporations accountable by mandating greater AI transparency in the development stage, in addition to applying legal standards for actions associated with AI. Remarkably, this is something that both the left and the right can agree on.

Ultimately, we need to make sure the network of laws and regulations that govern our collective behavior is knit more strongly, with fewer gaps and greater ability to hold the powerful accountable, particularly in those areas most sensitive to our democracy and environment. As those with power and privilege seem poised to harness AI to accumulate much more or pursue extreme ideologies, let’s think about how we can constrain their influence in the public square rather than cede our attention to their most bombastic nightmare visions for the future.

This essay was written with Nathan Sanders, and previously appeared in the New York Times.

Source: AI Risks – Schneier on Security