Gas stoves emit benzene, linked to cancer, a new Stanford study shows

When the blue flame fires up on a gas stove, there’s more than heat coming off the burner. Researchers at Stanford University found that among the pollutants emitted from stoves is benzene, which is linked to cancer.

Indoor benzene levels can climb higher than those found in secondhand tobacco smoke, and the pollution can spread throughout a home, according to the research.

The findings add to a growing body of scientific evidence showing that emissions within the home are more harmful than gas stove owners have been led to believe.

[…]

The risks of benzene have long been known. The Centers for Disease Control and Prevention says the chemical is linked to leukemia and other blood cell cancers.

“Benzene forms in flames and other high-temperature environments, such as the flares found in oil fields and refineries. We now know that benzene also forms in the flames of gas stoves in our homes,” said Rob Jackson in a statement. He’s the study’s senior author and a Stanford professor of earth sciences.

With one burner on high or the oven set to 350 degrees, the researchers found benzene levels in a house can be worse than average levels in secondhand tobacco smoke. And they found the toxin doesn’t just stay in the kitchen; it can migrate to other rooms, such as bedrooms.

“Good ventilation helps reduce pollutant concentrations, but we found that exhaust fans were often ineffective at eliminating benzene exposure,” Jackson said. He says this is the first paper to analyze benzene emissions when a stove or oven is in use.

Researchers also tested whether cooking food – pan-frying salmon or bacon – emits benzene but found all the pollution came from the gas and not the food.

[…]

The American Gas Association, which represents natural gas utilities, routinely casts doubt over scientific research showing that burning natural gas in homes can be unhealthy. Last year the powerful trade group criticized a peer-reviewed study showing gas stoves leak benzene even when they are turned off. The AGA offered similar criticism of a 2022 analysis, which showed 12.7% of childhood asthma cases in the U.S. can be attributed to gas stove use in homes.

[…]

Medical experts are starting to take stands against cooking with gas. Nitrogen dioxide emissions have been the biggest concern, because they can trigger respiratory diseases, like asthma. The American Public Health Association has labeled gas cooking stoves “a public health concern,” and the American Medical Association warns that cooking with gas increases the risk of childhood asthma.

[…]


Source: Gas stoves emit benzene, linked to cancer, a new Stanford study shows : NPR

AIs are being fed with AI output by the people who are supposed to feed AI with original input

Workers hired via crowdsource services like Amazon Mechanical Turk are using large language models to complete their tasks – which could have negative knock-on effects on AI models in the future.

Data is critical to AI. Developers need clean, high-quality datasets to build machine learning systems that are accurate and reliable. Compiling valuable, top-notch data, however, can be tedious. Companies often turn to third-party platforms such as Amazon Mechanical Turk to instruct pools of cheap workers to perform repetitive tasks – such as labeling objects, describing situations, transcribing passages, and annotating text.

Their output can be cleaned up and fed into a model to train it to reproduce that work on a much larger, automated scale.

AI models are thus built on the backs of human labor: people toiling away, providing mountains of training examples for AI systems that corporations can use to make billions of dollars.

But an experiment conducted by researchers at the École polytechnique fédérale de Lausanne (EPFL) in Switzerland has concluded that these crowdsourced workers are using AI systems – such as OpenAI’s chatbot ChatGPT – to perform odd jobs online.

Training a model on its own output is not recommended. We could see AI models being trained on data generated not by people, but by other AI models – perhaps even the same models. That could lead to disastrous output quality, more bias, and other unwanted effects.
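The collapse the researchers warn about can be illustrated with a toy experiment (ours, not the study’s): repeatedly fit a simple model to its own generated output and watch the data’s diversity drain away. Here a Gaussian fit stands in for the “model”:

```python
import random
import statistics

random.seed(0)

def fit_and_resample(data, n):
    # "Training" = fitting a Gaussian to the data;
    # "generating" = sampling fresh data from that fit.
    mu = statistics.fmean(data)
    sigma = statistics.pstdev(data)
    return [random.gauss(mu, sigma) for _ in range(n)]

# Generation 0: genuine "human" data.
data = [random.gauss(0.0, 1.0) for _ in range(20)]
start_spread = statistics.pstdev(data)

# Every later generation trains only on the previous generation's output.
for _ in range(1000):
    data = fit_and_resample(data, 20)

end_spread = statistics.pstdev(data)
# end_spread is a tiny fraction of start_spread: the variety in the
# original data has collapsed after enough self-training rounds.
```

The spread shrinks because each small-sample fit slightly underestimates it, and rare values stop being regenerated – a crude analogue of the quality and bias problems the EPFL team describes.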

The experiment

The academics recruited 44 Mechanical Turk serfs to summarize the abstracts of 16 medical research papers, and estimated that 33 to 46 percent of passages of text submitted by the workers were generated using large language models. Crowd workers are often paid low wages – using AI to automatically generate responses allows them to work faster and take on more jobs to increase pay.

The Swiss team trained a classifier to predict whether submissions from the Turkers were human- or AI-generated. The academics also logged their workers’ keystrokes to detect whether the serfs copied and pasted text onto the platform, or typed in their entries themselves. There’s always the chance that someone uses a chatbot and then manually types in the output – but that’s unlikely, we suppose.
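The paper’s actual classifier and features aren’t reproduced here, but the general idea – score a submission against a “human-written” class and an “AI-written” class – can be sketched with a minimal bag-of-words naive Bayes. The training texts below are invented for illustration:

```python
import math
from collections import Counter

def train(docs):
    """docs: list of (label, text) pairs. Returns a naive Bayes model."""
    word_counts = {}
    class_counts = Counter()
    for label, text in docs:
        class_counts[label] += 1
        word_counts.setdefault(label, Counter()).update(text.lower().split())
    vocab = {w for counts in word_counts.values() for w in counts}
    return word_counts, class_counts, vocab

def predict(model, text):
    """Return the most likely label under multinomial naive Bayes."""
    word_counts, class_counts, vocab = model
    total_docs = sum(class_counts.values())
    scores = {}
    for label in class_counts:
        score = math.log(class_counts[label] / total_docs)
        total_words = sum(word_counts[label].values())
        for word in text.lower().split():
            # Laplace smoothing so unseen words don't zero out a class.
            score += math.log((word_counts[label][word] + 1) /
                              (total_words + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

# Hypothetical toy data: informal "human" style vs boilerplate "ai" style.
model = train([
    ("human", "the drug kinda worked for most patients"),
    ("human", "patients got better fast honestly"),
    ("ai", "the study demonstrates significant improvements in patient outcomes"),
    ("ai", "results indicate a statistically significant effect"),
])
```

A real detector would use far richer features (and, as in the study, side signals like keystroke logs), but the scoring principle is the same.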

“We developed a very specific methodology that worked very well for detecting synthetic text in our scenario,” Manoel Ribeiro, co-author of the study and a PhD student at EPFL, told The Register this week.

[…]

Large language models will get worse if they are increasingly trained on fake content generated by AI collected from crowdsource platforms, the researchers argued. Outfits like OpenAI keep exactly how they train their latest models a closely guarded secret, and may not rely heavily on things like Mechanical Turk, if at all. That said, plenty of other models may rely on human workers, who may in turn use bots to generate the training data – which is a problem.

Mechanical Turk, for one, is marketed as a provider of “data labeling solutions to power machine learning models.”

[…]

As AI continues to improve, it’s likely that crowdsourced work will change. Ribeiro speculated that large language models could replace some workers at specific tasks. “However, paradoxically, human data may be more precious than ever and thus it may be that these platforms will be able to implement ways to prevent large language model usage and ensure it remains a source of human data.”

Who knows – humans might even end up collaborating with large language models to generate responses, he added.

Source: Today’s AI is artificial artificial artificial intelligence • The Register

It’s like a photocopy of a photocopy of a photocopy…

Meta’s Voicebox AI does text-to-speech without huge training data per voice

Meta has unveiled Voicebox, its generative text-to-speech model that promises to do for the spoken word what ChatGPT and Dall-E, respectively, did for text and image generation.

Essentially, it’s a text-to-output generator just like GPT or Dall-E — only instead of creating prose or pretty pictures, it spits out audio clips. Meta defines the system as “a non-autoregressive flow-matching model trained to infill speech, given audio context and text.” It’s been trained on more than 50,000 hours of unfiltered audio. Specifically, Meta used recorded speech and transcripts from a bunch of public domain audiobooks written in English, French, Spanish, German, Polish, and Portuguese.

That diverse data set allows the system to generate more conversational-sounding speech, regardless of the languages spoken by each party, according to the researchers. “Our results show that speech recognition models trained on Voicebox-generated synthetic speech perform almost as well as models trained on real speech.” What’s more, models trained on the computer-generated speech showed just a 1 percent error rate degradation, compared with the 45 to 70 percent drop-off seen with existing TTS models.

The system was first taught to predict speech segments based on the segments around them as well as the passage’s transcript. “Having learned to infill speech from context, the model can then apply this across speech generation tasks, including generating portions in the middle of an audio recording without having to recreate the entire input,” the Meta researchers explained.

[…]

Text-to-speech generators have been around for a minute — they’re how your parents’ TomToms were able to give dodgy driving directions in Morgan Freeman’s voice. Modern iterations like Speechify or ElevenLabs’ Prime Voice AI are far more capable, but they still largely require mountains of source material in order to properly mimic their subject — and then another mountain of different data for every. single. other. subject you want them trained on.

Voicebox doesn’t, thanks to a novel zero-shot text-to-speech training method Meta calls Flow Matching. The benchmark results aren’t even close: Meta’s AI reportedly outperformed the current state of the art in both intelligibility (a 1.9 percent word error rate vs 5.9 percent) and “audio similarity” (a composite score of 0.681 to the state of the art’s 0.580), all while operating as much as 20 times faster than today’s best TTS systems.
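For reference, the word error rate quoted in those benchmarks is simply word-level edit distance divided by the length of the reference transcript. A minimal sketch of the metric (not Meta’s evaluation code):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + insertions + deletions) / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance computed over words rather than characters.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost)  # substitution/match
    return d[len(ref)][len(hyp)] / len(ref)
```

For example, `word_error_rate("the cat sat on the mat", "the cat sat on a mat")` is 1/6 ≈ 0.167: one substituted word out of six.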

[…]

The company released a series of audio examples as well as the program’s initial research paper. In the future, the research team hopes the technology will find its way into prosthetics for patients with vocal cord damage, in-game NPCs and digital assistants.

Source: Meta’s Voicebox AI is a Dall-E for text-to-speech | Engadget

Ransomware gang lists first victims of MOVEit mass-hacks, including US banks and universities, federal and state govt, huge companies, more more more

Clop, the ransomware gang responsible for exploiting a critical security vulnerability in a popular corporate file transfer tool, has begun listing victims of the mass-hacks, including a number of U.S. banks and universities.

The Russia-linked ransomware gang has been exploiting the security flaw in MOVEit Transfer, a tool used by corporations and enterprises to share large files over the internet, since late May. Progress Software, which develops the MOVEit software, patched the vulnerability — but not before hackers compromised a number of its customers.

While the exact number of victims remains unknown, Clop on Wednesday listed the first batch of organizations it says it hacked by exploiting the MOVEit flaw. The victim list, which was posted to Clop’s dark web leak site, includes U.S.-based financial services organizations 1st Source and First National Bankers Bank; Boston-based investment management firm Putnam Investments; the Netherlands-based Landal Greenparks; and the U.K.-based energy giant Shell.

GreenShield Canada, a non-profit benefits carrier that provides health and dental benefits, was listed on the leak site but has since been removed.

Other victims listed include financial software provider Datasite; educational non-profit National Student Clearinghouse; student health insurance provider United Healthcare Student Resources; American manufacturer Leggett & Platt; Swiss insurance company ÖKK; and the University System of Georgia (USG).

[…]

Clop, which like other ransomware gangs typically contacts its victims to demand a ransom payment to decrypt or delete their stolen files, took the unusual step of not contacting the organizations it had hacked. Instead, a blackmail message posted on its dark web leak site told victims to contact the gang prior to its June 14 deadline.

[…]

Multiple organizations have previously disclosed they were compromised as a result of the attacks, including the BBC, Aer Lingus and British Airways. These organizations were all affected because they rely on HR and payroll software supplier Zellis, which confirmed that its MOVEit system was compromised.

The Government of Nova Scotia, which uses MOVEit to share files across departments, also confirmed it was affected, and said in a statement that some citizens’ personal information may have been compromised. However, in a message on its leak site, Clop said, “if you are a government, city or police service… we erased all your data.”

[…]

Source: Ransomware gang lists first victims of MOVEit mass-hacks, including US banks and universities | TechCrunch

Also: US energy department and other agencies hit by hackers in MoveIt breach | Guardian

Also: Millions of Americans’ personal data exposed in global hack

This list is searchable here: MOVEit victim list – Progress Software MOVEit Transfer global cyber incident