Water molecule discovery on ion layer contradicts textbook models

Textbook models will need to be re-drawn after a team of researchers found that water molecules at the surface of salt water are organised differently than previously thought.

Many important reactions related to climate and environmental processes take place where water molecules interface with air. For example, the evaporation of ocean water plays an important role in atmospheric chemistry and climate science. Understanding these reactions is crucial to efforts to mitigate the human effect on our planet.

The distribution of ions at the interface of air and water can affect atmospheric processes. However, the precise nature of the microscopic reactions at these important interfaces has so far been intensely debated.

In a paper published today in the journal Nature Chemistry, researchers from the University of Cambridge and the Max Planck Institute for Polymer Research in Germany show that ions and water molecules at the surface of most salt-water solutions, known as electrolyte solutions, are organised in a completely different way than traditionally understood. This could lead to better atmospheric chemistry models and other applications.

[…]

The combined results showed that both positively charged ions, called cations, and negatively charged ions, called anions, are depleted from the water/air interface. The cations and anions of simple electrolytes orient water molecules in both up- and down-orientation. This is a reversal of textbook models, which teach that ions form an electrical double layer and orient water molecules in only one direction.

Co-first author Dr Yair Litman, from the Yusuf Hamied Department of Chemistry, said: “Our work demonstrates that the surface of simple electrolyte solutions has a different ion distribution than previously thought and that the ion-enriched subsurface determines how the interface is organised: at the very top there are a few layers of pure water, then an ion-rich layer, then finally the bulk salt solution.”

Co-first author Dr Kuo-Yang Chiang of the Max Planck Institute said: “This paper shows that combining high-level HD-VSFG with simulations is an invaluable tool that will contribute to the molecular-level understanding of liquid interfaces.”

Professor Mischa Bonn, who heads the Molecular Spectroscopy department of the Max Planck Institute, added: “These types of interfaces occur everywhere on the planet, so studying them not only helps our fundamental understanding but can also lead to better devices and technologies. We are applying these same methods to study solid/liquid interfaces, which could have potential applications in batteries and energy storage.”

Source: Water molecule discovery contradicts textbook models | ScienceDaily

Wind turbines are friendlier to birds than oil-and-gas drilling

[…] few have looked at the effects on wildlife at the population level. Enter Erik Katovich, an economist at the University of Geneva. Dr Katovich made use of the Christmas Bird Count, a citizen-science project run by the National Audubon Society, an American non-profit outfit. Volunteers count birds they spot over Christmas, and the society compiles the numbers. Its records stretch back over a century.

Dr Katovich assumed, reasonably, that if wind turbines harmed bird populations, then the numbers seen in the Christmas Bird Count would drop in places where new turbines had been built. He combined bird population and species maps with the locations and construction dates of all wind turbines in the United States, with the exceptions of Alaska and Hawaii, between 2000 and 2020. He found that building turbines had no discernible effect on bird populations. That reassuring finding held even when he looked specifically at large birds like hawks, vultures and eagles that many people believe are particularly vulnerable to being struck.

But Dr Katovich did not confine his analysis to wind power alone. He also examined oil-and-gas extraction.

[…]

Comparing bird populations to the locations of new gas wells revealed an average 15% drop in bird numbers when new wells were drilled, probably due to a combination of noise, air pollution and the disturbance of rivers and ponds that many birds rely upon. When drilling happens in places designated by the National Audubon Society as “important bird areas”, bird numbers instead dropped by 25%. Such places are typically migration hubs, feeding grounds or breeding locations.

Wind power, in other words, not only produces far less planet-heating carbon dioxide and methane than do fossil fuels. It appears to be significantly less damaging to wildlife, too. Yet that is not the impression you would get from reading the news. Dr Katovich found 173 stories in major American news outlets reporting the supposed negative effects that wind turbines have on birds in 2020, compared with only 46 stories discussing the effects of oil-and-gas wells. Wind turbines might look dramatic. But their effect on birds is not.

Source: Wind turbines are friendlier to birds than oil-and-gas drilling

Study of wide binary stars reveals new evidence for modified gravity at low acceleration

A new study published in The Astrophysical Journal reveals new evidence for standard gravity breaking down in an idiosyncratic manner at low acceleration. This new study reinforces the evidence for modified gravity that was previously reported in 2023 from an analysis of the orbital motions of gravitationally bound, widely separated (or long-period) binary stars, known as wide binaries.

The new study was carried out by Kyu-Hyun Chae, a professor of physics and astronomy at Sejong University in Seoul, South Korea, with wide binaries observed by the European Space Agency’s Gaia space telescope.

The gravitational anomalies Chae reported in 2023 have a distinctive signature: orbital motions in binaries show larger accelerations than Newtonian predictions once the mutual gravitational acceleration is weaker than about 1 nanometer per second squared, and the acceleration boost factor grows to about 1.4 at accelerations below about 0.1 nanometer per second squared.

This elevated acceleration in wide binaries cannot be explained by invoking undetected dark matter, because the required dark matter density is ruled out by galactic dynamics and cosmological observations.

Remarkably, the elevated acceleration agrees well with what MOND (modified Newtonian dynamics)-type modified gravity theories such as AQUAL predict under the external field effect of the Milky Way. The MOND paradigm was suggested by physicist Mordehai Milgrom and the AQUAL theory was formulated by him and the late physicist Jacob Bekenstein 40 years ago.

Because gravitationally-bound astrophysical systems such as galaxies and galaxy clusters and the universe itself are governed by gravity, the breakdown of standard gravity at low acceleration has profound implications for astrophysics and cosmology.

[…]

Chae conservatively selected up to 2,463 pure binaries, less than 10% of the sample used in the earlier study. Since the expected fraction of pure binaries among apparently binary systems is at least 50%, this much lower fraction indicates that the selection was sufficiently strict.

Chae applied two algorithms to test gravity with the sample of pure binaries. In one algorithm, originally developed in the earlier work for general or “impure” samples, he used a Monte Carlo method to calculate the observed kinematic acceleration, defined as the relative velocity squared divided by the separation in real three-dimensional space, as a function of the Newtonian gravitational acceleration between the two stars, and then compared it with the corresponding Newtonian prediction of the kinematic acceleration.

In the other algorithm, which is simpler and suited to pure binaries, Chae used a Monte Carlo method to compare the observed distribution of sky-projected relative velocities between the two stars, as a function of their sky-projected separations, with the Newton-predicted distribution.

Both algorithms produce consistent results that agree well with the gravitational anomaly reported earlier.
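To make the second algorithm concrete, here is a hypothetical Monte Carlo sketch in Python (my own illustration, not Chae’s code). It draws circular binaries with random orientations and compares sky-projected relative speeds under Newtonian gravity against a scenario in which the effective gravitational acceleration is boosted by 40%; real analyses also sample eccentricities, masses and Gaia measurement errors.

```python
import numpy as np

# Constants (SI)
G, M_SUN, AU = 6.674e-11, 1.989e30, 1.496e11

def projected_speeds(n, m_tot=1.5, a_au=5000.0, boost=1.0, seed=0):
    """Sky-projected relative speeds for circular binaries of total mass
    m_tot (solar masses) and separation a_au, viewed from random directions.
    boost=1.0 is Newtonian; boost=1.4 mimics the reported anomaly."""
    rng = np.random.default_rng(seed)
    a = a_au * AU
    v_circ = np.sqrt(boost * G * m_tot * M_SUN / a)   # circular orbital speed
    cos_i = rng.uniform(-1.0, 1.0, n)                 # random inclinations
    phi = rng.uniform(0.0, 2.0 * np.pi, n)            # random orbital phases
    # project the in-plane velocity onto the sky for inclination i
    return v_circ * np.hypot(np.sin(phi), np.cos(phi) * cos_i)

v_newton = projected_speeds(100_000)
v_boosted = projected_speeds(100_000, boost=1.4, seed=1)
print(v_boosted.mean() / v_newton.mean())   # ~1.18, an ~18% velocity boost
```

Because orbital speed scales as the square root of the gravitational strength, a 40% acceleration boost shows up as only an ~18 to 20% boost in typical relative velocity, which is the size of the signature quoted below.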

[…]

However, the observed acceleration or relative velocity starts to deviate from the Newtonian prediction at a separation of about 2,000 au (astronomical units) and an acceleration of about 1 nanometer per second squared. There is then a nearly constant boost of about 40 to 50% in acceleration, or about 20% in relative velocity, at separations greater than about 5,000 au or accelerations lower than about 0.1 nanometer per second squared, up to the probed limit of about 20,000 au or 0.01 nanometer per second squared.

Chae’s new results agree well with an independent result from Xavier Hernandez’s group, which is coincidentally in production at present. This is significant because Hernandez’s group selected their sample completely independently of Chae’s selection, and they used an independent algorithm (different from Chae’s two) based on the full distribution of relative velocities for their pure wide binary pairs.

[…]

Chae also points out that this new sample is explicitly free of the data-quality concerns that have been raised in the literature so far. He further addresses a recent contradictory claim by Indranil Banik and co-authors, saying, “Their methodology and results have a lot of problems. Their conclusion is invalid for two main reasons among others.”

“In their sample selection they knowingly excluded Newtonian-regime binaries that are crucial in accurately calibrating the occurrence rate of systems containing hidden additional component(s). Then, they employed a specific statistical algorithm of modeling velocities to infer gravity, the occurrence rate and other parameters simultaneously, but ignored velocity errors, though these are vital for their analysis.”

Chae concludes, “At least three independent quantitative analyses by two independent groups reveal essentially the same gravitational anomaly. The gravitational anomaly is real, and a new scientific paradigm shift is on its way.”

The observed gravitational anomaly is remarkably consistent with MOND-type (Milgromian) gravity phenomenology. However, the underlying theory behind this phenomenology remains an open question, which may be welcome news to theoretical physicists and mathematicians.

[…]

More information: Kyu-Hyun Chae, Robust Evidence for the Breakdown of Standard Gravity at Low Acceleration from Statistically Pure Binaries Free of Hidden Companions, The Astrophysical Journal (2024). DOI: 10.3847/1538-4357/ad0ed5


Source: Study of wide binary stars reveals new evidence for modified gravity at low acceleration

Biophotons: Are lentils communicating using quantum light messages?

[…]

Curceanu hopes the apparatus and methods of nuclear physics can solve the century-old mystery of why lentils – and other organisms too – constantly emit an extremely weak dribble of photons, or particles of light. Some reckon these “biophotons” are of no consequence. Others insist they are a subtle form of lentil communication. Curceanu leans towards the latter camp – and she has a hunch that the pulses between the pulses might even contain secret quantum signals. “These are only the first steps, but it looks extremely interesting,” she says.

There are already hints that living things make use of quantum phenomena, with inconclusive evidence that they feature in photosynthesis and the way birds navigate, among other things. But lentils, not known for their complex behaviour, would be the most startling example yet of quantum biology, says Michal Cifra at the Czech Academy of Sciences in Prague. “It would be amazing,” says Cifra. “If it’s true.” Since so many organisms emit biophotons, such a discovery might indicate that quantum effects are ubiquitous in nature.

Biophotons

Biophotons have had scientists stumped for precisely a century. In 1923, biologist Alexander Gurwitsch was studying how plant cells divide by placing onion roots near each other. The closer the roots were, the more cell division occurred, suggesting there was some signal alerting the roots to their neighbour’s presence.

[…]

To tease out how the onion roots were signalling, Gurwitsch repeated the experiment with all manner of physical barriers between the roots. Wood, metal, glass and even gelatine dampened cell division to the same level seen in single onion roots. But, to Gurwitsch’s surprise, a quartz divider had no effect. Compared to glass, quartz allows far more ultraviolet rays to pass through. Some kind of weak emission of UV radiation, he concluded, must be responsible.

[…]

Living organisms have long been known to communicate using light. Jellyfish, mushrooms and fireflies, to name just a few, glow or emit bright flashes to ward off enemies or attract a mate. But these obvious signals, known as bioluminescence, are different to the effect Gurwitsch had unearthed. Biophotons are “a very low-intensity light, not visible to the naked eye”, says Curceanu’s collaborator Maurizio Benfatto. In fact, biophotons were so weak that it took until 1954 to develop equipment sensitive enough to decisively confirm Gurwitsch’s idea.

Since then, dozens of research groups have reported cases of biophoton emission having a useful function in plants and even animals. Like onion roots, yeast cells are known to influence the growth rate of their neighbours. And in 2022, Zsolt Pónya and Katalin Somfalvi-Tóth at the University of Kaposvár in Hungary observed biophotons being emitted by sunflowers when they were put under stress, which the researchers hoped to use to precisely monitor these crops. Elsewhere, a review carried out by Roeland Van Wijk and Eduard Van Wijk, now at the research company MELUNA in the Netherlands, suggested that biophotons may play a role in various human health conditions, from ageing to acne.

There is a simple explanation for how biophotons are created, too. During normal metabolism, chemical reactions in cells end up converting biomolecules to what researchers call an excited state, in which electrons are elevated to higher energy levels. Those electrons then naturally drop to their ground state and emit a photon in the process. Because germinating seeds, like lentils, burn energy quickly to grow, they emit more biophotons.

Today, no one doubts that biophotons exist. Rather, the dispute is over whether lentils and other organisms have harnessed biophotons in a useful way.

[…]

We know that plants communicate using chemicals and sometimes even emit ultrasonic squeaks when stressed. This allows them to control their growth, warn each other about invading insects and attract pollinators. We also know they have ways of detecting and responding to photons in the form of regular sunlight. “Biological systems can detect photons and have feedback loops based on that,”

[…]

Curceanu and Benfatto are hoping that the application of serious physics equipment to this problem could finally let us eavesdrop on the legume’s secrets. They typically use supersensitive detectors to probe the foundations of reality. Now, they are applying these to a box of 75 lentil seeds – they need that many because if they used any fewer, the biophoton signals would be too weak.

[…]

Years ago, Benfatto came across a paper on biophotons and noticed there appeared to be patterns in the way they were produced. The intensity would swell, then fall away, almost like music. This gave him the idea of applying a method from physics called diffusion entropy analysis to investigate these patterns. The method provides a means of characterising the mathematical structures that underlie complex patterns. Imagine comparing a simple drumbeat with the melody of a pop song, for example – the method Benfatto wanted to apply could quantify the complexity embodied in each.
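As a rough illustration of the idea (a generic sketch, not the team’s actual pipeline), diffusion entropy analysis turns a time series into an ensemble of “diffusion” trajectories, estimates the Shannon entropy S(t) of their distribution at each timescale t, and fits S(t) = A + δ ln t. A value of δ = 0.5 corresponds to ordinary uncorrelated diffusion, while other values signal complex, correlated dynamics.

```python
import numpy as np

def diffusion_entropy(signal, windows):
    """Generic diffusion entropy analysis: return S(t) and the scaling
    exponent delta from the fit S(t) = A + delta * ln(t)."""
    signal = np.asarray(signal, dtype=float)
    csum = np.concatenate(([0.0], np.cumsum(signal)))
    entropies = []
    for t in windows:
        d = csum[t:] - csum[:-t]          # diffusion variable over windows of length t
        hist, edges = np.histogram(d, bins=60, density=True)
        widths = np.diff(edges)
        mask = hist > 0
        # differential Shannon entropy of the diffusion distribution
        entropies.append(-np.sum(hist[mask] * widths[mask] * np.log(hist[mask])))
    delta = np.polyfit(np.log(windows), entropies, 1)[0]
    return np.array(entropies), delta

# Example: Poisson photon counts per bin (uncorrelated) should give delta ~ 0.5
counts = np.random.default_rng(0).poisson(3.0, 50_000)
windows = np.unique(np.logspace(1, 3, 20).astype(int))
_, delta = diffusion_entropy(counts, windows)
print(f"delta = {delta:.2f}")
```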

To apply this to the lentils, Benfatto, Curceanu and their colleagues put their seeds in a black box that shielded them from interference. Outside the box, they mounted an instrument capable of detecting single biophotons. They also had rotating filters that allowed them to detect photons with different wavelengths. All that remained was to set the lentils growing. “We add water and then we wait,” says Benfatto.

In 2021, they unveiled their initial findings. It turned out that the biophotons’ signals changed significantly during the lentils’ germination. During the first phase, the photons were emitted in a pattern that repeatedly reset, like a piece of music changing tempo. Then, during the second phase, the emissions took the form of another kind of complex pattern called fractional Brownian motion.

 

Photograph by Catalina Curceanu showing the experimental setup used for the paper “Biophotons and Emergence of Quantum Coherence – A Diffusion Entropy Analysis”. Are these germinating lentils communicating in quantum code?

 

The fact that the lentils’ biophoton emissions aren’t random is an indication that they could be communicating, says Benfatto. And that’s not all. Tantalisingly, the complexity in the second phase of the emissions is mathematically related to the equations of quantum mechanics. For this reason, Benfatto says his team’s work hints that signals displaying quantum coherence could have a role in directing lentil germination.

[…]

Part of the problem with designing experiments like these is that we don’t really know what quantum mechanical effects in living organisms look like. Any quantum effects discovered in lentils and other organisms would be “very different to textbook quantum mechanics”, says Scholes.

[…]

so far, the evidence for quantum lentils is sketchy. Still, he is pushing ahead with a new experimental design that makes the signal-to-noise ratio 100 times better. If you want to earwig on the clandestine whispers of these seeds, it might just help to get rid of their noisy neighbours, which is why he will study one germinating lentil at a time.

Source: Biophotons: Are lentils sending secret quantum messages? | New Scientist

Clarified at last: The physics of popping champagne

It sounds like a simple, well-known everyday phenomenon: there is high pressure in a champagne bottle, the stopper is driven outwards by the compressed gas and flies away with a powerful pop. But the physics behind this is complicated.

[…]

Using complex computer simulations, it was possible to recalculate the behavior of the stopper and the gas flow.

In the process, astonishing phenomena were discovered: a supersonic shock wave is formed and the gas flow can reach more than one and a half times the speed of sound. The results, which appear on the pre-print server arXiv,

[…]

“The champagne cork itself flies away at a comparatively low speed, reaching perhaps 20 meters per second,”

[…]

“However, the gas that flows out of the bottle is much faster,” says Wagner. “It overtakes the cork, flows past it and reaches speeds of up to 400 meters per second.”

That is faster than the speed of sound. The gas jet therefore breaks the sound barrier shortly after the bottle is opened—and this is accompanied by a shock wave.

[…]

“Then there are jumps in these variables, so-called discontinuities,” says Bernhard Scheichl (TU Vienna & AC2T), Lukas Wagner’s dissertation supervisor. “Then the pressure or velocity in front of the shock wave has a completely different value than just behind it.”

This point in the gas jet, where the pressure changes abruptly, is also known as the “Mach disk.” “Very similar phenomena are also known from jet engines or rockets, where the exhaust jet exits the engines at high speed,”

[…]

The Mach disk first forms between the bottle and the cork and then moves back towards the bottle opening.

Temporarily colder than the North Pole

Not only the gas pressure, but also the temperature changes abruptly: “When gas expands, it becomes cooler, as we know from spray cans,” explains Lukas Wagner. This effect is very pronounced in the champagne bottle: the gas can cool down to -130°C at certain points. It can even happen that tiny dry ice crystals are formed from the CO2 that makes the sparkling wine bubble.
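A back-of-envelope estimate makes this cooling plausible. For a reversible adiabatic (isentropic) expansion, T2 = T1 (p2/p1)^((γ-1)/γ). The numbers below are my own illustrative assumptions, not values from the paper:

```python
# Isentropic expansion estimate for CO2 escaping a champagne bottle
gamma = 1.3            # approximate heat-capacity ratio of CO2
T1 = 293.0             # K, bottle stored around 20 degC
p1, p2 = 6.0, 1.0      # bar: typical bottle pressure -> ambient
T2 = T1 * (p2 / p1) ** ((gamma - 1.0) / gamma)
print(f"T2 = {T2:.0f} K = {T2 - 273.15:.0f} degC")   # ~194 K, about -79 degC
```

Simple expansion to ambient pressure already gives about -80°C; in the simulations, the supersonic jet over-expands below ambient pressure before the Mach disk, which is how local temperatures can dip further, toward the reported -130°C.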

“This effect depends on the original temperature of the sparkling wine,” says Lukas Wagner. “Different temperatures lead to dry ice crystals of different sizes, which then scatter light in different ways. This results in variously colored smoke. In principle, you can measure the temperature of the sparkling wine by just looking at the color of the smoke.”

[…]

The audible pop when the bottle is opened is a combination of different effects: Firstly, the cork expands abruptly as soon as it has left the bottle, creating a pressure wave, and secondly, you can hear the shock wave, generated by the supersonic gas jet—very similar to the well-known aeroacoustic phenomenon of the sonic boom.

[…]

More information: Lukas Wagner et al, Simulating the opening of a champagne bottle, arXiv (2023). DOI: 10.48550/arxiv.2312.12271

Source: Clarified at last: The physics of popping champagne

Research team discovers how to sabotage antibiotic-resistant ‘superbugs’

The typical strategy when treating microbial infections is to blast the pathogen with an antibiotic, which works by getting inside the harmful cell and killing it. This is not as easy as it sounds, because any new antibiotic needs to be both water soluble, so that it can travel easily through the bloodstream, and oily, in order to cross the pathogenic cell’s first line of defense, the cellular membrane. Water and oil, of course, don’t mix, and it’s difficult to design a drug that has enough of both characteristics to be effective.

The difficulty doesn’t stop there, either, because pathogenic cells have developed something called an “efflux pump” that can recognize antibiotics and safely excrete them from the cell, where they can’t do any harm. If the antibiotic can’t overcome the efflux pump and kill the cell, then the pathogen “remembers” what that specific antibiotic looks like and develops additional efflux pumps to efficiently handle it—in effect, becoming resistant to that particular antibiotic.

One path forward is to find a new antibiotic, or combinations of them, and try to stay one step ahead of the superbugs.

“Or, we can shift our strategy,” says Alejandro Heuck, associate professor of biochemistry and molecular biology at UMass Amherst and the paper’s senior author.

[…]

Like the pathogenic cell, host cells also have thick, difficult-to-penetrate cell walls. In order to breach them, pathogens have developed a syringe-like machine that first secretes two proteins, known as PopD and PopB. Neither PopD nor PopB individually can breach the cell wall, but the two proteins together can create a “translocon”—the cellular equivalent of a tunnel through the cell membrane. Once the tunnel is established, the pathogenic cell can inject other proteins that do the work of infecting the host.

This entire process is called the Type 3 secretion system—and none of it works without both PopB and PopD. “If we don’t try to kill the pathogen,” says Heuck, “then there’s no chance for it to develop resistance. We’re just sabotaging its machine. The pathogen is still alive; it’s just ineffective, and the host has time to use its natural defenses to get rid of the pathogen.”

[…]

Heuck and his colleagues realized that an enzyme class called the luciferases—similar to the ones that cause lightning bugs to glow at night—could be used as a tracer. They split the enzyme into two halves. One half went into the PopD/PopB proteins, and the other half was engineered into a host cell.

These engineered proteins and hosts can be flooded with different chemical compounds. If the host cell suddenly lights up, that means that PopD/PopB successfully breached the cellular wall, reuniting the two halves of the luciferase, causing them to glow. But if the cells stay dark? “Then we know which molecules break the translocon,” says Heuck.

Heuck is quick to point out that his team’s research not only has obvious applications in the world of pharmaceuticals and public health, but also advances our understanding of exactly how microbes infect healthy cells. “We wanted to study how these proteins worked,” he says, “and then suddenly we discovered that our findings can help solve a public-health problem.”

This research is published in the journal ACS Infectious Diseases.

More information: Hanling Guo et al, Cell-Based Assay to Determine Type 3 Secretion System Translocon Assembly in Pseudomonas aeruginosa Using Split Luciferase, ACS Infectious Diseases (2023). DOI: 10.1021/acsinfecdis.3c00482

Source: Research team discovers how to sabotage antibiotic-resistant ‘superbugs’

Health misinformation is rampant on social media

This article was originally featured on The Conversation.

The global anti-vaccine movement and vaccine hesitancy that accelerated during the COVID-19 pandemic show no signs of abating.

According to a survey of U.S. adults, Americans in October 2023 were less likely to view approved vaccines as safe than they were in April 2021. As vaccine confidence falls, health misinformation continues to spread like wildfire on social media and in real life.

I am a public health expert in health misinformation, science communication, and health behavior change.

In my view, we cannot underestimate the dangers of health misinformation and the need to understand why it spreads and what we can do about it. Health misinformation is defined as any health-related claim that is false based on current scientific consensus.

False claims about vaccines

Vaccines are the No. 1 topic of misleading health claims. Some common myths about vaccines include: […]

The costs of health misinformation

Beliefs in such myths have come at the highest cost.

An estimated 319,000 COVID-19 deaths that occurred between January 2021 and April 2022 in the U.S. could have been prevented if those individuals had been vaccinated, according to a data dashboard from the Brown University School of Public Health. Misinformation and disinformation about COVID-19 vaccines alone have cost the U.S. economy an estimated US$50 million to $300 million per day in direct costs from hospitalizations, long-term illness, lives lost and economic losses from missed work.

Though vaccine myths and misunderstandings tend to dominate conversations about health, there is an abundance of misinformation on social media surrounding diets and eating disorders, smoking or substance use, chronic diseases and medical treatments.

My team’s research and that of others show that social media platforms have become go-to sources for health information, especially among adolescents and young adults. However, many people are not equipped to maneuver the maze of health misinformation.

For example, an analysis of Instagram and TikTok posts from 2022 to 2023 by The Washington Post and the nonprofit news site The Examination found that the food, beverage and dietary supplement industries paid dozens of registered dietitian influencers to post content promoting diet soda, sugar and supplements, reaching millions of viewers. The dietitians’ relationships with the food industry were not always made clear to viewers.

Studies show that health misinformation spread on social media results in fewer people getting vaccinated and can also increase the risk of other health dangers such as disordered eating and unsafe sex practices and sexually transmitted infections. Health misinformation has even bled over into animal health, with a 2023 study finding that 53% of dog owners surveyed in a nationally representative sample report being skeptical of pet vaccines.

Health misinformation is on the rise

One major reason behind the spread of health misinformation is declining trust in science and government. Rising political polarization, coupled with historical medical mistrust among communities that have experienced and continue to experience unequal health care treatment, exacerbates preexisting divides.

The lack of trust is both fueled and reinforced by the way misinformation can spread today. Social media platforms allow people to form information silos with ease; you can curate your networks and your feed by unfollowing or muting contradictory views from your own and liking and sharing content that aligns with your existing beliefs and value systems.

By tailoring content based on past interactions, social media algorithms can unintentionally limit your exposure to diverse perspectives and generate a fragmented and incomplete understanding of information. Even more concerning, a study of misinformation spread on Twitter analyzing data from 2006 to 2017 found that falsehoods were 70% more likely to be shared than the truth and spread “further, faster, deeper and more broadly than the truth” across all categories of information.

How to combat misinformation

The lack of robust and standardized regulation of misinformation content on social media places the difficult task of discerning what is true or false information on individual users. We scientists and research entities can also do better in communicating our science and rebuilding trust, as my colleague and I have previously written. I also provide peer-reviewed recommendations for the important roles that parents/caregivers, policymakers and social media companies can play.

Below are some steps that consumers can take to identify and prevent health misinformation spread:

  • Check the source. Determine the credibility of the health information by checking if the source is a reputable organization or agency such as the World Health Organization, the National Institutes of Health or the Centers for Disease Control and Prevention. Other credible sources include an established medical or scientific institution or a peer-reviewed study in an academic journal. Be cautious of information that comes from unknown or biased sources.
  • Examine author credentials. Look for qualifications, expertise and relevant professional affiliations for the author or authors presenting the information. Be wary if author information is missing or difficult to verify.
  • Pay attention to the date. Scientific knowledge by design is meant to evolve as new evidence emerges. Outdated information may not be the most accurate. Look for recent data and updates that contextualize findings within the broader field.
  • Cross-reference to determine scientific consensus. Cross-reference information across multiple reliable sources. Strong consensus across experts and multiple scientific studies supports the validity of health information. If a health claim on social media contradicts widely accepted scientific consensus and stems from unknown or unreputable sources, it is likely unreliable.
  • Question sensational claims. Misleading health information often uses sensational language designed to provoke strong emotions to grab attention. Phrases like “miracle cure,” “secret remedy” or “guaranteed results” may signal exaggeration. Be alert for potential conflicts of interest and sponsored content.
  • Weigh scientific evidence over individual anecdotes. Prioritize information grounded in scientific studies that have undergone rigorous research methods, such as randomized controlled trials, peer review and validation. When done well with representative samples, the scientific process provides a reliable foundation for health recommendations compared to individual anecdotes. Though personal stories can be compelling, they should not be the sole basis for health decisions.
  • Talk with a health care professional. If health information is confusing or contradictory, seek guidance from trusted health care providers who can offer personalized advice based on their expertise and individual health needs.
  • When in doubt, don’t share. Sharing health claims without validity or verification contributes to misinformation spread and preventable harm.

All of us can play a part in responsibly consuming and sharing information so that the spread of the truth outpaces the false.

Monica Wang receives funding from the National Institutes of Health.

Source: Health misinformation is rampant on social media | Popular Science

Yoga nidra might be a path to better sleep and improved memory

Practicing yoga nidra — a kind of mindfulness training — might improve sleep, cognition, learning, and memory, even in novices, according to a pilot study publishing in the open-access journal PLOS ONE on December 13 by Karuna Datta of the Armed Forces Medical College in India, and colleagues. After a two-week intervention with a cohort of novice practitioners, the researchers found that the percentage of delta-waves in deep sleep increased and that all tested cognitive abilities improved.

Unlike more active forms of yoga, which focus on physical postures, breathing, and muscle control, yoga nidra guides people into a state of conscious relaxation while they are lying down. While it has been reported to improve sleep and cognitive ability, those reports were based more on subjective measures than on objective data. The new study used objective polysomnographic measures of sleep and a battery of cognitive tests. Measurements were taken before and after two weeks of yoga nidra practice, which was carried out during the daytime using a 20-minute audio recording.

Among other things, polysomnography measures brain activity to determine how long each sleep stage lasts and how frequently each stage occurs. After two weeks of yoga nidra, the researchers observed that participants exhibited a significantly increased sleep efficiency and percentage of delta-waves in deep sleep. They also saw faster responses in all cognitive tests with no loss in accuracy and faster and more accurate responses in tasks including tests of working memory, abstraction, fear and anger recognition, and spatial learning and memory tasks. The findings support previous studies which link delta-wave sleep to improved sleep quality as well as better attention and memory.

[…]

Source: Yoga nidra might be a path to better sleep and improved memory | ScienceDaily

New way to charge batteries using indefinite causal order, comes with counterintuitive findings

Batteries that exploit quantum phenomena to gain, distribute and store power promise to surpass the abilities and usefulness of conventional chemical batteries in certain low-power applications. For the first time, researchers, including those from the University of Tokyo, have taken advantage of an unintuitive quantum process that disregards the conventional notion of causality to improve the performance of so-called quantum batteries, bringing this future technology a little closer to reality.

[…]

At present, quantum batteries only exist as laboratory experiments, and researchers around the world are working on the different aspects that are hoped to one day combine into a fully functioning and practical application. Graduate student Yuanbo Chen and Associate Professor Yoshihiko Hasegawa from the Department of Information and Communication Engineering at the University of Tokyo are investigating the best way to charge a quantum battery, and this is where time comes into play. One of the advantages of quantum batteries is that they should be incredibly efficient, but that hinges on the way they are charged.

While it’s still quite a bit bigger than the AA battery you might find around the home, the experimental apparatus acting as a quantum battery demonstrated charging characteristics that could one day improve upon the battery in your smartphone. Credit: Zhu et al, 2023

“Current batteries for low-power devices, such as smartphones or sensors, typically use chemicals such as lithium to store charge, whereas a quantum battery uses quantum particles like arrays of atoms,” said Chen. “While chemical batteries are governed by classical laws of physics, microscopic particles are quantum in nature, so we have a chance to explore ways of using them that bend or even break our intuitive notions of what takes place at small scales. I’m particularly interested in the way quantum particles can work to violate one of our most fundamental experiences, that of time.”

[…]

the team instead used a novel quantum effect they call indefinite causal order, or ICO. In the classical realm, causality follows a clear path, meaning that if event A leads to event B, then the possibility of B causing A is excluded. However, at the quantum scale, ICO allows both directions of causality to exist in what’s known as a quantum superposition, where both can be simultaneously true.
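The canonical realisation of ICO is the “quantum switch”: a control qubit decides the order in which two operations act, and the control is itself put in superposition. The toy numpy model below is only an illustration of that mechanism, not the group’s actual experiment: a single-qubit “battery” is driven by two non-commuting operations in a superposition of orders, and measuring the control in the |+>/|-> basis leaves the battery with different energies.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def rot(axis, theta):
    """Unitary exp(-i*theta*axis/2): a toy 'charging' operation."""
    return np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * axis

U, V = rot(sx, 0.7), rot(sz, 1.1)       # two non-commuting operations
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
minus = np.array([1, -1], dtype=complex) / np.sqrt(2)

# Quantum switch: control |0> applies "V then U", control |1> applies "U then V"
S = np.kron(np.diag([1.0, 0.0]), U @ V) + np.kron(np.diag([0.0, 1.0]), V @ U)

psi0 = np.array([0, 1], dtype=complex)  # battery ground state, <H> = -1/2
H = sz / 2                              # battery Hamiltonian
state = S @ np.kron(plus, psi0)         # control in |+>: both orders at once

M = state.reshape(2, 2)                 # rows: control basis, cols: battery
for c, label in [(plus, "+"), (minus, "-")]:
    amp = c.conj() @ M                  # battery amplitude for this outcome
    p = np.vdot(amp, amp).real
    if p > 1e-12:
        phi = amp / np.sqrt(p)
        E = np.vdot(phi, H @ phi).real
        print(f"control {label}: probability {p:.3f}, battery <H> = {E:+.3f}")
```

In this toy model, the rarer “-” outcome projects the battery into its fully charged state: a small numerical example of how interference between causal orders makes charging depend on outcomes in ways no fixed order allows.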

Common intuition suggests that a more powerful charger results in a battery with a stronger charge. However, the discovery stemming from ICO introduces a remarkable reversal in this relationship; now, it becomes possible to charge a more energetic battery with significantly less power. Credit: Chen et al, 2023

“With ICO, we demonstrated that the way you charge a battery made up of quantum particles could drastically impact its performance,” said Chen. “We saw huge gains in both the energy stored in the system and the charging efficiency. And somewhat counterintuitively, we discovered the surprising effect of an interaction that’s the inverse of what you might expect: A lower-power charger could provide higher energies with greater efficiency than a comparably higher-power charger using the same apparatus.”

The ICO phenomenon the team explored could find uses beyond charging a new generation of low-power devices. The underlying principles, including the inverse interaction effect uncovered here, could improve the performance of other tasks involving thermodynamics or the transfer of heat. One promising example is solar panels, where heat effects can reduce efficiency; ICO could be used to mitigate those effects and lead to gains instead.

More information: Charging Quantum Batteries via Indefinite Causal Order: Theory and Experiment, Physical Review Letters (2023). journals.aps.org/prl/accepted/ … 109d959f76f487564a34

Source: New way to charge batteries harnesses the power of ‘indefinite causal order’

Zooniverse – help explore space, the planet, medicine, science!

[…]

At the Zooniverse, anyone can be a researcher

You don’t need any specialised background, training, or expertise to participate in any Zooniverse projects. We make it easy for anyone to contribute to real academic research, on their own computer, at their own convenience.

You’ll be able to study authentic objects of interest gathered by researchers, like images of faraway galaxies, historical records and diaries, or videos of animals in their natural habitats. By answering simple questions about them, you’ll help contribute to our understanding of our world, our history, our Universe, and more.

With our wide-ranging and ever-expanding suite of projects, covering many disciplines and topics across the sciences and humanities, there’s a place for anyone and everyone to explore, learn and have fun in the Zooniverse. To volunteer with us, just go to the Projects page, choose one you like the look of, and get started.

[…]

Zooniverse projects are constructed with the aim of converting volunteers’ efforts into measurable results. These projects have produced a large number of published research papers, as well as several open-source sets of analyzed data. In some cases, Zooniverse volunteers have even made completely unexpected and scientifically significant discoveries.

A significant amount of this research takes place on the Zooniverse discussion boards, where volunteers can work together with each other and with the research teams. These boards are integrated with each project to allow for everything from quick hashtagging to in-depth collaborative analysis. There is also a central Zooniverse board for general chat and discussion about Zooniverse-wide matters.

Many of the most interesting discoveries from Zooniverse projects have come from discussion between volunteers and researchers. We encourage all users to join the conversation on the discussion boards for more in-depth participation.

Source: About — Zooniverse

Frostquakes are a thing now – being found in the North

A new study has identified a potentially growing natural hazard in the north: frostquakes. With climate change contributing to many observed changes in weather extremes, such as heavy precipitation and cold waves, these seismic events could become more common. Researchers were surprised by the role that wetlands, and drainage channels in irrigated wetlands, play in the origin of frostquakes.

Frostquakes are caused by the rapid freezing of water in the ground. They are most common during extreme winter conditions, when wet, snow-free ground freezes rapidly. They have been reported in northern Finland in 2016, 2019 and 2022, as well as in Chicago in 2019 and Ottawa in 2022, among others.

Roads and other areas cleared of snow in winter are particularly vulnerable to frostquakes.

[…]

“We found that during the winter of 2022–2023 the main sources of frostquakes in Oulu, Finland, were actually swamps, wetlands and areas with high water tables or other places where water accumulates,” says Elena Kozlovskaya, Professor of applied geophysics at the University of Oulu Mining School.

When water in the ground, accumulated during heavy autumn rainfall or melting snow during warm winter weather, freezes and expands rapidly, it causes cracks in the ground, accompanied by tremors and booms. When they occur in populated areas, frostquakes, or cryoseisms, are felt by people and can be accompanied by distinctive noises. Ground motions during frostquakes are comparable to those of other seismic events, such as more distant earthquakes, mining explosions and vibrations produced by freight trains. Frostquakes are also a known phenomenon in permafrost regions.

The new study, currently available as a preprint and set to be published in the journal EGUsphere, is the first applied study of seismic events originating from marshes and wetlands. Researchers from the University of Oulu, Finland, and the Geological Survey of Finland (GTK) showed that fracturing in the uppermost frozen ground can be initiated once the frozen layer is about 5 cm thick or more. Ruptures can propagate deeper and damage infrastructure such as buildings, basements, pipelines and roads.

“With climate change, rapid changes in winter weather have brought frostquakes to the attention of a wider audience, and they may become more common. Although their intensity is usually low, a series of relatively strong frostquakes in Oulu in 2016, which ruptured roads, was the starting point for our research.”

[…]

During several days when the air temperature was falling rapidly, local residents reported ground tremors and unusual sounds to the researchers. These observations were used to identify frostquakes from seismic data. The conditions for a frostquake are favorable when the temperature drops below -20°C at a rate of about one degree per hour.

There are many wetlands close to the seismic stations in Oulu, near the residential areas where the main sources of frostquakes were detected. In Sodankylä, frostquakes were additionally caused by ice fracturing in the Kitinen river. “Frostquakes have often occurred in January, but other times are also possible,” says Moisio.

During frostquakes, seismic surface waves produce high ground accelerations at distances of up to hundreds of meters. “The fractures during frostquakes seem to propagate along drainage channels near roads and in irrigated wetlands,” Kozlovskaya says.

Irrigated wetlands and drainage channels are also abundant around residential areas.

[…]

Further studies will help to identify areas at risk of frostquakes, which will help to prepare and protect the built environment from this specific natural hazard. Researchers at the University of Oulu and GTK aim to create a system that could predict frostquakes based on soil analysis and satellite data.

More information: Nikita Afonin et al, Frost quakes in wetlands in northern Finland during extreme winter weather conditions and related hazard to urban infrastructure (2023). DOI: 10.5194/egusphere-2023-1853

Source: Frostquakes: A new earthquake risk in the north?

AI made from living human brain cells performs speech recognition

Balls of human brain cells linked to a computer have been used to perform a very basic form of speech recognition. The hope is that such systems will use far less energy for AI tasks than silicon chips.

“This is just proof-of-concept to show we can do the job,” says Feng Guo at Indiana University Bloomington. “We do have a long way to go.”

Brain organoids are lumps of nerve cells that form when stem cells are grown in certain conditions. “They are like mini-brains,” says Guo.

It takes two or three months to grow the organoids, which are a few millimetres wide and consist of as many as 100 million nerve cells, he says. Human brains contain around 100 billion nerve cells.

The organoids are then placed on top of a microelectrode array, which is used both to send electrical signals to the organoid and to detect when nerve cells fire in response. The team calls its system “Brainoware”.

New Scientist reported in March that Guo’s team had used this system to try to solve a mathematical system known as the Hénon map.

For the speech recognition task, the organoids had to learn to recognise the voice of one individual from a set of 240 audio clips of eight people pronouncing Japanese vowel sounds. The clips were sent to the organoids as sequences of signals arranged in spatial patterns.

The organoids’ initial responses had an accuracy of around 30 to 40 per cent, says Guo. After training sessions over two days, their accuracy rose to 70 to 80 per cent.

“We call this adaptive learning,” he says. If the organoids were exposed to a drug that stopped new connections forming between nerve cells, there was no improvement.

The training simply involved repeating the audio clips, and no form of feedback was provided to tell the organoids if they were right or wrong, says Guo. This is what is known in AI research as unsupervised learning.
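Brainoware’s setup resembles reservoir computing: a fixed dynamical system (here, the organoid) transforms input sequences, and only a simple readout on the responses is trained. The silicon sketch below is purely illustrative of that architecture; every detail (feature sizes, synthetic “speakers”, ridge readout) is my own assumption rather than the team’s method.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res, n_spk = 12, 200, 8          # input features, reservoir units, speakers

W_in = rng.normal(0.0, 0.5, (n_res, n_in))
W = rng.normal(0.0, 1.0, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # keep the reservoir stable

def reservoir_state(seq):
    """Drive the fixed reservoir with a clip and return its final state."""
    x = np.zeros(n_res)
    for u in seq:
        x = np.tanh(W_in @ u + W @ x)
    return x

# Synthetic "vowel clips": each speaker has a characteristic feature template
templates = rng.normal(0.0, 1.0, (n_spk, n_in))
def clip(spk):
    return templates[spk] + rng.normal(0.0, 0.3, (20, n_in))

X, y = [], []
for spk in range(n_spk):
    for _ in range(30):                   # 8 speakers x 30 clips = 240 clips
        X.append(reservoir_state(clip(spk)))
        y.append(spk)
X, y = np.array(X), np.array(y)

# Train only the linear readout (ridge regression on one-hot targets)
Y = np.eye(n_spk)[y]
W_out = np.linalg.solve(X.T @ X + 1e-2 * np.eye(n_res), X.T @ Y)
accuracy = ((X @ W_out).argmax(axis=1) == y).mean()
print(f"readout accuracy: {accuracy:.2f}")
```

In Brainoware, the “reservoir” is living tissue whose connectivity itself adapts under repeated stimulation, which is where the unsupervised improvement described above comes in.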

There are two big challenges with conventional AI, says Guo. One is its high energy consumption. The other is the inherent limitations of silicon chips, such as their separation of information and processing.

Guo’s team is one of several groups exploring whether biocomputing using living nerve cells can help overcome these challenges. For instance, a company called Cortical Labs in Australia has been teaching brain cells how to play Pong, New Scientist revealed in 2021.

Titouan Parcollet at the University of Cambridge, who works on conventional speech recognition, doesn’t rule out a role for biocomputing in the long run.

“However, it might also be a mistake to think that we need something like the brain to achieve what deep learning is currently doing,” says Parcollet. “Current deep-learning models are actually much better than any brain on specific and targeted tasks.”

Guo and his team’s task is so simplified that it only identifies who is speaking, not what is being said, he says. “The results aren’t really promising from the speech recognition perspective.”

Even if the performance of Brainoware can be improved, another major issue with it is that the organoids can only be maintained for one or two months, says Guo. His team is working on extending this.

“If we want to harness the computation power of organoids for AI computing, we really need to address those limitations,” he says.

Source: AI made from living human brain cells performs speech recognition | New Scientist

Yes, this article bangs on about limitations, but it’s pretty bizarre science, this: using a brain to do AI.

This mathematical trick can help you imagine space-time

The following is an extract from our Lost in Space-Time newsletter. Each month, we hand over the keyboard to a physicist or two to tell you about fascinating ideas from their corner of the universe. You can sign up for Lost in Space-Time for free here.

Space-time is a curious thing. Look around and it’s easy enough to visualise what the space component is in the abstract. It’s three dimensions: left-right, forwards-backwards and up-down. It’s a graph with an x, y and z axis. Time, too, is easy enough. We’re always moving forwards in time so we might visualise it as a straight line or one big arrow. Every second is a little nudge forwards.

But space-time, well that’s a little different. Albert Einstein fused space and time together in his theories of relativity. The outcome was a new fabric of reality, a thing called space-time that permeates the universe. How gravity works popped out of the explorations of this new way of thinking. Rather than gravity being a force that somehow operates remotely through space, Einstein proposed that bodies curve space-time, and it is this curvature that causes them to be gravitationally drawn to each other. Our very best descriptions of the cosmos begin with space-time.

Yet, visualising it is next to impossible. The three dimensions of space and one of time give four dimensions in total. But space-time itself is curved, as Einstein proposed. That means to really imagine it, you need a fifth dimension to curve into.

Luckily, all is not lost. There is a mathematical trick to visualising space-time that I’ve come up with. It’s a simplified way of thinking that not only illustrates how space-time can be curved, but also how such curvature can draw bodies towards each other. It can give you new insight into how gravity works in our cosmos.

First, let’s start with a typical way to draw space-time. Pictures like the one below are meant to illustrate Einstein’s idea that gravity arises in the universe from massive objects distorting space-time. Placing a small object, say a marble, near one of these dimples would result in it rolling towards one of the larger objects, in much the same way that gravity pulls objects together.

 

The weight of different space objects influences the distortion of space-time (Image: Manil Suri)

 

However, the diagram is missing a lot. While the objects depicted are three dimensional, the space they’re curving is only two dimensional. Moreover, time seems to have been entirely omitted, so it’s pure space – not space-time – that’s curving.

Here’s my trick to get around this: simplify things by letting space be only one dimensional. This makes the total number of space-time dimensions a more manageable two.

Now we can represent our 1-D space by the double-arrowed horizontal line in the left panel of the diagram below. Let time be represented by the perpendicular direction, giving a two-dimensional space-time plane. This plane then consists of successive snapshots, stacked one on top of the other, of where objects are located in the single space dimension at each instant.

Suppose now there are objects – say particles – at points A and B in our universe. Then if these particles remained at rest, their trajectories through space-time would just be the two parallel paths AA’ and BB’ as shown. This simply represents the fact that for every time instant, the particles remain exactly where they are in 1-D space. Such behaviour is what we’d expect in the absence of gravity or any other forces.

However, if gravity came into play, we would expect the two particles to draw closer to each other as time went on. In other words, A’ would be much closer to B’ than A was to B.

Now what if gravity, as Einstein proposed, wasn’t a force in the usual sense? What if it couldn’t act directly on A and B to bring them closer, but rather, could only cause such an effect by deforming the 2-D space-time plane? Would there be a suitable such deformation that would still result in A’ getting closer to B’?

(Image: Manil Suri)

The answer is yes. Were the plane drawn on a rubber sheet, you could stretch it in various ways to easily verify that many such deformations exist. The one we’ll pick (why exactly, we’ll see below) is to wrap the plane around a sphere, as shown in the middle panel. This can be mathematically accomplished by the same method used to project a rectangular map of the world onto a globe. The formula this involves (called the “equirectangular projection”) has been known for almost two millennia: vertical lines on the rectangle correspond to lines of longitude on the sphere and horizontal ones to lines of latitude. You can see from the right panel that A’ has indeed gotten closer to B’, just as we might expect under gravity.
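You can verify this numerically. In the sketch below (my own illustration, in arbitrary units), two particles at rest trace vertical world-lines x = constant on the plane; the equirectangular wrap turns these into lines of longitude, and the great-circle separation between the particles shrinks as “time”, i.e. latitude, advances:

```python
import numpy as np

R = 1.0  # sphere radius (arbitrary units)

def great_circle(lat, lon1, lon2):
    """Great-circle distance between two points at the same latitude
    (haversine formula with equal latitudes)."""
    h = np.cos(lat) ** 2 * np.sin((lon2 - lon1) / 2.0) ** 2
    return 2.0 * R * np.arcsin(np.sqrt(h))

# Two particles at rest in 1-D space: world-lines at longitudes 0 and 0.4 rad
lon_A, lon_B = 0.0, 0.4
for t in np.linspace(0.0, 1.4, 5):       # latitude plays the role of time
    print(f"t = {t:.2f}   separation = {great_circle(t, lon_A, lon_B):.3f}")
```

The separation falls roughly as the cosine of the latitude: the straight, parallel world-lines of the flat plane have become converging geodesics on the sphere, which is exactly the behaviour we wanted gravity to produce.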

On the plane, the particles follow the shortest paths between A and A’, and B and B’, respectively. These are just straight lines. On the sphere, the trajectories AA’ and BB’ still represent shortest distance paths. This is because the shortest distance between two points on a spherical surface is always along one of the circles of maximal radius (these include, e.g., lines of longitude and the equator). Such curves that produce the shortest distance are called geodesics. So the geodesics AA’ and BB’ on the plane get transformed to corresponding geodesics on the sphere. (This wouldn’t necessarily happen for an arbitrary deformation, which is why we chose our wrapping around the sphere.)

Einstein postulated that particles not subject to external forces will always move through space-time along such “shortest path” geodesics. In the absence of gravity, these geodesics are just straight lines. Gravity, when introduced, isn’t counted as an external force. Rather, its effect is to curve space-time, hence changing the geodesics. The particles now follow these new geodesics, causing them to draw closer.

This is the key visualisation afforded by our simplified description of space-time. We can begin to understand how gravity, rather than being a force that acts mysteriously at a distance, could really be a result of geometry. How it can act to pull objects together via curvature built into space-time.

The above insight was fundamental to Einstein’s incorporation of gravity into his general theory of relativity. The actual theory is much more complicated, since space-time only curves in the local vicinity of bodies, not globally, as in our model. Moreover, the geometry involved must also respect the fact that nothing can travel faster than the speed of light. This effectively means that the concept of “shortest distance” has to also be modified, with the time dimension having to be treated very differently from the space dimensions.

Nevertheless, Einstein’s explanation posits, for instance, that the sun’s mass curves space-time in our solar system. That is why planets revolve around the sun rather than flying off in straight lines – they are just following the curved geodesics in this deformed space-time.

This has been confirmed by measuring how light from distant astronomical sources gets distorted by massive galaxies. Space-time truly is curved in our universe, it’s not just a mathematical convenience.

There’s a classical Buddhist parable about a group of blind men relying only on touch to figure out an animal unfamiliar to them – an elephant. Space-time is our elephant here – we can never hope to see it in its full 4-D form, or watch it curve to cause gravity. But the simplified visualisation presented here can help us better understand it.

Manil Suri is at the University of Maryland, Baltimore County. His book, The Big Bang of Numbers: How to Build the Universe Using Only Math, is out now.

Source: This mathematical trick can help you imagine space-time | New Scientist

Global climate tipping points: threats and opportunities are accelerating. Action is needed.

The world has reached a pivotal moment as threats from Earth system tipping points – and progress towards positive tipping points – accelerate, a new report shows

Story highlights

  • Rapid changes to nature and societies already happening, and more coming
  • The report makes six key recommendations to change course fast
  • A cascade of positive tipping points would save millions of lives

Humanity is currently on a disastrous trajectory, according to the Global Tipping Points report, the most comprehensive assessment of tipping points ever conducted.

The report makes six key recommendations to change course fast, including coordinated action to trigger positive tipping points.

Behind the report is an international team of more than 200 scientists, coordinated by the University of Exeter, in partnership with Bezos Earth Fund. Centre researchers David Armstrong McKay, Steven Lade, Laura Pereira, and Johan Rockström have all contributed to the report.

A tipping point occurs when a small change sparks an often rapid and irreversible transformation, and the effects can be positive or negative.

Based on an assessment of 26 negative Earth system tipping points, the report concludes “business as usual” is no longer possible – with rapid changes to nature and societies already happening, and more coming.

With global warming now on course to breach 1.5°C, at least five Earth system tipping points are likely to be triggered – including the collapse of major ice sheets and widespread mortality of warm-water coral reefs.

As Earth system tipping points multiply, there is a risk of catastrophic, global-scale loss of capacity to grow staple crops. Without urgent action to halt the climate and ecological crisis, societies will be overwhelmed as the natural world comes apart.

Impacts of physical tipping points could trigger social tipping such as financial destabilization, disruption of social cohesion, and violent conflict that would further amplify impacts on people.

Centre researcher Steven Lade

Positive tipping points

But there are ways forward. Emergency global action – accelerated by leaders meeting now at COP28 – can harness positive tipping points and steer us towards a thriving, sustainable future.

The report authors lay out a blueprint for doing this, saying bold, coordinated policies could trigger positive tipping points across multiple sectors, including energy, transport, and food.

A cascade of positive tipping points would save millions of lives, billions of people from hardship, trillions of dollars in climate-related damage, and begin restoring the natural world upon which we all depend.

Read “The Global Tipping Points Report” »

Six key recommendations on global tipping points

  • Phase out fossil fuels and land-use emissions now, stopping them well before 2050.
  • Strengthen adaptation and “loss and damage” governance, recognising inequality between and within nations.
  • Include tipping points in the Global Stocktake (the world’s climate “inventory”) and Nationally Determined Contributions (each country’s efforts to tackle climate change).
  • Coordinate policy efforts to trigger positive tipping points.
  • Convene an urgent global summit on tipping points.
  • Deepen knowledge of tipping points. The research team supports calls for an IPCC Special Report on tipping points.

Source: New report: Tipping point threats and opportunities accelerate – Stockholm Resilience Centre

This report was released at COP28 and is being taken extremely seriously by scientists and news people alike – as it should be. Stuff really does need to happen, and it’s encouraging that there may be points we can use to tip the balance in our favour.

NB the official site is down with a 503 error currently, but the OECD has a copy of the report online.

Plants may be absorbing 20% more CO2 than we thought, new models find

[…]

Using realistic ecological modeling, scientists led by Western Sydney University’s Jürgen Knauer found that the globe’s vegetation could actually be taking up about 20% more of the CO2 humans have pumped into the atmosphere than simpler models predicted, and could continue to do so through to the end of the century.

“What we found is that a well-established climate model that is used to feed into global climate assessments by the likes of the IPCC (Intergovernmental Panel on Climate Change) predicts stronger and sustained carbon uptake until the end of the 21st century when extended to account for the impact of some critical physiological processes that govern how plants conduct photosynthesis,” said Knauer.

[…]

Current models, the team adds, are not that complex, so they likely underestimate future CO2 uptake by vegetation.

[…]

Taking the well-established Community Atmosphere-Biosphere Land Exchange model (CABLE), the team accounted for three physiological factors […] and found that the most complex version, which accounted for all three factors, predicted the most CO2 uptake – around 20% more than the simplest formula.

[…]

“Our understanding of key response processes of the carbon cycle, such as plant photosynthesis, has advanced dramatically in recent years,” said Ben Smith, professor and research director of Western Sydney University’s Hawkesbury Institute for the Environment. “It always takes a while for new knowledge to make it into the sophisticated models we rely on to inform climate and emissions policy. Our study demonstrates that fully accounting for the latest science in these models can lead to materially different predictions.”

[…]

And while it’s somewhat good news, the team says plants can’t be expected to do all the heavy lifting; the onus remains on governments to stick to emission reduction obligations. However, the modeling makes a strong case for the value of greening projects and their importance in comprehensive approaches to tackling global warming.

[…]

Source: Plants may be absorbing 20% more CO2 than we thought, new models find

Limits for quantum computers: Perfect clocks are impossible, research finds

[…]

Every clock has two key properties: a certain precision and a certain time resolution. The time resolution indicates how small the time intervals are that can be measured—i.e., how quickly the clock ticks. Precision tells you how much inaccuracy you have to expect with every single tick.

The research team was able to show that since no clock has an infinite amount of energy available (or generates an infinite amount of entropy), it can never have perfect resolution and perfect precision at the same time. This sets fundamental limits to the possibilities of quantum computers.

[…]

Marcus Huber and his team investigated in general which laws must always apply to every conceivable clock. “Time measurement always has to do with entropy,” explains Marcus Huber. In every closed physical system, entropy increases and the system becomes more and more disordered. It is precisely this development that determines the direction of time: the future is where the entropy is higher, the past is where the entropy is lower.

As can be shown, every measurement of time is inevitably associated with an increase in entropy: a clock, for example, needs a battery, the energy of which is ultimately converted into frictional heat and audible ticking via the clock’s mechanics—a process in which the fairly ordered state of the battery is converted into a rather disordered state of heat radiation and sound.

On this basis, the research team was able to derive a relation that basically every conceivable clock must obey. “For a given increase in entropy, there is a tradeoff between resolution and precision,” says Florian Meier, first author of the second paper, now posted to the arXiv preprint server. “That means: either the clock works quickly or it works precisely—both are not possible at the same time.”
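
The precise bound is technical, but its shape can be sketched schematically – the following is an illustration of the idea, not the authors’ actual formula. If a clock ticks $\nu$ times per second, stays reliable for roughly $N$ ticks, and produces entropy at rate $\dot{S}$, then something of the form

$$ N \cdot \nu \lesssim \dot{S} / k_B $$

holds: at a fixed entropy budget, a faster clock (higher resolution) is necessarily a less reliable one (lower precision), and vice versa.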

[…]

“Currently, the accuracy of quantum computers is still limited by other factors, for example, the precision of the components used or electromagnetic fields. But our calculations also show that today we are not far from the regime in which the fundamental limits of time measurement play the decisive role.”

[…]

More information: Florian Meier et al, Fundamental accuracy-resolution trade-off for timekeeping devices, arXiv (2023). DOI: 10.48550/arxiv.2301.05173

Source: Limits for quantum computers: Perfect clocks are impossible, research finds

Toxic air killed more than 500,000 people in EU in 2021, data shows

Dirty air killed more than half a million people in the EU in 2021, estimates show, and about half of the deaths could have been avoided by cutting pollution to the limits recommended by doctors.

The researchers from the European Environment Agency attributed 253,000 early deaths to concentrations of fine particulates known as PM2.5 that breached the World Health Organization’s maximum guideline limits of 5µg/m3. A further 52,000 deaths came from excessive levels of nitrogen dioxide and 22,000 deaths from short-term exposure to excessive levels of ozone.

“The figures released today by the EEA remind us that air pollution is still the number one environmental health problem in the EU,” said Virginijus Sinkevičius, the EU’s environment commissioner.

Doctors say air pollution is one of the biggest killers in the world but death tolls will drop quickly if countries clean up their economies. Between 2005 and 2021, the number of deaths from PM2.5 in the EU fell by 41%, and the EU aims to reach a 55% reduction by the end of the decade.

[…]

Source: Toxic air killed more than 500,000 people in EU in 2021, data shows | Air pollution | The Guardian

Researchers printed a robotic hand with bones, ligaments and tendons for the first time

Researchers at the Zurich-based ETH public university, along with a US-based startup called Inkbit, have done the impossible. They’ve printed a robot hand complete with bones, ligaments and tendons for the very first time, representing a major leap forward in 3D printing technology. It’s worth noting that the various parts of the hand were printed simultaneously, and not cobbled together after the fact, as described in a research paper published in Nature.

Each of the robotic hand’s various parts were made from different polymers of varying softness and rigidity, using a new laser-scanning technique that lets 3D printers create “special plastics with elastic qualities” all in one go. This obviously opens up new possibilities in the fast-moving field of prosthetics, but also in any field that requires the production of soft robotic structures.

Basically, the researchers at Inkbit developed a method to 3D print slow-curing plastics, whereas the technology was previously reserved for fast-curing plastics. This hybrid printing method presents all kinds of advantages when compared to standard fast-cure projects, such as increased durability and enhanced elastic properties. The tech also allows us to mimic nature more accurately, as seen in the aforementioned robotic hand.

“Robots made of soft materials, such as the hand we developed, have advantages over conventional robots made of metal. Because they’re soft, there is less risk of injury when they work with humans, and they are better suited to handling fragile goods,” ETH Zurich robotics professor Robert Katzschmann writes in the study.

A robot dog or a pulley or something.
ETH Zurich/Thomas Buchner

This advancement still prints layer by layer, but an integrated scanner constantly checks the surface for irregularities before telling the system to move on to the next material type. Additionally, the extruder and scraper have been updated to allow for the use of slow-curing polymers. The stiffness can be fine-tuned to create unique objects that suit various industries. Making human-like appendages is one use case, but so is manufacturing objects that soak up noise and vibrations.
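
That scan-and-correct loop is the crux: rather than mechanically planing every layer flat, the printer measures where a layer sits too low and jets extra material only there. Here is a minimal Python sketch of such a loop – deposit and scan are illustrative stand-ins, not Inkbit’s actual interface:

import numpy as np

rng = np.random.default_rng(0)
part = np.zeros((64, 64))   # simulated cumulative height map of the part (mm)

def scan():
    """Stand-in for the integrated laser scanner: returns measured heights."""
    return part + rng.normal(0.0, 0.002, part.shape)   # small sensor noise

def deposit(amount):
    """Stand-in for the inkjet head: adds roughly the requested material."""
    global part
    part += amount * rng.uniform(0.9, 1.0, part.shape)  # imperfect deposition

def print_layer(target, tol=0.02, max_passes=5):
    """Scan-then-correct: jet material only where the part is too low,
    re-scan, and repeat until the layer is within tolerance."""
    for _ in range(max_passes):
        residual = target - scan()       # positive where material is missing
        if residual.max() < tol:
            return True                  # layer done; move to the next material
        deposit(np.clip(residual, 0.0, None))
    return False

layer_height = 0.05                      # mm per layer (illustrative value)
for k in range(1, 11):                   # build the part up, ten layers
    assert print_layer(k * layer_height)

The real machine does this per voxel and across several materials at once; the sketch only captures the feedback structure that makes slow-curing polymers printable.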

MIT-affiliated startup Inkbit helped develop this technology and has already begun thinking about how to make money off of it. The company will soon start to sell these newly developed printers to manufacturers, and will also sell complex 3D-printed objects made with the technology to smaller entities.

Source: Researchers printed a robotic hand with bones, ligaments and tendons for the first time

Researchers use magnetic fields for non-invasive blood glucose monitoring

Synex Medical, a Toronto-based biotech research firm backed by Sam Altman (the CEO of OpenAI), has developed a tool that can measure your blood glucose levels without a finger prick. It uses a combination of low-field magnets and low-frequency radio waves to directly measure blood sugar levels non-invasively when a user inserts a finger into the device.

The tool uses magnetic resonance spectroscopy (MRS), which is similar to an MRI. Jamie Near, an Associate Professor at the University of Toronto who specializes in research on MRS technology, told Engadget that “[an] MRI uses magnetic fields to make images of the distribution of hydrogen protons in water that is abundant in our body tissues. In MRS, the same basic principles are used to detect other chemicals that contain hydrogen.” When a user’s fingertip is placed inside the magnetic field, the frequency of a specific molecule, in this case glucose, is measured in parts per million. While the focus was on glucose for this project, MRS could be used to measure other metabolites, according to Synex, including lactate, ketones and amino acids.
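
The “parts per million” figure refers to the chemical shift: a molecule’s resonance frequency is reported as a fractional offset from a reference frequency, which makes the value independent of magnet strength. A back-of-envelope illustration in Python, using textbook values rather than anything from Synex’s white paper:

def chemical_shift_ppm(f_signal_hz, f_reference_hz):
    """Chemical shift: fractional frequency offset from the reference, in ppm."""
    return (f_signal_hz - f_reference_hz) / f_reference_hz * 1e6

# Protons resonate at about 42.58 MHz per tesla, so in a 0.1 T low-field magnet:
f0 = 42.58e6 * 0.1               # ~4.26 MHz reference frequency
# Water protons sit near 4.7 ppm and glucose CH protons near 3.8 ppm,
# so the two signals are separated by under one ppm of f0:
delta_hz = (4.7 - 3.8) * 1e-6 * f0
print(f"water-glucose separation at 0.1 T: {delta_hz:.1f} Hz")  # ~3.8 Hz
print(f"{chemical_shift_ppm(f0 + delta_hz, f0):.2f} ppm")       # recovers 0.90

Resolving lines a few hertz apart in a finger-sized, low-field magnet is what makes the engineering claim here notable.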

[…]

“MRI machines can fit an entire human body and have been used to target molecule concentrations in the brain through localized spectroscopy,” he explained. “Synex has shrunk this technology to measure concentrations in a finger. I have reviewed their white paper and seen the instrument work.” Simpson said Synex’s ability to retrofit MRS technology into a small box is an engineering feat.

[…]

But there is competition in the space for no-prick diagnostic tools. Know Labs is trying to get approval for a portable glucose monitor that relies on a custom-made Bio-RFID sensing technology, which uses radio waves to detect blood glucose levels in the palm of your hand. When the Know Labs device was tested against a Dexcom G6 continuous glucose monitor in a study, readings of blood glucose levels using its palm sensor technology were “within threshold” only 46 percent of the time. While the readings are technically in accordance with FDA accuracy limits for a new blood glucose monitor, Know Labs is still working out kinks through scientific research before it can begin FDA clinical trials.

Another start-up, German company DiaMonTech, is currently developing a pocket-sized diagnostic device that is still being tested and fine-tuned to measure glucose through “photothermal detection.” It uses mid-infrared lasers that essentially scan the tissue fluid at the fingertip to detect glucose molecules. CNBC and Bloomberg reported that even Apple has been “quietly developing” a sensor that can check your blood sugar levels through its wearables, though the company has never confirmed this. A scientific director at Synex, Mohana Ray, told Engadget that the company would eventually like to develop a wearable, but that further miniaturization is needed before it can bring a commercial product to market.

[…]

Source: Researchers use magnetic fields for non-invasive blood glucose monitoring

Three thousand years’ worth of carbon monoxide records show positive impact of global intervention in the 1980s

An international team of scientists has reconstructed a historic record of the atmospheric trace gas carbon monoxide by measuring air in polar ice and air collected at an Antarctic research station.


The team, led by the French National Centre for Scientific Research (CNRS) and Australia’s national science agency, CSIRO, assembled the first complete record of carbon monoxide concentrations in the southern hemisphere, based on measurements of air preserved in polar ice and collected at an Antarctic research station.

The findings are published in the journal Climate of the Past.

The record spans the last three millennia. CSIRO atmospheric scientist Dr. David Etheridge said that the record provides a rare positive story in the context of climate change.

“Atmospheric carbon monoxide started climbing from its natural background level around the time of the industrial revolution, accelerating in the mid-1900s and peaking in the early-mid 1980s,” Dr. Etheridge said.

“The good news is that levels of the trace gas are now stable or even trending down and have been since the late 1980s—coinciding with the introduction of catalytic converters in cars.”

Carbon monoxide is a reactive gas that has important indirect effects on climate. It reacts with hydroxyl (OH) radicals in the atmosphere, reducing their abundance. Hydroxyl acts as a natural “detergent” for the removal of other gases contributing to climate change, including methane. Carbon monoxide also influences the levels of ozone in the lower atmosphere. Ozone is a greenhouse gas.
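
Concretely, the chemistry behind that indirect effect is standard atmospheric fare:

CO + OH· → CO₂ + H·        (carbon monoxide consumes the hydroxyl “detergent”)
CH₄ + OH· → CH₃· + H₂O     (the same radical is the main sink for methane)

Every CO molecule that reacts with OH leaves one fewer radical available to scrub methane, which is how a gas that is not itself a significant greenhouse gas ends up with an indirect warming effect.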

The authors have high confidence that a major cause of the late-1980s decline was improved combustion technologies, including the introduction of catalytic converters, an exhaust-system device used in vehicles.

“The stabilization of carbon monoxide concentrations since the 1980s is a fantastic example of the role that science and technology can play in helping us understand a problem and help address it,” Dr. Etheridge said.

[…]

“Because carbon monoxide is a reactive gas, it is difficult to measure long-term trends because it is unstable in many air sample containers. Cold, clean polar ice, however, preserves carbon monoxide concentrations for millennia,” Dr. Etheridge said.

The CO data will be used to improve Earth system models. This will primarily enable scientists to understand the effects that future emissions of CO and other gases (such as hydrogen) will have on pollution levels and climate as the global energy mix changes in the decades ahead.

More information: Xavier Faïn et al, Southern Hemisphere atmospheric history of carbon monoxide over the late Holocene reconstructed from multiple Antarctic ice archives, Climate of the Past (2023). DOI: 10.5194/cp-19-2287-2023

Source: Three thousand years’ worth of carbon monoxide records show positive impact of global intervention in the 1980s

In a surprising finding, light can make water evaporate without heat

[…]

In recent years, some researchers have been puzzled upon finding that water in their experiments, which was held in a sponge-like material known as a hydrogel, was evaporating at a higher rate than could be explained by the amount of heat, or thermal energy, that the water was receiving. And the excess has been significant — a doubling, or even a tripling or more, of the theoretical maximum rate.

After carrying out a series of new experiments and simulations, and reexamining some of the results from various groups that claimed to have exceeded the thermal limit, a team of researchers at MIT has reached a startling conclusion: Under certain conditions, at the interface where water meets air, light can directly bring about evaporation without the need for heat, and it actually does so even more efficiently than heat. In these experiments, the water was held in a hydrogel material, but the researchers suggest that the phenomenon may occur under other conditions as well.

The findings are published this week in a paper in PNAS, by MIT postdoc Yaodong Tu, professor of mechanical engineering Gang Chen, and four others.

[…]

The new findings come as a surprise because water itself does not absorb light to any significant degree. That’s why you can see clearly through many feet of clean water to the surface below. So, when the team initially began exploring the process of solar evaporation for desalination, they first put particles of a black, light-absorbing material in a container of water to help convert the sunlight to heat.

Then, the team came across the work of another group that had achieved an evaporation rate double the thermal limit — which is the highest possible amount of evaporation that can take place for a given input of heat, based on basic physical principles such as the conservation of energy. It was in these experiments that the water was bound up in a hydrogel. Although they were initially skeptical, Chen and Tu started their own experiments with hydrogels, including a piece of the material from the other group. “We tested it under our solar simulator, and it worked,” confirming the unusually high evaporation rate, Chen says. “So, we believed them now.” Chen and Tu then began making and testing their own hydrogels.
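
The thermal limit itself is a one-line calculation: absorbed power divided by water’s latent heat of vaporization. A quick sketch in Python with standard constants (the “one sun” flux is the conventional benchmark, not a figure from the paper):

# Thermal limit on evaporation: absorbed power / latent heat of vaporization
SOLAR_FLUX = 1000.0     # W/m^2, the standard "one sun" irradiance
LATENT_HEAT = 2.45e6    # J/kg for water near room temperature

rate = SOLAR_FLUX / LATENT_HEAT                # kg of water per m^2 per second
print(f"thermal limit: {rate * 3600:.2f} kg/m^2 per hour")  # ~1.47
# The hydrogel experiments report two to three times this figure;
# that excess is what the proposed photomolecular effect has to explain.

Under full sunlight, conservation of energy caps purely thermal evaporation at roughly 1.5 kilograms of water per square metre per hour; anything above that needs another mechanism.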

[…]

The researchers subjected the water surface to different colors of light in sequence and measured the evaporation rate. They did this by placing a container of water-laden hydrogel on a scale and directly measuring the amount of mass lost to evaporation, as well as monitoring the temperature above the hydrogel surface. The lights were shielded to prevent them from introducing extra heat. The researchers found that the effect varied with color and peaked at a particular wavelength of green light. Such a color dependence has no relation to heat, and so supports the idea that it is the light itself that is causing at least some of the evaporation.


The puffs of white condensation on glass are water evaporating from a hydrogel under green light, without heat.

Image: Courtesy of the researchers


The researchers tried to duplicate the observed evaporation rate with the same setup but using electricity to heat the material, and no light. Even though the thermal input was the same as in the other test, the amount of water that evaporated never exceeded the thermal limit. It did exceed the limit when the simulated sunlight was on, however, confirming that light itself was the cause of the extra evaporation.

Though water itself does not absorb much light, and neither does the hydrogel material itself, when the two combine they become strong absorbers, Chen says. That allows the material to harness the energy of the solar photons efficiently and exceed the thermal limit, without the need for any dark dyes for absorption.

Having discovered this effect, which they have dubbed the photomolecular effect, the researchers are now working on how to apply it to real-world needs.

[…]


Source: In a surprising finding, light can make water evaporate without heat | MIT News | Massachusetts Institute of Technology

Library of Babel Online – all books ever written or ever to be written, all images ever created or ever to be created can be found here

The Library of Babel is a place for scholars to do research, for artists and writers to seek inspiration, for anyone with curiosity or a sense of humor to reflect on the weirdness of existence – in short, it’s just like any other library. If completed, it would contain every possible combination of 1,312,000 characters, including lower case letters, space, comma, and period. Thus, it would contain every book that ever has been written, and every book that ever could be – including every play, every song, every scientific paper, every legal decision, every constitution, every piece of scripture, and so on. At present it contains all possible pages of 3200 characters, about 10^4677 books.
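
Those numbers are easy to sanity-check: the library’s alphabet has 29 symbols (the 26 lowercase letters plus space, comma and period), each page holds 3,200 of them, and 1,312,000 characters per book works out to 410 pages. In Python:

import math

ALPHABET = 29                             # 26 lowercase letters + space + comma + period
PAGE_CHARS = 3200
PAGES_PER_BOOK = 1_312_000 // PAGE_CHARS  # = 410

log10_pages = PAGE_CHARS * math.log10(ALPHABET)
print(f"distinct pages: about 10^{log10_pages:.0f}")    # 10^4680
print(f"in {PAGES_PER_BOOK}-page volumes: about "
      f"10^{log10_pages - math.log10(PAGES_PER_BOOK):.0f} books")  # 10^4677

Binding every possible 3,200-character page once into 410-page volumes gives about 10^4677 books, matching the site’s own figure.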

Since I imagine the question will present itself in some visitors’ minds (a certain amount of distrust of the virtual is inevitable) I’ll head off any doubts: any text you find in any location of the library will be in the same place in perpetuity. We do not simply generate and store books as they are requested – in fact, the storage demands would make that impossible. Every possible permutation of letters is accessible at this very moment in one of the library’s books, only awaiting its discovery. We encourage those who find strange concatenations among the variations of letters to write about their discoveries in the forum, so future generations may benefit from their research.

Source: About the Library

‘Super Melanin’ Speeds Healing, Stops Sunburn, and More

A team of scientists at Northwestern University has developed a synthetic version of melanin that could have a million and one uses. In new research, they showed that their melanin can prevent blistering and accelerate the healing process in tissue samples of freshly injured human skin. The team now plans to further develop their “super melanin” as both a medical treatment for certain skin injuries and as a potential sunscreen and anti-aging skincare product.

[…] Most people might recognize melanin as the main driver of our skin color, or as the reason why some people will tan when exposed to the sun’s harmful UV rays. But it’s a substance with many different functions across the animal kingdom. It’s the primary ingredient in the ink produced by squids; it’s used by certain microbes to evade a host’s immune system; and it helps create the iridescence of some butterflies. A version of melanin produced by our brain cells might even protect us from neurodegenerative conditions like Parkinson’s.

[…]

Their latest work was published Thursday in the Nature journal npj Regenerative Medicine. In the study, they tested the melanin on both mice and donated human skin tissue samples that had been exposed to potentially harmful things (the skin samples were exposed to toxic chemicals, while the mice were exposed to chemicals and UV radiation). In both scenarios, the melanin reduced or even entirely prevented the damage to the top and underlying layers of skin that would have been expected. It seemed to do this mainly by vacuuming up the damaging free radicals generated in the skin by these exposures, which in turn reduced inflammation and generally sped up the healing process.

The team’s creation very closely resembles natural melanin, to the extent that it seems to be just as biodegradable and nontoxic to the skin as the latter (in experiments so far, it doesn’t appear to be absorbed into the body when applied topically, further reducing any potential safety risks). But the ability to apply as much of their melanin as needed means that it could help repair skin damage that might otherwise overwhelm our body’s natural supply. And their version has been tweaked to be more effective at its job than usual.

[…]

It could have military applications—one line of research is testing whether the melanin can be used as a protective dye in clothing that would absorb nerve gas and other environmental toxins.

[…]

On the clinical side, they’re planning to develop the synthetic melanin as a treatment for radiation burns and other skin injuries. And on the cosmetic side, they’d like to develop it as an ingredient for sunscreens and anti-aging skincare products.

[…]

all of those important mechanisms we’re seeing [from the clinical research] are the same things that you look for in an ideal profile of an anti-aging cream, if you will, or a cream that tries to repair the skin.”

[…]

Source: ‘Super Melanin’ Speeds Healing, Stops Sunburn, and More

Scientists create world’s most water-resistant surface

[…]

A research team in Finland, led by Robin Ras, from Aalto University, and aided by researchers from the University of Jyväskylä, has developed a mechanism to make water droplets slip off surfaces with unprecedented efficacy.

Cooking, transportation, optics and hundreds of other technologies are affected by how water sticks to surfaces or slides off them, and adoption of water-resistant surfaces in the future could improve many household and industrial technologies, such as plumbing, shipping and the auto industry.

The research team created solid silicon surfaces with a “liquid-like” outer layer that repels water by making droplets slide off surfaces. The highly mobile topcoat acts as a lubricant between the product and the water droplets.

The discovery challenges existing ideas about friction between solid surfaces and water, opening a new avenue for studying slipperiness at the molecular level.

Sakari Lepikko, the lead author of the study, which was published in Nature Chemistry on Monday, said: “Our work is the first time that anyone has gone directly to the nanometer-level to create molecularly heterogeneous surfaces.”

By carefully adjusting conditions, such as temperature and water content, inside a reactor, the team could fine-tune how much of the silicon surface the monolayer covered.

Ras said: “I find it very exciting that by integrating the reactor with an ellipsometer, we can watch the self-assembled monolayers grow with an extraordinary level of detail.

“The results showed more slipperiness when SAM [self-assembled monolayer] coverage was low or high, which are also the situations when the surface is most homogeneous. At low coverage, the silicon surface is the most prevalent component, and at high, SAMs are the most prevalent.”

Lepikko added: “It was counterintuitive that even low coverage yielded exceptional slipperiness.”

Using the new method, the team ended up creating the slipperiest liquid surface in the world.

According to Lepikko, the discovery promises to have implications wherever droplet-repellent surfaces are needed. This covers hundreds of examples from daily life to industrial environments.

[…]

“The main issue with a SAM coating is that it’s very thin, and so it disperses easily after physical contact. But studying them gives us fundamental scientific knowledge which we can use to create durable practical applications,” Lepikko said.

[…]

Source: Scientists create world’s most water-resistant surface | Materials science | The Guardian

Spacecraft re-entry filling the atmosphere with metal vapor – and there will be more of it coming in

A group of scientists studying the effects of rocket and satellite reentry vaporization in Earth’s atmosphere have found some startling evidence that could point to disastrous environmental effects on the horizon.

The study, published in the Proceedings of the National Academy of Sciences, found that around 10 percent of large (>120 nm) sulfuric acid particles in the stratosphere contain aluminum and other elements consistent with the makeup of alloys used in spacecraft construction, including lithium, copper and lead. The other 90 percent comes from “meteoric smoke” – the particles left over when meteors vaporize during atmospheric entry – and that naturally occurring share is expected to shrink sharply as reentries multiply.

“The space industry has entered an era of rapid growth,” the boffins said in their paper, “with tens of thousands of small satellites planned for low earth orbit.

“It is likely that in the next few decades, the percentage of stratospheric sulfuric acid particles that contain aluminum and other metals from satellite reentry will be comparable to the roughly 50 percent that now contain meteoric metals,” the team concluded.

Atmospheric circulation at those altitudes (beginning somewhere between four and 12 miles above ground level and extending up to 31 miles above Earth) means such particles are unlikely to have an effect on the surface environment or human health, the researchers opined.

Stratospheric changes might be even scarier, though

Earth’s stratosphere has classically been considered pristine, said Dan Cziczo, one of the study’s authors and head of Purdue University’s department of Earth, atmospheric and planetary studies. “If something is changing in the stratosphere – this stable region of the atmosphere – that deserves a closer look.”

One of the major features of the stratosphere is the ozone layer, which protects Earth and its varied inhabitants from harmful UV radiation. It has been harmed by human activity before, until global action was taken, and an increase in aerosolized spacecraft particles could have several consequences for our planet.

One possibility is effects on the nucleation of ice and nitric acid trihydrate, which form in stratospheric clouds over Earth’s polar regions where currents in the mesosphere (the layer above the stratosphere) tend to deposit both meteoric and spacecraft aerosols.

Ice formed in the stratosphere doesn’t necessarily reach the ground, and is more likely to have effects on polar stratospheric clouds, lead author and National Oceanic and Atmospheric Administration scientist Daniel Murphy told The Register.

“Polar stratospheric clouds are involved in the chemistry of the ozone hole,” Murphy said. However, “it is too early to know if there is any impact on ozone chemistry,” he added.

Along with changes in atmospheric ice formation and the ozone layer, the team said that more aerosols from vaporized spacecraft could change the stratospheric aerosol layer, something that scientists have proposed seeding in order to block more UV rays to fight the effects of global warming.

The amount of material being injected by spacecraft reentry is much smaller than the amounts scientists have considered for intentional injection, Murphy told us. However, “intentional injection of exotic materials into the stratosphere could raise many of the same questions [as the paper] on an even bigger scale,” he noted.

[…]

Source: Spacecraft re-entry filling the atmosphere with metal vapor • The Register