Researchers in Japan have developed a plastic that dissolves in seawater within hours, offering up a potential solution for a modern-day scourge polluting oceans and harming wildlife.
While scientists have long experimented with biodegradable plastics, researchers from the RIKEN Center for Emergent Matter Science and the University of Tokyo say their new material breaks down much more quickly and leaves no residual trace.
[…]
Aida said the new material is as strong as petroleum-based plastics but breaks down into its original components when exposed to salt. Those components can then be further processed by naturally occurring bacteria, thereby avoiding generating microplastics that can harm aquatic life and enter the food chain.
As salt is also present in soil, a piece about five centimetres (two inches) across disintegrates on land after more than 200 hours, he added.
The material can be used like regular plastic when coated, and the team are focusing their current research on the best coating methods, Aida said. The plastic is non-toxic, non-flammable, and does not emit carbon dioxide, he added.
Researchers have identified a type of chemical compound that, when applied to insecticide-treated bed nets, appears to kill the malaria-causing parasite in mosquitoes.
Published in the journal Nature, the multi-site collaborative study represents a breakthrough for a disease that continues to claim more than half a million lives worldwide every year.
[…]
ELQ drugs refer to a class of experimental antimalarial drugs known as endochin-like quinolones.
“It was a very clever and novel idea by Dr. Catteruccia and her colleagues to incorporate anti-malarial drugs into bed nets and then to see if the mosquitoes would land on the nets and take up the drug,” Riscoe said. “The idea is the drug kills the parasites that cause malaria instead of the mosquitoes, and our data shows this works.”
Riscoe said further research is necessary to determine whether the best strategy in the field is to incorporate the antimalarial ELQs together with insecticides in the fibers that are woven into bed nets or simply to use them alone to blunt disease transmission.
[…]
“Insecticide resistance is now extremely common in the mosquitoes that transmit malaria, which jeopardizes many of our most effective control tools,” said Alexandra Probst, M.Pharm, lead author of the study and a Ph.D. candidate in Catteruccia’s lab at Harvard.
“By targeting malaria-causing parasites directly in the mosquito, rather than the mosquito itself, we can circumvent this challenge and continue to reduce the spread of malaria.”
[…]
More information: Alexandra S. Probst et al, In vivo screen of Plasmodium targets for mosquito-based malaria control, Nature (2025). DOI: 10.1038/s41586-025-09039-2
An anonymous reader shared this report from The Hill: A new study indicates the debilitating “brain fog” suffered by millions of long COVID patients is linked to changes in the brain, including inflammation and an impaired ability to rewire itself following COVID-19 infection. United Press International reported this week that the small-scale study, conducted by researchers at Corewell Health in Grand Rapids, Michigan, and Michigan State University, shows that altered levels of a pair of key brain chemicals could be the culprit.
The study marks the first time doctors have been able to provide scientific proof that validates the experiences of the approximately 12 million COVID “long-haulers” in the U.S. who have reported neurological symptoms. Researchers looked at biomarkers in study participants and found that those complaining of brain fog had higher levels of an anti-inflammatory protein that is crucial to regulating a person’s immune system, UPI reported. They also showed lower serum levels of nerve growth factor, a protein vital to the brain’s plasticity…
One of the biggest issues involving long COVID has been doctors’ inability to find physical proof of the symptoms described by patients. The study has changed that, according to co-author Dr. Bengt Arnetz.
One of the ultimate goals of medieval alchemy has been realized, but only for a fraction of a second. Scientists with the European Organization for Nuclear Research, better known as CERN, were able to convert lead into gold using the Large Hadron Collider (LHC), the world’s most powerful particle accelerator. Unlike the examples of transmutation we see in pop culture, these LHC experiments involve smashing subatomic particles together at ridiculously high speeds to alter lead’s nuclei so that they briefly become gold.
The LHC is often used to smash lead ions together to create extremely hot and dense matter similar to what existed in the universe just after the Big Bang. While conducting this analysis, the CERN scientists took note of near-misses that caused a lead nucleus to shed neutrons or protons. Lead nuclei have only three more protons than gold nuclei (82 versus 79), meaning that in certain cases a lead nucleus sheds just enough protons to become a gold nucleus for a fraction of a second — before immediately fragmenting into a bunch of particles.
Alchemists back in the day may be astonished by this achievement, but the experiments conducted between 2015 and 2018 only produced about 29 picograms of gold, according to CERN. The organization added that the latest trials produced almost double that amount thanks to regular upgrades to the LHC, but the mass made is still trillions of times less than what’s necessary for a piece of jewelry. Instead of trying to chase riches, the organization’s scientists are more interested in studying the interaction that leads to this transmutation.
Scientists from The University of Manchester have changed our understanding of how cells in living organisms divide, which could revise what students are taught at school.
In a Wellcome funded study published today (01/05/25) in Science – one of the world’s leading scientific journals – the researchers challenge conventional wisdom taught in schools for over 100 years.
Students are currently taught that during cell division, a ‘parent’ cell will become spherical before splitting into two ‘daughter’ cells of equal size and shape.
However, the study reveals that cell rounding is not a universal feature of cell division and is often not how cells divide in the body.
Dividing cells, they show, often don’t round up into sphere-like shapes. This lack of rounding breaks the symmetry of division to generate two daughter cells that differ from each other in both size and function, known as asymmetric division.
Asymmetric divisions are an important way that the different types of cells in the body are generated, to make different tissues and organs.
Until now, asymmetric cell division has been associated predominantly with highly specialised cells known as stem cells.
The scientists found that it is the shape of a parent cell before it even divides that determines whether it rounds up during division and how symmetric, or not, its daughter cells will be.
Cells which are shorter and wider in shape tend to round up and divide into two cells which are similar to each other. However, cells which are longer and thinner don’t round up and divide asymmetrically, so that one daughter is different to the other.
The findings could have far-reaching implications for our understanding of the role of cell division in disease. For example, in the context of cancer cells, this type of ‘non-round’, asymmetric division could generate different cell behaviours known to promote cancer progression through metastasis.
Marking a breakthrough in the field of brain-computer interfaces (BCIs), a team of researchers from UC Berkeley and UC San Francisco has unlocked a way to restore naturalistic speech for people with severe paralysis.
This work solves the long-standing challenge of latency in speech neuroprostheses, the time lag between when a subject attempts to speak and when sound is produced. Using recent advances in artificial intelligence-based modeling, the researchers developed a streaming method that synthesizes brain signals into audible speech in near-real time.
As reported today in Nature Neuroscience, this technology represents a critical step toward enabling communication for people who have lost the ability to speak. […]
we found that we could decode neural data and, for the first time, enable near-synchronous voice streaming. The result is more naturalistic, fluent speech synthesis.
[…]
The researchers also showed that their approach can work well with a variety of other brain sensing interfaces, including microelectrode arrays (MEAs) in which electrodes penetrate the brain’s surface, or non-invasive recordings (sEMG) that use sensors on the face to measure muscle activity.
“By demonstrating accurate brain-to-voice synthesis on other silent-speech datasets, we showed that this technique is not limited to one specific type of device,” said Kaylo Littlejohn, Ph.D. student at UC Berkeley’s Department of Electrical Engineering and Computer Sciences and co-lead author of the study. “The same algorithm can be used across different modalities provided a good signal is there.”
[…]
the neuroprosthesis works by sampling neural data from the motor cortex, the part of the brain that controls speech production, then uses AI to decode brain function into speech.
“We are essentially intercepting signals where the thought is translated into articulation and in the middle of that motor control,” he said. “So what we’re decoding is after a thought has happened, after we’ve decided what to say, after we’ve decided what words to use and how to move our vocal-tract muscles.”
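To make the streaming idea concrete, here is a minimal illustrative sketch of the loop structure only, not the authors' actual pipeline: it assumes a hypothetical `decode_chunk` model that maps a short window of motor-cortex features to a snippet of audio, and it emits sound as each window arrives rather than waiting for the whole attempted sentence.

```python
import numpy as np

# Hypothetical decoder: maps one short window of neural features to audio samples.
# In the real system this would be a trained AI model; here it is just a stub.
def decode_chunk(neural_window: np.ndarray) -> np.ndarray:
    return np.zeros(int(0.08 * 16000))  # placeholder: 80 ms of silence at 16 kHz

def stream_speech(neural_frames, window_size=40):
    """Yield audio as soon as each window of neural data is available (near-real time),
    instead of decoding only after the whole attempted sentence has been recorded."""
    buffer = []
    for frame in neural_frames:          # frames arrive continuously from the implant
        buffer.append(frame)
        if len(buffer) == window_size:   # enough data for one decoding step
            yield decode_chunk(np.stack(buffer))
            buffer.clear()               # move on to the next window
```

The latency gain in this kind of design comes from the loop structure alone: each chunk is synthesized while the next one is still being recorded.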
A new study found that a gene recently recognized as a biomarker for Alzheimer’s disease is actually a cause of it, due to its previously unknown secondary function. Researchers at the University of California San Diego used artificial intelligence to help both unravel this mystery of Alzheimer’s disease and discover a potential treatment that obstructs the gene’s moonlighting role.
[…]
Zhong and his team took a closer look at phosphoglycerate dehydrogenase (PHGDH), which they had previously discovered as a potential blood biomarker for early detection of Alzheimer’s disease. In a follow-up study, they later found that expression levels of the PHGDH gene directly correlated with changes in the brain in Alzheimer’s disease; in other words, the higher the levels of protein and RNA produced by the PHGDH gene, the more advanced the disease.
[…]
Using mice and human brain organoids, the researchers found that altering the amounts of PHGDH expression had consequential effects on Alzheimer’s disease: lower levels corresponded to less disease progression, whereas increasing the levels led to more disease advancement. Thus, the researchers established that PHGDH is indeed a causal gene for spontaneous Alzheimer’s disease.
In further support of that finding, the researchers determined—with the help of AI—that PHGDH plays a previously undiscovered role: it triggers a pathway that disrupts how cells in the brain turn genes on and off. And such a disturbance can cause issues, like the development of Alzheimer’s disease.
[…]
another Alzheimer’s project in his lab, which did not focus on PHGDH, changed all this. A year ago, that project revealed a hallmark of Alzheimer’s disease: a widespread imbalance in the brain in the process where cells control which genes are turned on and off to carry out their specific roles.
The researchers were curious if PHGDH had an unknown regulatory role in that process, and they turned to modern AI for help.
With AI, they could visualize the three-dimensional structure of the PHGDH protein. Within that structure, they discovered that the protein has a substructure that is very similar to a known DNA-binding domain in a class of transcription factors. The similarity is solely in the structure and not in the protein sequence.
Zhong said, “It really demanded modern AI to formulate the three-dimensional structure very precisely to make this discovery.”
After discovering the substructure, the team then demonstrated that with it, the protein can activate two critical target genes. That throws off the delicate balance, leading to several problems and eventually the early stages of Alzheimer’s disease. In other words, PHGDH has a previously unknown role, independent of its enzymatic function, that through a novel pathway leads to spontaneous Alzheimer’s disease.
That ties back to the team’s earlier studies: the PHGDH gene produced more proteins in the brains of Alzheimer’s patients compared to the control brains, and those increased amounts of the protein in the brain triggered the imbalance. While everyone has the PHGDH gene, the difference comes down to the expression level of the gene, or how many proteins are made by it.
[…]
Given that PHGDH is such an important enzyme, there are past studies on its possible inhibitors. One small molecule, known as NCT-503, stood out to the researchers because it is not quite effective at impeding PHGDH’s enzymatic activity (the production of serine), which they did not want to change. NCT-503 is also able to penetrate the blood-brain barrier, which is a desirable characteristic.
They turned to AI again for three-dimensional visualization and modeling. They found that NCT-503 can access that DNA-binding substructure of PHGDH, thanks to a binding pocket. With more testing, they saw that NCT-503 does indeed inhibit PHGDH’s regulatory role.
When the researchers tested NCT-503 in two mouse models of Alzheimer’s disease, they saw that it significantly alleviated Alzheimer’s progression. The treated mice demonstrated substantial improvement in their memory and anxiety tests. These tests were chosen because Alzheimer’s patients suffer from cognitive decline and increased anxiety.
The researchers do acknowledge limitations of their study. One is that there is no perfect animal model for spontaneous Alzheimer’s disease. They could test NCT-503 only in the mouse models that are available, which carry mutations in known disease-causing genes.
Still, the results are promising, according to Zhong.
A report by the Pesticides Action Network (PAN Europe) and other NGOs that uncovered high concentrations of a forever chemical in wines from across the EU – including organic – is sparking debate about the causes of contamination and restrictions on the substance.
The report found some wines had trifluoroacetic acid (TFA) levels 100 times higher than the strictest threshold for drinking water in Europe.
TFA is part of the PFAS (per- and polyfluoroalkyl) family of substances used in many products, including pesticides, for their water-repellent properties. Extremely persistent in the environment, they are a known threat to human health.
“This is a wake-up call,” said Helmut Burtscher-Schaden, an environmental chemist at Global 2000, one of the NGOs behind the research. “TFA is a permanent chemical and will not go away.”
The NGOs analysed 49 wines. Comparing modern wines with older vintages, the findings suggested no detectable residues in pre-1988 wines but a sharp increase since 2010.
“For no other agricultural product are the harvests from past decades so readily available and well-preserved,” the study said.
PAN sees a correlation between rising levels of TFA in wine and the growing use of PFAS-based pesticides.
Under the spotlight
Though nearly a quarter of Austria’s vineyards are cultivated organically, Austrian bottles are over-represented in the list of contaminated wines (18 out of 49), as the NGOs started testing in that country before expanding the reach of the research.
[… Winemakers complain about the study, who would have thought…]
In response, officials of the European executive passed the buck to member states, noting that member states had resisted the Commission’s proposal to stop renewing approvals for certain PFAS pesticides. An eventual agreement was reached on just two substances.
More could be done to limit PFAS chemicals at the national level under the current EU legislation, Commission representatives said.
Toothpaste can be widely contaminated with lead and other dangerous heavy metals, new research shows.
Most of 51 brands of toothpaste tested for lead contained the dangerous heavy metal, including those for children or those marketed as green. The testing, conducted by Lead Safe Mama, also found concerning levels of highly toxic arsenic, mercury and cadmium in many brands.
About 90% of toothpastes contained lead, 65% contained arsenic, just under half contained mercury, and one-third had cadmium. Many brands contained several of these toxins.
The highest levels detected violated the state of Washington’s limits, but not federal limits. The thresholds have been roundly criticized by public health advocates for not being protective – no level of exposure to lead is safe, the federal government has found.
“It’s unconscionable – especially in 2025,” said Tamara Rubin, Lead Safe Mama’s founder. “What’s really interesting to me is that no one thought this was a concern.”
Lead can cause cognitive damage to children, harm the kidneys and cause heart disease, among other issues. Lead, mercury, cadmium and arsenic are all carcinogens.
Rubin first learned that lead-contaminated ingredients were added to toothpaste about 12 years ago while working with families that had children with high levels of the metal in their blood. The common denominator among them was a brand of toothpaste, Earthpaste, that contained lead.
Last year she detected high levels in some toothpaste using an XRF lead detection tool. The levels were high enough to raise concern, and she crowdfunded with readers to send popular brands to an independent laboratory for testing.
Among those found to contain the toxins were Crest, Sensodyne, Tom’s of Maine, Dr Bronner’s, Davids, Dr Jen and others.
So far, none of the companies Lead Safe Mama checked have said they will work to get lead out of their product, Rubin said. Several sent her cease-and-desist letters, which she said she ignored, but also posted on her blog.
[…] Scientists at Stanford University led the research, published in Nature. They compared people born just before and after becoming eligible for the shingles vaccine in a certain part of the UK, finding that vaccinated people were 20% less likely to be diagnosed with dementia over a seven-year period. More research is needed to understand and confirm this link, but the findings suggest shingles vaccination could become a cost-effective preventive measure against dementia.
[…]
the researchers took advantage of a natural experiment that occurred in Wales, UK, over a decade ago. In September 2013, a shingles vaccination program officially began in Wales, with a well-defined age eligibility. People born on or after September 2, 1933 (80 years and under) were eligible for at least one year for the shingles vaccine, whereas people born before then were not.
The clear cutoff date (and the UK’s well-maintained electronic health records) meant that the researchers could easily track dementia rates across the two groups born before and after September 1933. And because the people in these groups were so close together in age, they also shared many other factors that could potentially affect dementia risk, such as how often they saw doctors. This divide, in other words, allowed the researchers to study older people in Wales during this time in a manner similar to a randomized trial.
The researchers analyzed the health records of 280,000 residents born between 1925 and 1942. As expected, many vaccine-eligible people immediately took advantage of the new program: 47% of people born in the first week after the eligibility cutoff were vaccinated, while practically no one born before the cutoff received the vaccine, the researchers noted.
All in all, the researchers calculated that shingles vaccination in Wales was associated with a 20% decline in people’s relative risk of developing dementia over a seven-year period (in absolute terms, people’s risk of dementia dropped by 3.5%). They also analyzed data from England, where a similar cutoff period was enacted, and found the same pattern of reduced dementia risk (and deaths related to dementia) among those vaccinated against shingles.
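To illustrate the natural-experiment logic, here is a toy sketch of a regression-discontinuity-style comparison around the birthdate cutoff. It is not the study's actual statistical model, and the records and column names shown are hypothetical.

```python
import pandas as pd

# Hypothetical health-record extract: one row per person, with a birth date and
# a flag for whether dementia was diagnosed during the seven-year follow-up.
records = pd.DataFrame({
    "birth_date": pd.to_datetime(["1933-08-20", "1933-09-10", "1934-01-05", "1933-07-01"]),
    "dementia_dx": [1, 0, 0, 1],
})

CUTOFF = pd.Timestamp("1933-09-02")   # eligibility cutoff for the shingles vaccine
WINDOW = pd.Timedelta(weeks=26)       # compare only people born close to the cutoff

near_cutoff = records[(records.birth_date - CUTOFF).abs() <= WINDOW]
eligible = near_cutoff.birth_date >= CUTOFF

# People just either side of the cutoff are nearly identical in age and background,
# so a gap in dementia incidence points to the vaccine program itself.
incidence_eligible = near_cutoff.loc[eligible, "dementia_dx"].mean()
incidence_ineligible = near_cutoff.loc[~eligible, "dementia_dx"].mean()
print(incidence_eligible, incidence_ineligible)
```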
[…]
“For the first time, we now have evidence that likely shows a cause-and-effect relationship between shingles vaccination and dementia prevention,” Geldsetzer said. “We find these protective effects to be large in size—substantially larger than those of existing pharmacological tools for dementia.”
There are still unanswered questions about this link. Researchers aren’t sure exactly why the vaccine seems to lower dementia risk, for instance. Some but not all studies have suggested that herpes zoster and other germs that linger in our bodies can overtly cause or worsen people’s dementia, so the vaccine might be having a direct preventative effect there. But it’s also possible the vaccine is triggering changes in the immune system that more broadly keep the brain sharper, and that other vaccines could do the same as well.
Importantly, this latest study only looked at the earlier Zostavax vaccine, which has largely been replaced by the more effective Shingrix vaccine. This might mean that the results seen here are an underestimate of the benefits people can expect today. Just last July, for instance, a study from researchers in the UK found evidence that the Shingrix vaccine reduced people’s risk of dementia noticeably more than Zostavax. This finding, if further supported, would also support the idea that the herpes zoster virus is contributing to dementia.
AtmoSense, which began in late 2020, set out to understand the fundamentals of energy propagation from the Earth’s surface to the ionosphere to determine whether the atmosphere can be used as a sensor. A fundamental science effort, AtmoSense aimed to measure acoustic and electromagnetic waves propagating through the atmosphere to see if they could provide clues about the nature, location, and size of a disturbance event that occurred on Earth. Precisely locating illicit underground explosions by a rogue nation or identifying other national security-relevant events could be done in the future just by using signals detected and modeled from the atmosphere. The open-source tools developed under AtmoSense may be the first step toward “reading” — from extended distances — information contained in atmospheric waves propagating from an event happening anywhere in the world.
Benefits for a range of computationally complex problems
“High-resolution surface-to-space simulation of acoustic waves was considered impossible before the program began, but we accomplished it,” said Michael “Orbit” Nayak, DARPA AtmoSense program manager. “We used to call the ionosphere the ‘ignorosphere,’ but AtmoSense made some key interdisciplinary breakthroughs to address what used to be a massively intractable problem. We can now model across six orders of magnitude, in 3D, what happens to the energy emanating from a small, meters-scale disturbance as it expands up into the atmosphere to propagate over thousands of kilometers, and potentially around the world.”
[…]
An unplanned discovery: SpaceX Falcon 9 re-entries detected
Following one of the New Mexico test-range detonations in 2024, a performer team noticed something unusual in their analysis of sensor data.
“As the team was looking at the data, they saw a huge drop in what’s called total electron content that puzzled them,” Nayak said. “Imagine that you have water going through a hose. That’s a flow of electrons, and if you put your fist in front of the hose, you’ll notice a significant drop in water volume coming out of the hose.”
In preparing to analyze their field test data, the team noticed a similar sizable dip in the electron content compared to the background electron readings at a specific location in the atmosphere. As they did more forensics, they correlated the disturbance to a SpaceX Falcon 9 re-entry that happened the same day of the detonation test. Their sensor data had unexpectedly captured the SpaceX reentry into the atmosphere, resulting in the specific drop in electron content.
“Then they decided to pull other SpaceX reentry data, across dozens of launches, to see if they could spot a similar electron drop,” Nayak said. “The phenomenon is highly repeatable. We discovered an unplanned new technique for identifying objects entering the earth’s atmosphere.” The Embry-Riddle University team, led by Jonathan Snively and Matt Zettergren, in collaboration with Pavel Inchin of Computational Physics, Inc., has submitted their novel results for peer-reviewed publication.
Rice University researchers have developed an innovative solution to a pressing environmental challenge: removing and destroying per- and polyfluoroalkyl substances (PFAS), commonly called “forever chemicals.” A study led by James Tour, the T.T. and W.F. Chao Professor of Chemistry and professor of materials science and nanoengineering, and graduate student Phelecia Scotland unveils a method that not only eliminates PFAS from water systems but also transforms waste into high-value graphene, offering a cost-effective and sustainable approach to environmental remediation. This research was published March 31 in Nature Water.
[…]
“Our method doesn’t just destroy these hazardous chemicals; it turns waste into something of value,” Tour said. “By upcycling the spent carbon into graphene, we’ve created a process that’s not only environmentally beneficial but also economically viable, helping to offset the costs of remediation.”
The research team’s process employs flash joule heating (FJH) to tackle these challenges. By combining granular activated carbon (GAC) saturated with PFAS and mineralizing agents like sodium or calcium salts, the researchers applied a high voltage to generate temperatures exceeding 3,000 degrees Celsius in under one second. The intense heat breaks down the strong carbon-fluorine bonds in PFAS, converting them into inert, nontoxic fluoride salts. Simultaneously, the GAC is upcycled into graphene, a valuable material used in industries ranging from electronics to construction.
The process yielded more than 96% defluorination efficiency and 99.98% removal of perfluorooctanoic acid (PFOA), one of the most common PFAS pollutants. Analytical tests confirmed that the reaction produced undetectable amounts of harmful volatile organic fluorides, a common byproduct of other PFAS treatments. The method also eliminates the secondary waste associated with traditional disposal methods such as incineration or adding spent carbon to landfills.
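For context, defluorination efficiency of this kind is commonly reported as the share of fluorine originally bound in the PFAS that ends up recovered as inorganic fluoride; the paper's exact formula may differ, but a typical definition looks like this:

```latex
\[
\text{Defluorination efficiency} =
\frac{\text{mol of fluorine recovered as inorganic fluoride } (\mathrm{F^-})}
     {\text{mol of fluorine in the PFAS fed to the process}} \times 100\%
\]
```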
[…]
The implications of this research extend beyond PFOA and perfluorooctane sulfonic acid, the two most studied PFAS; it even works on the most recalcitrant PFAS type, Teflon. The high temperatures achieved during FJH suggest that this method could degrade a wide range of PFAS compounds, paving the way for broader water treatment and waste management applications. The FJH process can also be tailored to produce other valuable carbon-based materials, including carbon nanotubes and nanodiamonds, further enhancing its versatility and economic appeal.
“With its promise of zero net cost, scalability and environmental benefits, our method represents a step forward in the fight against forever chemicals,” Scotland said.
What if you could listen to music or a podcast without headphones or earbuds and without disturbing anyone around you? Or have a private conversation in public without other people hearing you?
Our newly published research introduces a way to create audible enclaves – localized pockets of sound that are isolated from their surroundings. In other words, we’ve developed a technology that could create sound exactly where it needs to be.
The ability to send sound that becomes audible only at a specific location could transform entertainment, communication and spatial audio experiences.
[…]
The science of audible enclaves
We found a new way to send sound to one specific listener: through self-bending ultrasound beams and a concept called nonlinear acoustics.
Ultrasound refers to sound waves with frequencies above the human hearing range, or above 20 kHz. These waves travel through the air like normal sound waves but are inaudible to people. Because ultrasound can penetrate through many materials and interact with objects in unique ways, it’s widely used for medical imaging and many industrial applications.
[…]
Normally, sound waves combine linearly, meaning they just proportionally add up into a bigger wave. However, when sound waves are intense enough, they can interact nonlinearly, generating new frequencies that were not present before.
This is the key to our technique: We use two ultrasound beams at different frequencies that are completely silent on their own. But when they intersect in space, nonlinear effects cause them to generate a new sound wave at an audible frequency that would be heard only in that specific region.
Crucially, we designed ultrasonic beams that can bend on their own. Normally, sound waves travel in straight lines unless something blocks or reflects them. However, by using acoustic metasurfaces – specialized materials that manipulate sound waves – we can shape ultrasound beams to bend as they travel. Similar to how an optical lens bends light, acoustic metasurfaces change the shape of the path of sound waves. By precisely controlling the phase of the ultrasound waves, we create curved sound paths that can navigate around obstacles and meet at a specific target location.
The key phenomenon at play is what’s called difference frequency generation. When two ultrasonic beams of slightly different frequencies, such as 40 kHz and 39.5 kHz, overlap, they create a new sound wave at the difference between their frequencies – in this case 0.5 kHz, or 500 Hz, which is well within the human hearing range. Sound can be heard only where the beams cross. Outside of that intersection, the ultrasound waves remain silent.
This means you can deliver audio to a specific location or person without disturbing other people as the sound travels.
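A quick numerical sketch (illustrative only, not the researchers' code) shows where that 500 Hz tone comes from: pass the sum of a 40 kHz and a 39.5 kHz tone through a quadratic nonlinearity, which is roughly what intense sound in air does, and the audible part of the output spectrum contains a component at the 0.5 kHz difference frequency.

```python
import numpy as np

fs = 200_000                      # sample rate well above both ultrasound tones
t = np.arange(0, 0.1, 1 / fs)     # 100 ms of signal
f1, f2 = 40_000.0, 39_500.0       # two ultrasound beams, each inaudible on its own

linear_sum = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
nonlinear = linear_sum ** 2       # simple quadratic nonlinearity

spectrum = np.abs(np.fft.rfft(nonlinear))
freqs = np.fft.rfftfreq(len(nonlinear), 1 / fs)

# The strongest component in the audible band sits at |f1 - f2| = 500 Hz.
audible_mask = (freqs > 20) & (freqs < 20_000)
peak = freqs[audible_mask][np.argmax(spectrum[audible_mask])]
print(f"dominant audible component: {peak:.0f} Hz")
```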
[…]
This isn’t something that’s going to be on the shelf in the immediate future. For instance, challenges remain for our technology. Nonlinear distortion can affect sound quality. And power efficiency is another issue – converting ultrasound to audible sound requires high-intensity fields that can be energy intensive to generate.
Despite these hurdles, audible enclaves represent a fundamental shift in sound control. By redefining how sound interacts with space, we open up new possibilities for immersive, efficient and personalized audio experiences.
We have developed a breakthrough method to convert carbon nanoparticles (CNPs) from vehicular emissions into high-performance electrocatalysts. This innovation provides a sustainable approach to pollution management and energy production by repurposing harmful particulate matter into valuable materials for renewable energy applications.
Our work, published in Carbon Neutralization, addresses both environmental challenges and the growing demand for efficient, cost-effective clean energy solutions.
Advancing electrocatalysis with multiheteroatom-doped CNPs
By doping CNPs with boron, nitrogen, oxygen and sulfur, we have significantly enhanced their catalytic performance. These multiheteroatom-doped nanoparticles exhibit remarkable efficiency in key electrochemical reactions. Our catalysts demonstrate high activity in the oxygen reduction reaction (ORR), which is essential for fuel cells and energy storage systems, as well as in the hydrogen evolution reaction (HER), a crucial process for hydrogen fuel production.
Additionally, they show superior performance in the oxygen evolution reaction (OER), advancing water splitting for green hydrogen generation. By optimizing the composition of these materials, we have created an effective alternative to conventional precious metal-based catalysts, improving both cost-efficiency and sustainability.
[..]
Our research has far-reaching implications for clean energy and sustainable transportation industries. These catalysts can be integrated into fuel cells, enabling more efficient power generation for electric vehicles and energy storage systems. They also play a vital role in hydrogen production, supporting the transition to a hydrogen-based economy. Additionally, their use in renewable energy storage systems enhances the stability of wind and solar power generation.
While our findings demonstrate significant promise, further research is needed to scale up production, optimize material stability, and integrate these catalysts into commercial applications.
“Climate” is defined by temperatures over longer periods of time — typically 20-to-30-year averages — rather than single-year data points. But even when based on these longer-term averages, the world has still warmed by around 1.3°C.
But you’ll also notice, in the chart, that temperatures haven’t increased linearly. There are spikes and dips along the long-run trend.
Many of these short-term fluctuations are caused by “ENSO” — the El Niño-Southern Oscillation — a natural climate cycle caused by changes in wind patterns and sea surface temperatures in the Pacific Ocean.
While it’s caused by patterns in the Pacific Ocean and most strongly affects countries in the tropics, it also impacts global temperatures and climate.
There are two key phases of this cycle: the La Niña phase, which tends to cause cooler global temperatures, and the El Niño phase, which brings hotter conditions. The world cycles between El Niño and La Niña phases every two to seven years. There are also “neutral” periods between these phases where the world is not in either extreme.
The zig-zag trend of global temperatures becomes understandable when you take the phases of the ENSO cycle into account. In the chart below, we see the data on global temperatures, but the line is now colored by the ENSO phase at that time.
The El Niño (warm phase) is shown in orange and red, and the La Niña (cold phase) is shown in blue.
You can see that temperatures often reach a short-term peak during warm El Niño years before falling back slightly as the world moves into La Niña years, shown in blue.
What’s striking is that global temperatures during recent La Niña years were warmer than El Niño years just a few decades before. “Cold years” today are hotter than “hot years” not too long ago.
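For readers who want to reproduce this kind of chart, here is a minimal plotting sketch; the file name and column names are assumptions for illustration, not the authors' actual data pipeline.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Assumed CSV layout: year, global temperature anomaly (°C), and an ENSO phase
# label ("el_nino", "la_nina", or "neutral") for that year.
df = pd.read_csv("global_temps_enso.csv")

colors = {"el_nino": "tab:red", "la_nina": "tab:blue", "neutral": "tab:gray"}

plt.plot(df["year"], df["anomaly"], color="lightgray", zorder=1)   # the long-run zig-zag
for phase, group in df.groupby("enso_phase"):
    plt.scatter(group["year"], group["anomaly"],
                color=colors[phase], label=phase.replace("_", " "), zorder=2)

plt.xlabel("Year")
plt.ylabel("Temperature anomaly (°C)")
plt.legend()
plt.show()
```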
The best treatment for a hard knock on the head might someday involve a quick sniff of a nasal spray. Researchers have found early evidence in mice that an antibody-based treatment delivered up the nose can reduce the brain damage caused by concussions and more serious traumatic injuries.
Scientists at Mass General Brigham conducted the study, published Thursday in Nature Neuroscience. In brain-injured mice, the experimental spray appeared to improve the brain’s natural acute healing process while also reducing damaging inflammation later on. The findings could lead to a genuine prophylactic against the long-term impacts of traumatic brain injuries and other conditions like stroke, the researchers say.
[…]
Foralumab, developed by the company Tiziana Life Sciences, targets CD3, a specific group of proteins that interact with the brain’s immune cells. This suppression of CD3, the team’s earlier work has suggested, increases the activity of certain immune cells known as regulatory T cells (Treg). As the name implies, these cells help regulate the brain’s immune response to make sure it doesn’t go haywire.
[…]
In their latest mouse study, the researchers found that foralumab—via the increased activity of Treg cells—improved aspects of the brain’s immediate healing from a traumatic injury. The dosed mice’s microglia (the brain’s unique first line of immune defense) became better at eating and cleaning up damaged cells, for instance. Afterward, the drug also appeared to prevent microglia from becoming chronically inflamed. As a result, relative to mice in a control group, mice treated with foralumab up to three days post-injury experienced greater improvements in their motor function and coordination.
What if your electronic devices could adapt on the fly to temperature, pressure, or impact? Thanks to a new breakthrough in downsizing quantum materials, that idea is becoming a reality.
In an article published this month in Applied Physics Express, a multi-institutional research team led by Osaka University announced that they have successfully synthesized an ultrathin vanadium dioxide film on a flexible substrate, in a way that preserves the film’s electrical properties.
Vanadium dioxide is well known in the scientific community for its ability to transition between conductor and insulator phases at nearly room temperature. This phase transition underpins smart and adaptable electronics that can adjust to their environment in real time. But there is a limit to how thin vanadium dioxide films can be, because making a material too small affects its ability to conduct or insulate electricity.
“Ordinarily, when a film is placed on a hard substrate, strong surface forces interfere with the atomic structure of the film and degrade its conductive properties,” explains Boyuan Yu, lead author of the study.
To overcome this limitation, the team prepared their films on two-dimensional hexagonal boron nitride (hBN) crystals; hBN is a highly stable soft material that does not have strong bonds with oxides and thus does not excessively strain the film or spoil its delicate structure.
“The results are truly surprising,” says Hidekazu Tanaka, senior author. “We find that by using this soft substrate, the material structure is very nearly unaffected.”
By performing precise spectroscopy measurements, the team was able to confirm that the phase transition temperature of their vanadium dioxide layers remained essentially unchanged, even at thicknesses as thin as 12 nm.
“This discovery significantly improves our ability to manipulate quantum materials in practical ways,” says Yu. “We have gained a new level of control over the transition process, which means we can now tailor these materials to specific applications like sensors and flexible electronics.”
Given that quantum materials like vanadium dioxide play a crucial role in the design of microsensors and devices, this discovery could pave the way for functional and adaptable electronics that can be attached anywhere. The research team is currently working on such devices, as well as exploring ways to incorporate even thinner films and substrates.
A robotic hand exoskeleton can help expert pianists learn to play even faster by moving their fingers for them.
Robotic exoskeletons have long been used to rehabilitate people who can no longer use their hands due to an injury or medical condition, but using them to improve the abilities of able-bodied people has been less well explored.
Now, Shinichi Furuya at Sony Computer Science Laboratories in Tokyo and his colleagues have found that a robotic exoskeleton can improve the finger speed of trained pianists after a single 30-minute training session.
[…]
The robotic exoskeleton can raise and lower each finger individually, up to four times a second, using a separate motor attached to the base of each finger.
To test the device, the researchers recruited 118 expert pianists who had all played since before they had turned 8 years old and for at least 10,000 hours, and asked them to practise a piece for two weeks until they couldn’t improve.
Then, the pianists received a 30-minute training session with the exoskeleton, which moved the fingers of their right hand in different combinations of simple and complex patterns, either slowly or quickly, so that Furuya and his colleagues could pinpoint what movement type caused improvement.
The pianists who experienced the fast and complex training could better coordinate their right hand movements and move the fingers of either hand faster, both immediately after training and a day later. This, together with evidence from brain scans, indicates that the training changed the pianists’ sensory cortices to better control finger movements in general, says Furuya.
“This is the first time I’ve seen somebody use [robotic exoskeletons] to go beyond normal capabilities of dexterity, to push your learning past what you could do naturally,” says Nathan Lepora at the University of Bristol, UK. “It’s a bit counterintuitive why it worked, because you would have thought that actually performing the movements yourself voluntarily would be the way to learn, but it seems passive movements do work.”
You can probably complete an amazing number of tasks with your hands without looking at them. But if you put on gloves that muffle your sense of touch, many of those simple tasks become frustrating. Take away proprioception — your ability to sense your body’s relative position and movement — and you might even end up breaking an object or injuring yourself.
[…]
Greenspon and his research collaborators recently published papers in Nature Biomedical Engineering and Science documenting major progress on a technology designed to address precisely this problem: direct, carefully timed electrical stimulation of the brain that can recreate tactile feedback to give nuanced “feeling” to prosthetic hands.
[…]
The researchers’ approach to prosthetic sensation involves placing tiny electrode arrays in the parts of the brain responsible for moving and feeling the hand. On one side, a participant can move a robotic arm by simply thinking about movement, and on the other side, sensors on that robotic limb can trigger pulses of electrical activity called intracortical microstimulation (ICMS) in the part of the brain dedicated to touch.
For about a decade, Greenspon explained, this stimulation of the touch center could only provide a simple sense of contact in different places on the hand.
“We could evoke the feeling that you were touching something, but it was mostly just an on/off signal, and often it was pretty weak and difficult to tell where on the hand contact occurred,” he said.
[…]
By delivering short pulses to individual electrodes in participants’ touch centers and having them report where and how strongly they felt each sensation, the researchers created detailed “maps” of brain areas that corresponded to specific parts of the hand. The testing revealed that when two closely spaced electrodes are stimulated together, participants feel a stronger, clearer touch, which can improve their ability to locate and gauge pressure on the correct part of the hand.
The researchers also conducted exhaustive tests to confirm that the same electrode consistently creates a sensation corresponding to a specific location.
“If I stimulate an electrode on day one and a participant feels it on their thumb, we can test that same electrode on day 100, day 1,000, even many years later, and they still feel it in roughly the same spot,” said Greenspon, who was the lead author on this paper.
[…]
The complementary Science paper went a step further to make artificial touch even more immersive and intuitive. The project was led by first author Giacomo Valle, PhD, a former postdoctoral fellow at UChicago who is now continuing his bionics research at Chalmers University of Technology in Sweden.
“Two electrodes next to each other in the brain don’t create sensations that ‘tile’ the hand in neat little patches with one-to-one correspondence; instead, the sensory locations overlap,” explained Greenspon, who shared senior authorship of this paper with Bensmaia.
The researchers decided to test whether they could use this overlapping nature to create sensations that could let users feel the boundaries of an object or the motion of something sliding along their skin. After identifying pairs or clusters of electrodes whose “touch zones” overlapped, the scientists activated them in carefully orchestrated patterns to generate sensations that progressed across the sensory map.
Participants described feeling a gentle gliding touch passing smoothly over their fingers, despite the stimulus being delivered in small, discrete steps. The scientists attribute this result to the brain’s remarkable ability to stitch together sensory inputs and interpret them as coherent, moving experiences by “filling in” gaps in perception.
The approach of sequentially activating electrodes also significantly improved participants’ ability to distinguish complex tactile shapes and respond to changes in the objects they touched. They could sometimes identify letters of the alphabet electrically “traced” on their fingertips, and they could use a bionic arm to steady a steering wheel when it began to slip through the hand.
These advancements help move bionic feedback closer to the precise, complex, adaptive abilities of natural touch, paving the way for prosthetics that enable confident handling of everyday objects and responses to shifting stimuli.
[…]
“We hope to integrate the results of these two studies into our robotics systems, where we have already shown that even simple stimulation strategies can improve people’s abilities to control robotic arms with their brains,” said co-author Robert Gaunt, PhD, associate professor of physical medicine and rehabilitation and lead of the stimulation work at the University of Pittsburgh.
Greenspon emphasized that the motivation behind this work is to enhance independence and quality of life for people living with limb loss or paralysis.
Scientists have used high-energy particle collisions to peer inside protons, the particles that sit inside the nuclei of all atoms. This has revealed for the first time that quarks and gluons, the building blocks of protons, experience the phenomenon of quantum entanglement.
[…]
despite Einstein’s skepticism about entanglement, this “spooky” phenomenon has been verified over and over again. Many of those verifications have involved testing entanglement over ever-greater distances. This new test took the opposite approach, investigating entanglement over a distance of just one quadrillionth of a meter and finding that it actually occurs within individual protons.
The team found that the sharing of information that defines entanglement occurs across whole groups of fundamental particles called quarks and gluons within a proton.
[…]
To probe the inner structure of protons, scientists looked at high-energy particle collisions that have occurred in facilities like the Large Hadron Collider (LHC). When particles collide at extremely high speeds, other particles stream away from the collision like wreckage flung away from a crash between two vehicles.
This team used a technique developed in 2017 that applies quantum information science to electron-proton collisions to determine how entanglement influences the paths of particles streaming away. If quarks and gluons are entangled within a proton, this technique says that should be revealed by the disorder, or “entropy,” seen in the sprays of daughter particles.
“Think of a kid’s messy bedroom, with laundry and other things all over the place,” Tu said. “In that disorganized room, the entropy is very high.”
The contrast to this is a low-entropy situation which is akin to a neatly tidied and sorted bedroom in which everything is organized in its proper place. A messy room indicates entanglement, if you will.
“For a maximally entangled state of quarks and gluons, there is a simple relation that allows us to predict the entropy of particles produced in a high-energy collision,” Brookhaven Lab theorist Dmitri Kharzeev said in the statement. “We tested this relation using experimental data.”
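For reference, the entropy side of that relation is the textbook one: if the probed quarks and gluons are maximally entangled with the rest of the proton, every one of the N accessible microstates is equally likely, and the entanglement entropy takes its maximum value.

```latex
\[
S = -\sum_{i=1}^{N} p_i \ln p_i = \ln N
\qquad \text{when } p_i = \tfrac{1}{N}\ \text{(maximal entanglement)}.
\]
```

The study's specific contribution is linking this maximal entropy to the measured entropy of the hadrons produced in each collision; that mapping is particular to the paper and is not reproduced here.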
To investigate how “messy” particles get after a collision, the team first turned to data generated by proton-proton collisions conducted at the LHC. Then, in search of “cleaner” data, the researchers looked to electron-proton collisions carried out at the Hadron-Electron Ring Accelerator (HERA) particle collider from 1992 to 2007.
This data was delivered by the H1 team and its spokesperson as well as Deutsches Elektronen-Synchrotron (DESY) researcher Stefan Schmitt after a three-year search through HERA results.
When the team compared the HERA data with the entropy calculations, the results matched the predictions perfectly, providing strong evidence that quarks and gluons inside protons are maximally entangled.
“Entanglement doesn’t only happen between two particles but among all the particles,” Kharzeev said. “Maximal entanglement inside the proton emerges as a consequence of strong interactions that produce a large number of quark-antiquark pairs and gluons.”
The revelation of maximal entanglement of quarks and gluons within protons could help reveal what keeps these fundamental particles bound together inside the building blocks of atomic nuclei.
The Pew survey found 76 percent of respondents voicing “a great deal or fair amount of confidence in scientists to act in the public’s best interests.” That’s up a bit from last year, but still down from pre-pandemic measures, suggesting that an additional one in 10 Americans has lost confidence in scientists since 2019.
The Pew survey’s results, however, show this propaganda worked on some Republican voters. The drop in public confidence in science the survey reports is almost entirely contained to that circle, plunging from 85 percent approval among Republican voters in April of 2020 to 66 percent now. It hardly budged for those not treated to nightly doses of revisionist history in an echo chamber—where outlets pretended that masking, school and business restrictions, and vaccines weren’t necessities in staving off a deadly new disease. Small wonder that Republican voters’ excess death rates were 1.5 times those among Democrats after COVID vaccines appeared.
Instead of noting the role of this propaganda in their numbers, Pew’s statement about the survey pointed only to perceptions that scientists aren’t “good communicators” (held by 52 percent of respondents) and to the 47 percent who said “research scientists feel superior to others.”
[…]
it matches the advice in a December NASEM report on scientific misinformation: “Scientists, medical professionals, and health professionals who choose to take on high profile roles as public communicators of science should understand how their communications may be misinterpreted in the absence of context or in the wrong context.” This completely ignores the deliberate misinterpretation of science to advance political aims, the chief kind of science misinformation dominating the modern public sphere.
It isn’t a secret what is going on: Oil industry–funded lawmakers and other mouthpieces have similarly vilified climate scientists for decades to stave off paying the price for global warming. A study published in 2016 in the American Sociological Review concluded that the U.S. public’s slow erosion of trust in science from 1974 to 2010 was almost entirely among conservatives. Such conservatives had adopted “limited government” politics, which clashes with science’s “fifth branch” advisory role in setting regulations—seen most clearly in the FDA resisting Trump’s calls for wholesale approval of dangerous drugs to treat COVID. That flavor of politics made distrust for scientists the collateral damage of the half-century-long attack on regulation. The utter inadequacy of an unscientific, limited-government response to the 2020 pandemic only primed this resentment—fanned by hate aimed at Fauci—to deliver the dent in trust for science we see today.
[…] Traditionally, entanglement is achieved through local interactions or via entanglement swapping, where entanglement at a distance is generated through previously established entanglement and Bell-state measurements. However, the precise requirements enabling the generation of quantum entanglement without traditional local interactions remain less explored. Here, we demonstrate that independent particles can be entangled without the need for direct interaction, prior established entanglement, or Bell-state measurements, by exploiting the indistinguishability of the origins of photon pairs. Our demonstrations challenge the long-standing belief that the prior generation and measurement of entanglement are necessary prerequisites for generating entanglement between independent particles that do not share a common past. In addition to its foundational interest, we show that this technique might lower the resource requirements in quantum networks, by reducing the complexity of photon sources and the overhead photon numbers.
Imagine if scientists could grab virus particles the same way we pick up a tennis ball or a clementine, and prevent them from infecting cells. Well, scientists in Illinois have built a microscopic four-fingered hand to do just that.
A team of scientists, led by Xing Wang of the University of Illinois Urbana-Champaign, has created a tiny hand, dubbed the NanoGripper, from a single piece of folded DNA that can grab covid-19 particles. Their findings, detailed in a November 27 study published in the journal Science Robotics, demonstrate that the hand can conduct a rapid test to identify the virus as well as prevent the particles from infecting healthy cells. Although the study focused specifically on the covid-19 virus, the results have important implications for numerous medical conditions.
“We wanted to make a soft material, nanoscale robot with grabbing functions that never have been seen before, to interact with cells, viruses and other molecules for biomedical applications,” Wang said in a university statement. “We are using DNA for its structural properties. It is strong, flexible and programmable. Yet even in the DNA origami field, this is novel in terms of the design principle. We fold one long strand of DNA back and forth to make all of the elements, both the static and moving pieces, in one step.”
The NanoGripper has four jointed fingers and a palm. The fingers are programmed to attach to specific targets—in the case of covid-19, the virus’ infamous spike protein—and close their grip around them. According to the study, when the researchers exposed cells with NanoGrippers to covid-19, the hands’ gripping mechanisms prevented the viral spike proteins from infecting the cells.
“It would be very difficult to apply it after a person is infected, but there’s a way we could use it as a preventive therapeutic,” Wang explained. “We could make an anti-viral nasal spray compound. The nose is the hot spot for respiratory viruses, like covid or influenza. A nasal spray with the NanoGripper could prevent inhaled viruses from interacting with the cells in the nose.”
The hand is also decked with a unique sensor that detects covid-19 in 30 minutes with the accuracy of the now-familiar qPCR molecular tests used in hospitals.
“When the virus is held in the NanoGripper’s hand, a fluorescent molecule is triggered to release light when illuminated by an LED or laser,” said Brian Cunningham, one of Wang’s colleagues on the study, also from the University of Illinois Urbana-Champaign. “When a large number of fluorescent molecules are concentrated upon a single virus, it becomes bright enough in our detection system to count each virus individually.”
Like a true Swiss army knife, scientists could modify the NanoGripper to potentially detect and grab other viruses, including HIV, influenza, or hepatitis B, as detailed in the study. The NanoGripper’s “wrist side” could also attach to another biomedical tool for additional functions, such as targeted drug delivery.
Wang, however, is thinking even bigger than viruses: cancer. The fingers could be programmed to target cancer cells the same way they currently identify covid-19’s spike proteins, and then deliver focused cancer-fighting treatments.
“Of course it would require a lot of testing, but the potential applications for cancer treatment and the sensitivity achieved for diagnostic applications showcase the power of soft nanorobotics,” Wang concluded.
Here’s to hoping NanoGrippers might give scientists the ability to grab the next pandemic by the nanoballs.
From 23% less in the northern Alps to a decrease of almost 50% on the southwestern slopes: Between 1920 and 2020, snowfall across the entirety of the Alps decreased on average by a significant 34%. The results come from a study coordinated by Eurac Research and published in the International Journal of Climatology. The study also examines how much altitude and climatological parameters such as temperature and total precipitation affect snowfall.
The data on seasonal snowfall and rainfall was collected from 46 sites throughout the Alps: the most recent data came from modern weather stations, while the historical data was gathered from handwritten records in which specially appointed observers noted how many inches of snow were deposited at a given location.
[…]
“The most negative trends concern locations below an altitude of 2,000 meters and are in the southern regions such as Italy, Slovenia and part of the Austrian Alps.”
In the Alpine areas to the north, such as Switzerland and northern Tyrol, the research team observed the extent to which altitude also plays a central role. Although there has been an increase in precipitation during the winter seasons, at lower altitudes snowfall has increasingly turned to rain as temperatures have risen. At higher elevations, however, thanks to sufficiently cold temperatures, snowfall is being maintained. In the southwestern and southeastern areas, temperatures have risen so much that even at higher elevations rain is frequently replacing snow.
Misinformation can lead to socially detrimental behavior, which makes finding ways to combat its effects a matter of crucial public concern. A new paper by researchers at the Annenberg Public Policy Center (APPC) in the Journal of Experimental Psychology: General explores an innovative approach to countering the impact of factually incorrect information called “bypassing,” and finds that it may have advantages over the standard approach of correcting inaccurate statements.
“The gold standard for tackling misinformation is a correction that factually contradicts the misinformation” by directly refuting the claim […]
in the study “Bypassing versus correcting misinformation: Efficacy and fundamental processes.” Corrections can work, but countering misinformation this way is an uphill battle: people don’t like to be contradicted, and a belief, once accepted, can be difficult to dislodge.
Bypassing works differently. Rather than directly addressing the misinformation, this strategy involves offering accurate information that has an implication opposite to that of the misinformation. For example, faced with the factually incorrect statement “genetically modified foods have health risks,” a bypassing approach might highlight the fact that genetically modified foods help the bee population. This counters the negative implication of the misinformation with positive implications, without taking the difficult path of confrontation.
[…]
“bypassing can generally be superior to correction, specifically in situations when people are focused on forming beliefs, but not attitudes, about the information they encounter.” This is because “when an attitude is formed, it serves as an anchor for a person’s judgment of future claims. When a belief is formed, there is more room for influence, and a bypassing message generally exerts more.”