Growing cartilage tissue in the lab could help patients with injuries, but it is very hard to make the tissue grow in exactly the right shape. A new approach could solve this problem: tiny spherical containers are created with a high-resolution 3D printer. These containers are then filled with cells and assembled into the desired shape. The cells from different containers connect, while the containers themselves are degradable and eventually disappear.
[…]
A special high-resolution 3D printing process is used to create tiny, porous spheres made of biocompatible and degradable plastic, which are then colonized with cells. These spheroids can then be arranged in any geometry, and the cells of the different units combine seamlessly to form a uniform, living tissue. Cartilage tissue, with which the concept has now been demonstrated at TU Wien, was previously considered particularly challenging in this respect.
Tiny spherical cages as a scaffold for the cells
“Cultivating cartilage cells from stem cells is not the biggest challenge. The main problem is that you usually have little control over the shape of the resulting tissue,”
[…]
To prevent this, the research team at TU Wien is working with a new approach: specially developed laser-based high-resolution 3D printing systems are used to create tiny cage-like structures that look like mini footballs and have a diameter of just a third of a millimeter. They serve as a support structure and form compact building blocks that can then be assembled into any shape.
Stem cells are first introduced into these football-shaped mini-cages and quickly fill the tiny volume completely.
[…]
The team used differentiated stem cells — i.e. stem cells that can no longer develop into any type of tissue, but are already predetermined to form a specific type of tissue, in this case cartilage tissue.
[…]
The tiny 3D-printed scaffolds give the overall structure mechanical stability while the tissue continues to mature. Over a period of a few months, the plastic structures degrade and simply disappear, leaving behind the finished tissue in the desired shape.
First step towards medical application
In principle, the new approach is not limited to cartilage tissue, it could also be used to tailor different kinds of larger tissues such as bone tissue. However, there are still a few tasks to be solved along the way — after all, unlike in cartilage tissue, blood vessels would also have to be incorporated for these tissues above a certain size.
“An initial goal would be to produce small, tailor-made pieces of cartilage tissue that can be inserted into existing cartilage material after an injury,” says Oliver Kopinski-Grünwald. “In any case, we have now been able to show that our method for producing cartilage tissue using spherical micro-scaffolds works in principle and has decisive advantages over other technologies.”
Researchers have profiled the entire immune system in young children to compare their response to SARS-CoV-2 with that of adults. The results, published in Cell, show that infants’ systems mount a strong innate response in their noses, where the airborne virus usually enters the body. And unlike adults, babies don’t exhibit widespread inflammatory signaling throughout their circulatory system, perhaps preventing severe COVID.
The research team, led by Stanford Medicine immunologist Bali Pulendran, took blood samples from 81 infants (54 of whom became infected with the virus between one month and three years of age) and dozens of adults. The researchers also took weekly nasal swabs from kids and adults with and without COVID. They then analyzed proteins and gene activity in these samples to track participants’ innate and adaptive immune responses to the virus. “This sort of longitudinal mapping of the immune response of infants, to any virus, had not been done before,” Pulendran says.
The team found stark differences between children and adults in both adaptive and innate immune responses. Infected infants’ noses were flooded with inflammatory signaling molecules and cells. But unlike in the adults, there were no signs of inflammation in their blood.
[…]
Even without a widespread innate response, young children had surprisingly long-lasting levels of SARS-specific antibodies in their blood, Pulendran says. Future research revealing how these innate and adaptive responses are linked could eventually help improve nasally delivered vaccines for children and, potentially, adults.
A crucial question remains: What makes SARS-CoV-2 different from other respiratory viruses, such as influenza and respiratory syncytial virus, which are more deadly for infants?
Delivering medication to the lungs with inhalable nanoparticles may help treat chronic obstructive pulmonary disease (COPD). In mice with signs of the condition, the treatment improved lung function and reduced inflammation.
COPD causes the lungs’ airways to become progressively narrower and more rigid, obstructing airflow and preventing the clearance of mucus. As a result, mucus accumulates in the lungs, attracting bacterial pathogens that further exacerbate the disease.
This thick mucus layer also traps medications, making it challenging to treat infections. So, Junliang Zhu at Soochow University in China and his colleagues developed inhalable nanoparticles capable of penetrating mucus to deliver medicine deep within the lungs.
The researchers constructed the hollow nanoparticles from porous silica, which they filled with an antibiotic called ceftazidime. A shell of negatively charged compounds surrounding the nanoparticles blocked off pores, preventing antibiotic leakage. This negative charge also helps the nanoparticles penetrate mucus. Then, the slight acidity of the mucus transforms the shells’ charge from negative to positive, opening up pores and releasing the medication.
The researchers used an inhalable spray containing the nanoparticles to treat a bacterial lung infection in six mice with signs of COPD. An equal number of animals received only the antibiotic.
On average, mice treated with the nanoparticles had about 98 per cent less pathogenic bacteria inside their lungs than those given just the antibiotic. They also had fewer inflammatory molecules in their lungs and lower carbon dioxide in their blood, indicating better lung function.
These findings suggest the nanoparticles could improve drug delivery in people with COPD or other lung conditions like cystic fibrosis where thick mucus makes it difficult to treat infections, says Vincent Rotello at the University of Massachusetts Amherst, who wasn’t involved in the study. However, it is unclear if these nanoparticles are cleared by the lungs. “If you have a delivery system that builds up over time, that would be problematic,” he says.
The rainbow looks different to a human than it does to a honeybee or a zebra finch. That’s because these animals can see colors that we humans simply can’t. Now scientists have developed a new video recording and analysis technique to better understand how the world looks through the eyes of other species. The accurate and relatively inexpensive method, described in a study published on January 23 in PLOS Biology, is already offering biologists surprising discoveries about the lives of different species.
Humans have three types of cone cells in their eyes. This trio of photoreceptors typically detects red, green and blue wavelengths of light, which combine into millions of distinct colors in the spectrum from 380 to 700 nanometers in wavelength—what we call “visible light.” Some animals, though, can see light with even higher frequencies, called ultraviolet, or UV, light. Most birds have this ability, along with honeybees, reptiles and certain bony fish.
[…]
To capture animal vision on video, Vasas and her colleagues developed a portable 3-D-printed enclosure containing a beam splitter that separates light into UV and the human-visible spectrum. The two streams are captured by two different cameras. One is a standard camera that detects visible-wavelength light, and the other is a modified camera that is sensitive to UV. On its own, the UV-sensitive camera wouldn’t be able to record detailed information on the rest of the light spectrum in a single shot. But paired together, the two cameras can simultaneously record high-quality video that encompasses a wide range of the light spectrum. Then a set of algorithms aligns the two videos and produces versions of the footage that are representative of different animals’ color views, such as those of birds or bees.
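The final step, turning aligned UV and visible footage into an animal's colour view, amounts to remapping the camera channels onto that animal's photoreceptors. The sketch below is a minimal, hypothetical illustration of that remapping in Python; the sensitivity matrix is invented for the example, and real pipelines, including the one in the study, rely on measured receptor sensitivities and careful calibration.

```python
import numpy as np

def animal_false_color(uv, blue, green, red, weights):
    """Map aligned camera channels (2-D arrays, values in 0..1) to an
    animal's photoreceptor catches, then return them stacked so they can
    be displayed as a false-colour image. `weights` rows = animal
    receptors, columns = camera channels (UV, B, G, R)."""
    stack = np.stack([uv, blue, green, red], axis=-1)   # H x W x 4
    catches = stack @ weights.T                          # H x W x n_receptors
    catches /= catches.max()                             # normalise for display
    return catches

# Hypothetical honeybee-like mapping: three receptors (UV, blue, green),
# each drawing mostly on one camera channel. Real sensitivity curves are
# measured, not guessed like this.
bee_weights = np.array([
    [1.0, 0.1, 0.0, 0.0],   # UV receptor
    [0.1, 1.0, 0.2, 0.0],   # blue receptor
    [0.0, 0.2, 1.0, 0.3],   # green receptor
])

h, w = 4, 4
frame = {c: np.random.rand(h, w) for c in ("uv", "b", "g", "r")}
bee_view = animal_false_color(frame["uv"], frame["b"], frame["g"], frame["r"], bee_weights)
print(bee_view.shape)  # (4, 4, 3): UV/blue/green catches shown as an RGB image
```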
[…]
Capturing video in this way “fills a really important gap in our ability to model animal vision,” says Jolyon Troscianko, a visual ecologist at the University of Exeter in England, who wasn’t involved in the new research. He notes that in nature, “a lot of interesting things move,” such as animals that are engaging in mating dances or rapid defense displays. Until now, researchers studying these dynamic behaviors have been stuck with the human perspective.
[…]
The technique is already revealing unseen phenomena of the natural world, she adds: for example, by recording an iridescent peacock feather rotating under a light, the researchers found shifts in color that are even more vibrant to fellow peafowl than they are to humans. Vasas and her colleagues also captured the brief startle display of a black swallowtail caterpillar and saw for the first time that its hornlike defense appendages are UV-reflective.
“None of these things were hypotheses that we had in advance,” Vasas says. Moving forward, “I think it will reveal a lot of things that I can’t yet imagine.”
The humble membranes that enclose our cells have a surprising superpower: They can push away nano-sized molecules that happen to approach them. A team including scientists at the National Institute of Standards and Technology (NIST) has figured out why, by using artificial membranes that mimic the behavior of natural ones. Their discovery could make a difference in how we design the many drug treatments that target our cells.
The team’s findings, which appear in the Journal of the American Chemical Society, confirm that the powerful electrical fields that cell membranes generate are largely responsible for repelling nanoscale particles from the surface of the cell.
This repulsion notably affects neutral, uncharged nanoparticles, in part because the smaller, charged molecules the electric field attracts crowd the membrane and push away the larger particles. Since many drug treatments are built around proteins and other nanoscale particles that target the membrane, the repulsion could play a role in the treatments’ effectiveness.
The findings provide the first direct evidence that the electric fields are responsible for the repulsion.
[…]
Membranes form boundaries in nearly all kinds of cells. Not only does a cell have an outer membrane that contains and protects the interior, but often there are other membranes inside, forming parts of organelles such as mitochondria and the Golgi apparatus. Understanding membranes is important to medical science, not least because proteins lodged in the cell membrane are frequent drug targets. Some membrane proteins are like gates that regulate what gets into and out of the cell.
The region near these membranes can be a busy place. Thousands of types of different molecules crowd each other and the cell membrane—and as anyone who has tried to push through a crowd knows, it can be tough going. Smaller molecules such as salts move with relative ease because they can fit into tighter spots, but larger molecules, such as proteins, are limited in their movements.
[…]
“How does crowding affect the cell and its behavior?” he said. “How, for example, do molecules in this soup get sorted inside the cell, making some of them available for biological functions, but not others? The effect of the membrane could make a difference.”
[…]
scientists have paid scant attention to this effect at the nanoscale because it takes extremely powerful fields to move nanoparticles. But powerful fields are just what an electrically charged membrane generates.
“The electric field right near a membrane in a salty solution like our bodies produce can be astoundingly strong,” Hoogerheide said. “Its strength falls off rapidly with distance, creating large field gradients that we figured might repel nearby particles. So we used neutron beams to look into it.”
Neutrons can distinguish between different isotopes of hydrogen, and the team designed experiments that explored a membrane’s effect on nearby molecules of PEG, a polymer that forms chargeless nano-sized particles. Hydrogen is a major constituent of PEG, and by immersing the membrane and PEG into a solution of heavy water—which is made with deuterium in place of ordinary water’s hydrogen atoms—the team could measure how closely the PEG particles approached the membrane. They used a technique known as neutron reflectometry at the NIST Center for Neutron Research (NCNR) as well as instruments at Oak Ridge National Laboratory.
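The reason the heavy-water swap works is contrast: deuterium scatters neutrons very differently from ordinary hydrogen, so a hydrogen-rich polymer such as PEG stands out against heavy water. Purely as a back-of-envelope illustration of that contrast (not part of the team's analysis), the scattering length densities of ordinary and heavy water can be estimated from standard tabulated bound coherent scattering lengths:

```python
# Back-of-envelope neutron contrast: why replacing H2O with D2O makes a
# hydrogen-rich polymer such as PEG stand out. The scattering lengths (fm)
# are standard tabulated values; the molecular volume is approximate.
b = {"H": -3.739, "D": 6.671, "O": 5.803}

def sld(atoms, volume_A3):
    """Scattering length density, in units of 1e-6 per square Angstrom,
    for one molecule occupying `volume_A3` cubic Angstroms."""
    total_fm = sum(b[el] * n for el, n in atoms.items())
    return total_fm * 1e-5 / volume_A3 * 1e6   # fm/A^3 -> 1e-6 A^-2

water_volume = 30.0  # approximate molecular volume of water in A^3
print(f"H2O: {sld({'H': 2, 'O': 1}, water_volume):+.2f} x 1e-6 A^-2")
print(f"D2O: {sld({'D': 2, 'O': 1}, water_volume):+.2f} x 1e-6 A^-2")
# H2O comes out slightly negative (about -0.6), D2O strongly positive
# (about +6.4), so hydrogen-rich PEG is clearly visible against heavy water.
```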
Together with molecular dynamics simulations, the experiments revealed the first-ever evidence that the membranes’ powerful field gradients were the culprit behind the repulsion: The PEG molecules were more strongly repelled from charged surfaces than from neutral surfaces.
[…]
More information: Marcel Aguilella-Arzo et al, Charged Biological Membranes Repel Large Neutral Molecules by Surface Dielectrophoresis and Counterion Pressure, Journal of the American Chemical Society (2024). DOI: 10.1021/jacs.3c12348. pubs.acs.org/doi/full/10.1021/jacs.3c12348
Textbook models will need to be re-drawn after a team of researchers found that water molecules at the surface of salt water are organised differently than previously thought.
Many important reactions related to climate and environmental processes take place where water molecules interface with air. For example, the evaporation of ocean water plays an important role in atmospheric chemistry and climate science. Understanding these reactions is crucial to efforts to mitigate the human effect on our planet.
The distribution of ions at the interface of air and water can affect atmospheric processes. However, the precise microscopic picture of these important interfaces has so far been intensely debated.
In a paper published today in the journal Nature Chemistry, researchers from the University of Cambridge and the Max Planck Institute for Polymer Research in Germany show that ions and water molecules at the surface of most salt-water solutions, known as electrolyte solutions, are organised in a completely different way than traditionally understood. This could lead to better atmospheric chemistry models and other applications.
[…]
The combined results showed that both positively charged ions, called cations, and negatively charged ions, called anions, are depleted from the water/air interface. The cations and anions of simple electrolytes orient water molecules in both up- and down-orientation. This is a reversal of textbook models, which teach that ions form an electrical double layer and orient water molecules in only one direction.
Co-first author Dr Yair Litman, from the Yusuf Hamied Department of Chemistry, said: “Our work demonstrates that the surface of simple electrolyte solutions has a different ion distribution than previously thought and that the ion-enriched subsurface determines how the interface is organised: at the very top there are a few layers of pure water, then an ion-rich layer, then finally the bulk salt solution.”
Co-first author Dr Kuo-Yang Chiang of the Max Planck Institute said: “This paper shows that combining high-level HD-VSFG with simulations is an invaluable tool that will contribute to the molecular-level understanding of liquid interfaces.”
Professor Mischa Bonn, who heads the Molecular Spectroscopy department of the Max Planck Institute, added: “These types of interfaces occur everywhere on the planet, so studying them not only helps our fundamental understanding but can also lead to better devices and technologies. We are applying these same methods to study solid/liquid interfaces, which could have potential applications in batteries and energy storage.”
[…] few have looked at the effects on wildlife at the population level. Enter Erik Katovich, an economist at the University of Geneva. Dr Katovich made use of the Christmas Bird Count, a citizen-science project run by the National Audubon Society, an American non-profit outfit. Volunteers count birds they spot over Christmas, and the society compiles the numbers. Its records stretch back over a century.
Dr Katovich assumed, reasonably, that if wind turbines harmed bird populations, then the numbers seen in the Christmas Bird Count would drop in places where new turbines had been built. He combined bird population and species maps with the locations and construction dates of all wind turbines in the United States, with the exceptions of Alaska and Hawaii, between 2000 and 2020. He found that building turbines had no discernible effect on bird populations. That reassuring finding held even when he looked specifically at large birds like hawks, vultures and eagles that many people believe are particularly vulnerable to being struck.
But Dr Katovich did not confine his analysis to wind power alone. He also examined oil-and-gas extraction.
[…]
Comparing bird populations to the locations of new gas wells revealed an average 15% drop in bird numbers when new wells were drilled, probably due to a combination of noise, air pollution and the disturbance of rivers and ponds that many birds rely upon. When drilling happens in places designated by the National Audubon Society as “important bird areas”, bird numbers instead dropped by 25%. Such places are typically migration hubs, feeding grounds or breeding locations.
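The comparison behind these numbers is, at its core, a before-and-after analysis of counts around each site. Purely as a hypothetical illustration, with made-up data and column names and none of the controls for site, year and species effects that Dr Katovich's econometric model includes, it might look something like this:

```python
import pandas as pd

# Minimal sketch of the before/after comparison described above, using
# invented counts. Site A gets a turbine in 2008, site B gets a gas well.
counts = pd.DataFrame({
    "site": ["A", "A", "A", "B", "B", "B"],
    "year": [2005, 2010, 2015, 2005, 2010, 2015],
    "birds": [120, 118, 121, 95, 60, 58],
})
turbine_built = {"A": 2008}
well_drilled = {"B": 2008}

def mean_change(counts, events):
    """Percentage change in mean bird counts before vs after each event."""
    rows = []
    for site, year in events.items():
        d = counts[counts.site == site]
        before = d[d.year < year].birds.mean()
        after = d[d.year >= year].birds.mean()
        rows.append((site, (after - before) / before * 100))
    return rows

print(mean_change(counts, turbine_built))  # roughly no change at the turbine site
print(mean_change(counts, well_drilled))   # a clear drop at the gas-well site
```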
Wind power, in other words, not only produces far less planet-heating carbon dioxide and methane than do fossil fuels. It appears to be significantly less damaging to wildlife, too. Yet that is not the impression you would get from reading the news. Dr Katovich found 173 stories in major American news outlets reporting the supposed negative effects that wind turbines have on birds in 2020, compared with only 46 stories discussing the effects of oil-and-gas wells. Wind turbines might look dramatic. But their effect on birds is not.
A new study published in The Astrophysical Journal reveals new evidence for standard gravity breaking down in an idiosyncratic manner at low acceleration. This new study reinforces the evidence for modified gravity that was previously reported in 2023 from an analysis of the orbital motions of gravitationally bound, widely separated (or long-period) binary stars, known as wide binaries.
The new study was carried out by Kyu-Hyun Chae, a professor of physics and astronomy at Sejong University in Seoul, South Korea, with wide binaries observed by European Space Agency’s Gaia space telescope.
Gravitational anomalies reported in 2023 by Chae’s study of wide binaries have a distinctive signature: orbital motions in binaries experience larger accelerations than Newtonian predictions once the mutual gravitational acceleration is weaker than about 1 nanometer per second squared, and the acceleration boost factor reaches about 1.4 at accelerations below about 0.1 nanometer per second squared.
This elevated acceleration in wide binaries cannot be explained by invoking undetected dark matter, because the required dark matter density is ruled out by galactic dynamics and cosmological observations.
Remarkably, the elevated acceleration agrees well with what MOND (modified Newtonian dynamics)-type modified gravity theories such as AQUAL predict under the external field effect of the Milky Way. The MOND paradigm was suggested by physicist Mordehai Milgrom and the AQUAL theory was formulated by him and the late physicist Jacob Bekenstein 40 years ago.
Because gravitationally-bound astrophysical systems such as galaxies and galaxy clusters and the universe itself are governed by gravity, the breakdown of standard gravity at low acceleration has profound implications for astrophysics and cosmology.
[…]
Chae conservatively selected up to 2,463 pure binaries, which are less than 10% of the sample used in the earlier study. Since the expected fraction of pure binaries among apparently binary systems is at least 50%, this much lower fraction means that the selection was sufficiently strict.
Chae applied two algorithms to test gravity from the sample of pure binaries. In one algorithm that was originally developed from the earlier work for general or “impure” samples, he used a Monte Carlo method to calculate (the probability distribution of) the observed kinematic acceleration, defined by relative velocity squared over the physical separation in the real three-dimensional space, as a function of the Newtonian gravitational acceleration between the two stars and then compared it with the corresponding Newtonian prediction of the kinematic acceleration.
In the other algorithm that is simpler and suitable for pure binaries, Chae compared the observed distribution of the sky-projected relative velocities between the two stars with respect to the sky-projected separations with the Newton-predicted distribution through a Monte Carlo method.
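The quantity at the heart of both comparisons is the kinematic acceleration, the relative velocity squared divided by the separation, set against the Newtonian acceleration. The toy numpy sketch below uses fabricated circular-orbit data with an injected 20 per cent velocity boost simply to show how a velocity boost of that size translates into the reported acceleration boost of about 1.4; it is not Chae's analysis, which works with projected, observed quantities and full Monte Carlo deprojection.

```python
import numpy as np

G = 6.674e-11          # m^3 kg^-1 s^-2
M_SUN = 1.989e30       # kg
AU = 1.496e11          # m

# Hypothetical wide-binary sample: total masses, 3-D separations and 3-D
# relative speeds. (In reality only projected quantities are observed,
# which is why the published analysis needs Monte Carlo deprojection.)
rng = np.random.default_rng(1)
m_tot = rng.uniform(1.0, 2.0, 1000) * M_SUN
sep = rng.uniform(2_000, 20_000, 1000) * AU
v_newton = np.sqrt(G * m_tot / sep)      # circular-orbit relative speed
v_obs = 1.2 * v_newton                   # toy 20% velocity boost, as reported

g_newton = G * m_tot / sep**2            # Newtonian gravitational acceleration
g_kin = v_obs**2 / sep                   # "kinematic" acceleration, v^2 / r
boost = g_kin / g_newton
print(f"median acceleration boost: {np.median(boost):.2f}")
# ~1.44, i.e. the 40-50% acceleration boost that corresponds to a ~20% velocity boost
```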
Both algorithms produce consistent results that agree well with the gravitational anomaly reported earlier.
[…]
However, the observed acceleration or relative velocity starts to deviate from the Newtonian prediction at a separation of about 2,000 au (astronomical units) and acceleration of about 1 nanometer per second squared. Then, there is a nearly constant boost of about 40 to 50% in acceleration or 20% boost in relative velocity at separation greater than about 5,000 au or acceleration lower than about 0.1 nanometer per second squared, up to the probed limit of about 20,000 au or 0.01 nanometer per second squared.
Chae’s new results agree well with an independent result by Xavier Hernandez’s group that is coincidentally in the production stage at present. This is significant because Hernandez’s group selected their sample completely independent of Chae’s selection and they used an independent algorithm (different from Chae’s two algorithms) based on the full distribution of relative velocities for their pure wide binary pairs.
[…]
Chae also points out that this new sample is explicitly free from any concerns about data quality cuts that have been raised in the literature so far. Chae further addresses the recent contradictory claim by Indranil Banik and co-authors, saying, “Their methodology and results have a lot of problems. Their conclusion is invalid for two main reasons among others.”
“In their sample selection they knowingly excluded Newtonian-regime binaries that are crucial in accurately calibrating the occurrence rate of systems containing hidden additional component(s). Then, they employed a specific statistical algorithm of modeling velocities to infer gravity, the occurrence rate, and other parameters simultaneously, but ignored velocity errors though vital for their algorithm.”
Chae concludes, “At least three independent quantitative analyses by two independent groups reveal essentially the same gravitational anomaly. The gravitational anomaly is real, and a new scientific paradigm shift is on its way.”
The observed gravitational anomaly is remarkably consistent with the MOND-type (Milgromian) gravity phenomenology. However, the underlying theoretical possibilities encompassing the MOND-type gravity phenomenology remain open at present, and this may be welcome news to theoretical physicists and mathematicians.
[…]
More information: Kyu-Hyun Chae, Robust Evidence for the Breakdown of Standard Gravity at Low Acceleration from Statistically Pure Binaries Free of Hidden Companions, The Astrophysical Journal (2024). DOI: 10.3847/1538-4357/ad0ed5
Curceanu hopes the apparatus and methods of nuclear physics can solve the century-old mystery of why lentils – and other organisms too – constantly emit an extremely weak dribble of photons, or particles of light. Some reckon these “biophotons” are of no consequence. Others insist they are a subtle form of lentil communication. Curceanu leans towards the latter camp – and she has a hunch that the pulses between the pulses might even contain secret quantum signals. “These are only the first steps, but it looks extremely interesting,” she says.
There are already hints that living things make use of quantum phenomena, with inconclusive evidence that they feature in photosynthesis and the way birds navigate, among other things. But lentils, not known for their complex behaviour, would be the most startling example yet of quantum biology, says Michal Cifra at the Czech Academy of Sciences in Prague. “It would be amazing,” says Cifra. “If it’s true.” Since so many organisms emit biophotons, such a discovery might indicate that quantum effects are ubiquitous in nature.
Biophotons
Biophotons have had scientists stumped for precisely a century. In 1923, biologist Alexander Gurwitsch was studying how plant cells divide by placing onion roots near each other. The closer the roots were, the more cell division occurred, suggesting there was some signal alerting the roots to their neighbour’s presence.
[…]
To tease out how the onion roots were signalling, Gurwitsch repeated the experiment with all manner of physical barriers between the roots. Wood, metal, glass and even gelatine dampened cell division to the same level seen in single onion roots. But, to Gurwitsch’s surprise, a quartz divider had no effect. Compared to glass, quartz allows far more ultraviolet rays to pass through. Some kind of weak emission of UV radiation, he concluded, must be responsible.
[…]
Living organisms have long been known to communicate using light. Jellyfish, mushrooms and fireflies, to name just a few, glow or emit bright flashes to ward off enemies or attract a mate. But these obvious signals, known as bioluminescence, are different to the effect Gurwitsch had unearthed. Biophotons are “a very low-intensity light, not visible to the naked eye”, says Curceanu’s collaborator Maurizio Benfatto. In fact, biophotons were so weak that it took until 1954 to develop equipment sensitive enough to decisively confirm Gurwitsch’s idea.
Since then, dozens of research groups have reported cases of biophoton emission having a useful function in plants and even animals. Like onion roots, yeast cells are known to influence the growth rate of their neighbours. And in 2022, Zsolt Pónya and Katalin Somfalvi-Tóth at the University of Kaposvár in Hungary observed biophotons being emitted by sunflowers when they were put under stress, which the researchers hoped to use to precisely monitor these crops. Elsewhere, a review carried out by Roeland Van Wijk and Eduard Van Wijk, now at the research company MELUNA in the Netherlands, suggested that biophotons may play a role in various human health conditions, from ageing to acne.
There is a simple explanation for how biophotons are created, too. During normal metabolism, chemical reactions in cells end up converting biomolecules to what researchers call an excited state, in which electrons are elevated to higher energy levels. Those electrons then naturally drop back to their ground state and emit a photon in the process. Because germinating seeds, like lentils, burn energy quickly to grow, they emit more biophotons.
Today, no one doubts that biophotons exist. Rather, the dispute is over whether lentils and other organisms have harnessed biophotons in a useful way.
[…]
We know that plants communicate using chemicals and sometimes even emit ultrasonic squeaks when stressed. This allows them to control their growth, warn each other about invading insects and attract pollinators. We also know they have ways of detecting and responding to photons in the form of regular sunlight. “Biological systems can detect photons and have feedback loops based on that,”
[…]
Curceanu and Benfatto are hoping that the application of serious physics equipment to this problem could finally let us eavesdrop on the legume’s secrets. They typically use supersensitive detectors to probe the foundations of reality. Now, they are applying these to a box of 75 lentil seeds – they need that many because if they used any fewer, the biophoton signals would be too weak.
[…]
Years ago, Benfatto came across a paper on biophotons and noticed there appeared to be patterns in the way they were produced. The intensity would swell, then fall away, almost like music. This gave him the idea of applying a method from physics called diffusion entropy analysis to investigate these patterns. The method provides a means of characterising the mathematical structures that underlie complex patterns. Imagine comparing a simple drumbeat with the melody of a pop song, for example – the method Benfatto wanted to apply could quantify the complexity embodied in each.
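The core of diffusion entropy analysis is simple to state: turn the record of photon emissions into a family of "diffusion" variables (the number of events in windows of growing length) and track how the Shannon entropy of their distribution grows with window length; the growth rate is the scaling exponent that characterises the complexity. Here is a minimal Python sketch of that idea applied to a synthetic, purely random photon record; it is only an illustration of the method, not the group's analysis code.

```python
import numpy as np

def diffusion_entropy(events, window_lengths):
    """Diffusion entropy analysis of a 0/1 event record.
    For each window length t, count the events in all overlapping windows
    of that length and return the Shannon entropy of the count distribution.
    For scaling processes, S(t) grows as const + delta * ln(t)."""
    entropies = []
    csum = np.concatenate(([0.0], np.cumsum(events)))
    for t in window_lengths:
        x = csum[t:] - csum[:-t]                           # counts in windows of length t
        edges = np.arange(x.min() - 0.5, x.max() + 1.5)    # unit-width bins (counts are integers)
        p, _ = np.histogram(x, bins=edges, density=True)
        p = p[p > 0]
        entropies.append(-np.sum(p * np.log(p)))           # Shannon entropy in nats
    return np.array(entropies)

# Toy photon record: uncorrelated (Poisson-like) emission events.
rng = np.random.default_rng(0)
events = (rng.random(20_000) < 0.05).astype(float)
ts = np.array([10, 20, 40, 80, 160, 320])
S = diffusion_entropy(events, ts)
delta = np.polyfit(np.log(ts), S, 1)[0]
print(f"scaling exponent delta ~ {delta:.2f}")   # close to 0.5 for uncorrelated emission
```

For uncorrelated emission the exponent comes out near 0.5; patterned, music-like emission of the kind Benfatto describes pushes it away from that value.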
To apply this to the lentils, Benfatto, Curceanu and their colleagues put their seeds in a black box that shielded them from interference. Outside the box, they mounted an instrument capable of detecting single biophotons. They also had rotating filters that allowed them to detect photons with different wavelengths. All that remained was to set the lentils growing. “We add water and then we wait,” says Benfatto.
In 2021, they unveiled their initial findings. It turned out that the biophotons’ signals changed significantly during the lentils’ germination. During the first phase, the photons were emitted in a pattern that repeatedly reset, like a piece of music changing tempo. Then, during the second phase, the emissions took the form of another kind of complex pattern called fractional Brownian motion.
Are these germinating lentils communicating in quantum code?
Catalina Curceanu
The fact that the lentils’ biophoton emissions aren’t random is an indication that they could be communicating, says Benfatto. And that’s not all. Tantalisingly, the complexity in the second phase of the emissions is mathematically related to the equations of quantum mechanics. For this reason, Benfatto says his team’s work hints that signals displaying quantum coherence could have a role in directing lentil germination.
[…]
Part of the problem with designing experiments like these is that we don’t really know what quantum mechanical effects in living organisms look like. Any quantum effects discovered in lentils and other organisms would be “very different to textbook quantum mechanics”, says Scholes.
[…]
so far, the evidence for quantum lentils is sketchy. Still, he is pushing ahead with a new experimental design that makes the signal-to-noise ratio 100 times better. If you want to earwig on the clandestine whispers of these seeds, it might just help to get rid of their noisy neighbours, which is why he will study one germinating lentil at a time.
It sounds like a simple, well-known everyday phenomenon: there is high pressure in a champagne bottle, the stopper is driven outwards by the compressed gas in the bottle and flies away with a powerful pop. But the physics behind this is complicated.
[…]
Using complex computer simulations, the researchers were able to recalculate the behavior of the stopper and the gas flow.
In the process, astonishing phenomena were discovered: a supersonic shock wave is formed and the gas flow can reach more than one and a half times the speed of sound. The results, which appear on the pre-print server arXiv,
[…]
“The champagne cork itself flies away at a comparatively low speed, reaching perhaps 20 meters per second,”
[…]
“However, the gas that flows out of the bottle is much faster,” says Wagner. “It overtakes the cork, flows past it and reaches speeds of up to 400 meters per second.”
That is faster than the speed of sound. The gas jet therefore breaks the sound barrier shortly after the bottle is opened—and this is accompanied by a shock wave.
[…]
“Then there are jumps in these variables, so-called discontinuities,” says Bernhard Scheichl (TU Vienna & AC2T), Lukas Wagner’s dissertation supervisor. “The pressure or velocity in front of the shock wave then has a completely different value than just behind it.”
This point in the gas jet, where the pressure changes abruptly, is also known as the “Mach disk.” “Very similar phenomena are also known from supersonic aircraft or rockets, where the exhaust jet exits the engines at high speed,”
[…]
The Mach disk first forms between the bottle and the cork and then moves back towards the bottle opening.
Temporarily colder than the North Pole
Not only the gas pressure, but also the temperature changes abruptly: “When gas expands, it becomes cooler, as we know from spray cans,” explains Lukas Wagner. This effect is very pronounced in the champagne bottle: the gas can cool down to -130°C at certain points. It can even happen that tiny dry ice crystals are formed from the CO2 that makes the sparkling wine bubble.
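A rough back-of-envelope check of both numbers, assuming ideal-gas isentropic expansion of CO2 from a typical bottle pressure of about 7.5 bar at 20°C down to atmospheric pressure, lands in the same ballpark; the bottle pressure and the ideal-gas treatment are assumptions for the estimate, and the full simulations are needed for the extreme local values such as -130°C.

```python
import math

# Rough isentropic estimate of the escaping CO2 jet. The bottle pressure
# (~7.5 bar at 20 degC) and ideal-gas behaviour are assumptions; the paper's
# simulations resolve the full unsteady flow instead of this one-line estimate.
gamma = 1.3                    # heat-capacity ratio of CO2
R = 189.0                      # J/(kg K), specific gas constant of CO2
cp = gamma * R / (gamma - 1)   # specific heat at constant pressure
T0 = 293.0                     # K, wine temperature
p0, p_atm = 7.5e5, 1.0e5       # Pa

T_exit = T0 * (p_atm / p0) ** ((gamma - 1) / gamma)   # isentropic temperature drop
v_exit = math.sqrt(2 * cp * (T0 - T_exit))            # enthalpy drop -> kinetic energy
a_exit = math.sqrt(gamma * R * T_exit)                # local speed of sound

print(f"jet temperature ~ {T_exit - 273:.0f} degC")   # around -90 degC
print(f"jet speed       ~ {v_exit:.0f} m/s")          # around 420 m/s
print(f"Mach number     ~ {v_exit / a_exit:.1f}")     # comfortably supersonic
```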
“This effect depends on the original temperature of the sparkling wine,” says Lukas Wagner. “Different temperatures lead to dry ice crystals of different sizes, which then scatter light in different ways. This results in variously colored smoke. In principle, you can measure the temperature of the sparkling wine by just looking at the color of the smoke.”
[…]
The audible pop when the bottle is opened is a combination of different effects: Firstly, the cork expands abruptly as soon as it has left the bottle, creating a pressure wave, and secondly, you can hear the shock wave, generated by the supersonic gas jet—very similar to the well-known aeroacoustic phenomenon of the sonic boom.
[…]
More information: Lukas Wagner et al, Simulating the opening of a champagne bottle, arXiv (2023). DOI: 10.48550/arxiv.2312.12271
The typical strategy when treating microbial infections is to blast the pathogen with an antibiotic drug, which works by getting inside the harmful cell and killing it. This is not as easy as it sounds, because any new antibiotic needs to be both water soluble, so that it can travel easily through the bloodstream, and oily, in order to cross the pathogenic cell’s first line of defense, the cellular membrane. Water and oil, of course, don’t mix, and it’s difficult to design a drug that has enough of both characteristics to be effective.
The difficulty doesn’t stop there, either, because pathogenic cells have developed something called an “efflux pump” that can recognize antibiotics and then safely excrete them from the cell, where they can’t do any harm. If the antibiotic can’t overcome the efflux pump and kill the cell, then the pathogen “remembers” what that specific antibiotic looks like and develops additional efflux pumps to efficiently handle it—in effect, becoming resistant to that particular antibiotic.
One path forward is to find a new antibiotic, or combinations of them, and try to stay one step ahead of the superbugs.
“Or, we can shift our strategy,” says Alejandro Heuck, associate professor of biochemistry and molecular biology at UMass Amherst and the paper’s senior author.
[…]
Like the pathogenic cell, host cells also have thick, difficult-to-penetrate cell walls. In order to breach them, pathogens have developed a syringe-like machine that first secretes two proteins, known as PopD and PopB. Neither PopD nor PopB individually can breach the cell wall, but the two proteins together can create a “translocon”—the cellular equivalent of a tunnel through the cell membrane. Once the tunnel is established, the pathogenic cell can inject other proteins that do the work of infecting the host.
This entire process is called the Type 3 secretion system—and none of it works without both PopB and PopD. “If we don’t try to kill the pathogen,” says Heuck, “then there’s no chance for it to develop resistance. We’re just sabotaging its machine. The pathogen is still alive; it’s just ineffective, and the host has time to use its natural defenses to get rid of the pathogen.”
[…]
Heuck and his colleagues realized that an enzyme class called the luciferases—similar to the ones that cause lightning bugs to glow at night—could be used as a tracer. They split the enzyme into two halves. One half went into the PopD/PopB proteins, and the other half was engineered into a host cell.
These engineered proteins and hosts can be flooded with different chemical compounds. If the host cell suddenly lights up, that means that PopD/PopB successfully breached the cellular wall, reuniting the two halves of the luciferase, causing them to glow. But if the cells stay dark? “Then we know which molecules break the translocon,” says Heuck.
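In practice, a screen like this reduces to comparing each well's luminescence against controls and flagging the wells that stay dark. The snippet below is a minimal, hypothetical sketch of that scoring step; the well names, readings and threshold are invented, and the actual assay and controls are described in the ACS Infectious Diseases paper cited below.

```python
# Minimal sketch of scoring a plate-based split-luciferase screen.
# Well names, readings and the 20% threshold are made up for illustration.
raw_luminescence = {
    "DMSO control": 52_000,   # translocon assembles, luciferase halves reunite: bright
    "compound_A": 48_500,     # still bright: translocon unaffected
    "compound_B": 3_200,      # dark: candidate translocon blocker
    "compound_C": 41_000,
}

control = raw_luminescence["DMSO control"]
threshold = 0.2 * control     # arbitrary cut-off: below 20% of control counts as "dark"

hits = [name for name, signal in raw_luminescence.items()
        if name != "DMSO control" and signal < threshold]
print("candidate translocon inhibitors:", hits)   # ['compound_B']
```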
Heuck is quick to point out that his team’s research has not only obvious applications in the world of pharmaceuticals and public health, but that it also advances our understanding of exactly how microbes infect healthy cells. “We wanted to study how pathogens worked,” he says, “and then suddenly we discovered that our findings can help solve a public-health problem.”
More information: Hanling Guo et al, Cell-Based Assay to Determine Type 3 Secretion System Translocon Assembly in Pseudomonas aeruginosa Using Split Luciferase, ACS Infectious Diseases (2023). DOI: 10.1021/acsinfecdis.3c00482
According to a survey of U.S. adults, Americans in October 2023 were less likely to view approved vaccines as safe than they were in April 2021. As vaccine confidence falls, health misinformation continues to spread like wildfire on social media and in real life.
In my view, we should not underestimate the dangers of health misinformation and the need to understand why it spreads and what we can do about it. Health misinformation is defined as any health-related claim that is false based on current scientific consensus. Common examples include the following claims:
Concerns with the COVID-19 vaccine leading to infertility. This connection has been debunked through a systematic review and meta-analysis, one of the most robust forms of synthesizing scientific evidence.
Safety concerns about vaccine ingredients, such as thimerosal, aluminum and formaldehyde. Extensive studies have shown these ingredients are safe when used in the minimal amounts contained in vaccines.
Vaccines as medically unnecessary to protect from disease. The development and dissemination of vaccines for life-threatening diseases such as smallpox, polio, measles, mumps, rubella and the flu has saved millions of lives. It also played a critical role in historic increases in average life expectancy – from 47 years in 1900 in the U.S. to 76 years in 2023.
The costs of health misinformation
Beliefs in such myths have come at the highest cost.
An estimated 319,000 COVID-19 deaths that occurred between January 2021 and April 2022 in the U.S. could have been prevented if those individuals had been vaccinated, according to a data dashboard from the Brown University School of Public Health. Misinformation and disinformation about COVID-19 vaccines alone have cost the U.S. economy an estimated US$50 million to $300 million per day in direct costs from hospitalizations, long-term illness, lives lost and economic losses from missed work.
Though vaccine myths and misunderstandings tend to dominate conversations about health, there is an abundance of misinformation on social media surrounding diets and eating disorders, smoking or substance use, chronic diseases and medical treatments.
For example, an analysis of Instagram and TikTok posts from 2022 to 2023 by The Washington Post and the nonprofit news site The Examination found that the food, beverage and dietary supplement industries paid dozens of registered dietitian influencers to post content promoting diet soda, sugar and supplements, reaching millions of viewers. The dietitians’ relationships with the food industry were not always made clear to viewers.
The lack of trust is both fueled and reinforced by the way misinformation can spread today. Social media platforms allow people to form information silos with ease; you can curate your networks and your feed by unfollowing or muting views that contradict your own, and by liking and sharing content that aligns with your existing beliefs and value systems.
By tailoring content based on past interactions, social media algorithms can unintentionally limit your exposure to diverse perspectives and generate a fragmented and incomplete understanding of information. Even more concerning, a study of misinformation spread on Twitter analyzing data from 2006 to 2017 found that falsehoods were 70% more likely to be shared than the truth and spread “further, faster, deeper and more broadly than the truth” across all categories of information.
How to combat misinformation
The lack of robust and standardized regulation of misinformation content on social media places the difficult task of discerning what is true or false information on individual users. We scientists and research entities can also do better in communicating our science and rebuilding trust, as my colleague and I have previously written. I also provide peer-reviewed recommendations for the important roles that parents/caregivers, policymakers and social media companies can play.
Below are some steps that consumers can take to identify and prevent health misinformation spread:
Check the source. Determine the credibility of the health information by checking if the source is a reputable organization or agency such as the World Health Organization, the National Institutes of Health or the Centers for Disease Control and Prevention. Other credible sources include an established medical or scientific institution or a peer-reviewed study in an academic journal. Be cautious of information that comes from unknown or biased sources.
Examine author credentials. Look for qualifications, expertise and relevant professional affiliations for the author or authors presenting the information. Be wary if author information is missing or difficult to verify.
Pay attention to the date. Scientific knowledge by design is meant to evolve as new evidence emerges. Outdated information may not be the most accurate. Look for recent data and updates that contextualize findings within the broader field.
Cross-reference to determine scientific consensus. Cross-reference information across multiple reliable sources. Strong consensus across experts and multiple scientific studies supports the validity of health information. If a health claim on social media contradicts widely accepted scientific consensus and stems from unknown or unreputable sources, it is likely unreliable.
Question sensational claims. Misleading health information often uses sensational language designed to provoke strong emotions to grab attention. Phrases like “miracle cure,” “secret remedy” or “guaranteed results” may signal exaggeration. Be alert for potential conflicts of interest and sponsored content.
Weigh scientific evidence over individual anecdotes. Prioritize information grounded in scientific studies that have undergone rigorous research methods, such as randomized controlled trials, peer review and validation. When done well with representative samples, the scientific process provides a reliable foundation for health recommendations compared to individual anecdotes. Though personal stories can be compelling, they should not be the sole basis for health decisions.
Talk with a health care professional. If health information is confusing or contradictory, seek guidance from trusted health care providers who can offer personalized advice based on their expertise and individual health needs.
When in doubt, don’t share. Sharing health claims without validity or verification contributes to misinformation spread and preventable harm.
All of us can play a part in responsibly consuming and sharing information so that the spread of the truth outpaces the false.
Monica Wang receives funding from the National Institutes of Health.
Practicing yoga nidra — a kind of mindfulness training — might improve sleep, cognition, learning, and memory, even in novices, according to a pilot study publishing in the open-access journal PLOS ONE on December 13 by Karuna Datta of the Armed Forces Medical College in India, and colleagues. After a two-week intervention with a cohort of novice practitioners, the researchers found that the percentage of delta-waves in deep sleep increased and that all tested cognitive abilities improved.
Unlike more active forms of yoga, which focus on physical postures, breathing, and muscle control, yoga nidra guides people into a state of conscious relaxation while they are lying down. While it has been reported to improve sleep and cognitive ability, those reports were based more on subjective measures than on objective data. The new study used objective polysomnographic measures of sleep and a battery of cognitive tests. Measurements were taken before and after two weeks of yoga nidra practice, which was carried out during the daytime using a 20 minute audio recording.
Among other things, polysomnography measures brain activity to determine how long each sleep stage lasts and how frequently each stage occurs. After two weeks of yoga nidra, the researchers observed that participants exhibited a significantly increased sleep efficiency and percentage of delta-waves in deep sleep. They also saw faster responses in all cognitive tests with no loss in accuracy and faster and more accurate responses in tasks including tests of working memory, abstraction, fear and anger recognition, and spatial learning and memory tasks. The findings support previous studies which link delta-wave sleep to improved sleep quality as well as better attention and memory.
Batteries that exploit quantum phenomena to gain, distribute and store power promise to surpass the abilities and usefulness of conventional chemical batteries in certain low-power applications. For the first time, researchers, including those from the University of Tokyo, take advantage of an unintuitive quantum process that disregards the conventional notion of causality to improve the performance of so-called quantum batteries, bringing this future technology a little closer to reality.
[…]
At present, quantum batteries only exist as laboratory experiments, and researchers around the world are working on the different aspects that are hoped to one day combine into a fully functioning and practical application. Graduate student Yuanbo Chen and Associate Professor Yoshihiko Hasegawa from the Department of Information and Communication Engineering at the University of Tokyo are investigating the best way to charge a quantum battery, and this is where time comes into play. One of the advantages of quantum batteries is that they should be incredibly efficient, but that hinges on the way they are charged.
While it’s still quite a bit bigger than the AA battery you might find around the home, the experimental apparatus acting as a quantum battery demonstrated charging characteristics that could one day improve upon the battery in your smartphone. Credit: Zhu et al, 2023
“Current batteries for low-power devices, such as smartphones or sensors, typically use chemicals such as lithium to store charge, whereas a quantum battery uses microscopic particles like arrays of atoms,” said Chen. “While chemical batteries are governed by classical laws of physics, microscopic particles are quantum in nature, so we have a chance to explore ways of using them that bend or even break our intuitive notions of what takes place at small scales. I’m particularly interested in the way quantum particles can work to violate one of our most fundamental experiences, that of time.”
[…]
the team instead used a novel quantum effect they call indefinite causal order, or ICO. In the classical realm, causality follows a clear path, meaning that if event A leads to event B, then the possibility of B causing A is excluded. However, at the quantum scale, ICO allows both directions of causality to exist in what’s known as a quantum superposition, where both can be simultaneously true.
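The textbook construction behind indefinite causal order is the "quantum switch": a control qubit in superposition decides whether operation A acts before B or B acts before A, and until the control is measured neither order is the case. The numpy sketch below illustrates that construction for two single-qubit gates; it is a conceptual illustration only, not the charging protocol used in the study.

```python
import numpy as np

# Quantum switch: a control qubit in superposition determines whether A acts
# before B or B before A on a target qubit. Conceptual illustration of ICO,
# not the experiment's charging scheme.
A = np.array([[0, 1], [1, 0]], dtype=complex)    # Pauli X
B = np.array([[1, 0], [0, 1j]], dtype=complex)   # phase gate S

psi_target = np.array([1, 0], dtype=complex)             # target starts in |0>
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)      # control in |+>

# |0>_c branch applies B after A, |1>_c branch applies A after B.
branch0 = B @ A @ psi_target
branch1 = A @ B @ psi_target
state = (np.kron(np.array([1, 0]), branch0) +
         np.kron(np.array([0, 1]), branch1)) / np.sqrt(2)   # entangled control+target

# Projecting the control onto |+> leaves the target acted on by (BA + AB)/2:
# a process with no definite causal order between A and B.
proj_plus = np.kron(plus, np.eye(2))          # <+|_c tensor identity on the target
target_after = proj_plus @ state
print("target after <+| on control:     ", target_after)
print("(BA + AB)/2 acting on |0> gives: ", (B @ A + A @ B) / 2 @ psi_target)
```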
Common intuition suggests that a more powerful charger results in a battery with a stronger charge. However, the discovery stemming from ICO introduces a remarkable reversal in this relationship; now, it becomes possible to charge a more energetic battery with significantly less power. Credit: Chen et al, 2023
“With ICO, we demonstrated that the way you charge a battery made up of quantum particles could drastically impact its performance,” said Chen. “We saw huge gains in both the energy stored in the system and the thermal efficiency. And somewhat counterintuitively, we discovered the surprising effect of an interaction that’s the inverse of what you might expect: A lower-power charger could provide higher energies with greater efficiency than a comparably higher-power charger using the same apparatus.”
The phenomenon of ICO the team explored could find uses beyond charging a new generation of low-power devices. The underlying principles, including the inverse interaction effect uncovered here, could improve the performance of other tasks involving thermodynamics or processes that involve the transfer of heat. One promising example is solar panels, where heat effects can reduce their efficiency, but ICO could be used to mitigate those and lead to gains in efficiency instead.
At the Zooniverse, anyone can be a researcher. You don’t need any specialised background, training, or expertise to participate in any Zooniverse projects. We make it easy for anyone to contribute to real academic research, on their own computer, at their own convenience.

You’ll be able to study authentic objects of interest gathered by researchers, like images of faraway galaxies, historical records and diaries, or videos of animals in their natural habitats. By answering simple questions about them, you’ll help contribute to our understanding of our world, our history, our Universe, and more.

With our wide-ranging and ever-expanding suite of projects, covering many disciplines and topics across the sciences and humanities, there’s a place for anyone and everyone to explore, learn and have fun in the Zooniverse. To volunteer with us, just go to the Projects page, choose one you like the look of, and get started.
[…]
Zooniverse projects are constructed with the aim of converting volunteers’ efforts into measurable results. These projects have produced a large number of published research papers, as well as several open-source sets of analyzed data. In some cases, Zooniverse volunteers have even made completely unexpected and scientifically significant discoveries.
A significant amount of this research takes place on the Zooniverse discussion boards, where volunteers can work together with each other and with the research teams. These boards are integrated with each project to allow for everything from quick hashtagging to in-depth collaborative analysis. There is also a central Zooniverse board for general chat and discussion about Zooniverse-wide matters.
Many of the most interesting discoveries from Zooniverse projects have come from discussion between volunteers and researchers. We encourage all users to join the conversation on the discussion boards for more in-depth participation.
A new study has identified a potentially growing natural hazard in the north: frostquakes. With climate change contributing to many observed changes in weather extremes, such as heavy precipitation and cold waves, these seismic events could become more common. Researchers were surprised by the role that wetlands, and drainage channels in irrigated wetlands, play in the origin of frostquakes.
Frostquakes are seismic events caused by the rapid freezing of water in the ground. They are most common during extreme winter conditions, when wet, snow-free ground freezes rapidly. They have been reported in northern Finland in 2016, 2019 and 2022, as well as in Chicago in 2019 and Ottawa in 2022, among others.
Roads and other areas cleared of snow in winter are particularly vulnerable to frostquakes.
[…]
“We found that during the winter of 2022–2023, the main sources of frostquakes in Oulu, Finland were actually swamps, wetlands and areas with high water tables or other places where water accumulates,” says Elena Kozlovskaya, Professor of applied geophysics at the University of Oulu Mining School.
When water in the ground, accumulated during heavy autumn rainfall or snowmelt during warm winter weather, freezes and expands rapidly, it causes cracks in the ground, accompanied by tremors and booms. When they occur in populated areas, frostquakes, or cryoseisms, are felt by people and can be accompanied by distinctive noises. Ground motions during frostquakes are comparable to those of other seismic events, such as more distant earthquakes, mining explosions and vibrations produced by freight trains. Frostquakes are also a known phenomenon in permafrost regions.
The new study, currently available as a preprint and set to be published in the journal EGUsphere, is the first applied study of seismic events originating from marsh and wetland areas. Researchers from the University of Oulu, Finland and the Geological Survey of Finland (GTK) showed that fracturing in the uppermost frozen ground can be initiated once the frozen layer is about 5 cm thick or more. Ruptures can propagate deeper and damage infrastructure such as buildings, basements, pipelines and roads.
“With climate change, rapid changes in weather patterns have brought frostquakes to the attention of a wider audience, and they may become more common. Although their intensity is usually low, a series of relatively strong frostquakes in Oulu in 2016, which ruptured roads, was the starting point for our research.
[…]
During several days when the air temperature was decreasing rapidly, local residents reported ground tremors and unusual sounds to the researchers. These observations were used to identify frostquakes in the seismic data. The conditions for a frostquake are favorable when the temperature drops below -20°C at a rate of about one degree per hour.
There are many wetlands close to the seismic stations in Oulu, near the residential areas where the main sources of frostquakes were detected. In Sodankylä, frostquakes were additionally caused by ice fracturing in the Kitinen river. “Frostquakes have often occurred in January, but other times are also possible,” says Moisio.
During frostquakes, seismic surface waves produce high ground accelerations at distances of up to hundreds of meters. “The fractures during frostquakes seem to propagate along drainage channels near roads and in irrigated wetlands,” Kozlovskaya says.
Irrigated wetlands and drainage channels are also abundant around residential areas.
[…]
Further studies will help to identify areas at risk of frostquakes, which will help to prepare and protect the built environment from this specific natural hazard. Researchers at the University of Oulu and GTK aim to create a system that could predict frostquakes based on soil analysis and satellite data.
More information: Nikita Afonin et al, Frost quakes in wetlands in northern Finland during extreme winter weather conditions and related hazard to urban infrastructure (2023). DOI: 10.5194/egusphere-2023-1853
Balls of human brain cells linked to a computer have been used to perform a very basic form of speech recognition. The hope is that such systems will use far less energy for AI tasks than silicon chips.
“This is just proof-of-concept to show we can do the job,” says Feng Guo at Indiana University Bloomington. “We do have a long way to go.”
Brain organoids are lumps of nerve cells that form when stem cells are grown in certain conditions. “They are like mini-brains,” says Guo.
It takes two or three months to grow the organoids, which are a few millimetres wide and consist of as many as 100 million nerve cells, he says. Human brains contain around 100 billion nerve cells.
The organoids are then placed on top of a microelectrode array, which is used both to send electrical signals to the organoid and to detect when nerve cells fire in response. The team calls its system “Brainoware”.
For the speech recognition task, the organoids had to learn to recognise the voice of one individual from a set of 240 audio clips of eight people pronouncing Japanese vowel sounds. The clips were sent to the organoids as sequences of signals arranged in spatial patterns.
The organoids’ initial responses had an accuracy of around 30 to 40 per cent, says Guo. After training sessions over two days, their accuracy rose to 70 to 80 per cent.
“We call this adaptive learning,” he says. If the organoids were exposed to a drug that stopped new connections forming between nerve cells, there was no improvement.
The training simply involved repeating the audio clips, and no form of feedback was provided to tell the organoids if they were right or wrong, says Guo. This is what is known in AI research as unsupervised learning.
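As a loose illustration of what scoring such a label-free task involves (this is not the Brainoware pipeline, and the "responses" below are synthetic stand-ins rather than electrode recordings), one could group response patterns without labels and then measure how well the groups line up with the eight speakers:

```python
# Illustrative only: score unsupervised speaker grouping on synthetic
# stand-in "responses" (240 clips, 8 speakers), not real organoid data.
import numpy as np
from sklearn.cluster import KMeans
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
n_speakers, clips_per_speaker, n_channels = 8, 30, 32

# Each speaker gets a distinct mean activity pattern plus noise.
centres = rng.normal(size=(n_speakers, n_channels))
speaker = np.repeat(np.arange(n_speakers), clips_per_speaker)
responses = centres[speaker] + 0.5 * rng.normal(size=(speaker.size, n_channels))

# Cluster without using the labels, then score against the best cluster-to-speaker match.
clusters = KMeans(n_clusters=n_speakers, n_init=10, random_state=0).fit_predict(responses)
counts = np.zeros((n_speakers, n_speakers))
for c, s in zip(clusters, speaker):
    counts[c, s] += 1
rows, cols = linear_sum_assignment(-counts)   # maximise correctly matched clips
accuracy = counts[rows, cols].sum() / speaker.size
print(f"speaker identification accuracy: {accuracy:.0%}")
```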
There are two big challenges with conventional AI, says Guo. One is its high energy consumption. The other is the inherent limitations of silicon chips, such as their separation of information and processing.
Titouan Parcollet at the University of Cambridge, who works on conventional speech recognition, doesn’t rule out a role for biocomputing in the long run.
“However, it might also be a mistake to think that we need something like the brain to achieve what deep learning is currently doing,” says Parcollet. “Current deep-learning models are actually much better than any brain on specific and targeted tasks.”
Guo and his team’s task is so simplified that it only identifies who is speaking, not what is being said, he says. “The results aren’t really promising from the speech recognition perspective.”
Even if the performance of Brainoware can be improved, another major issue with it is that the organoids can only be maintained for one or two months, says Guo. His team is working on extending this.
“If we want to harness the computation power of organoids for AI computing, we really need to address those limitations,” he says.
The following is an extract from our Lost in Space-Time newsletter. Each month, we hand over the keyboard to a physicist or two to tell you about fascinating ideas from their corner of the universe. You can sign up for Lost in Space-Time for free here.
Space-time is a curious thing. Look around and it’s easy enough to visualise what the space component is in the abstract. It’s three dimensions: left-right, forwards-backwards and up-down. It’s a graph with an x, y and z axis. Time, too, is easy enough. We’re always moving forwards in time, so we might visualise it as a straight line or one big arrow. Every second is a little nudge forwards.
But space-time, well that’s a little different. Albert Einstein fused space and time together in his theories of relativity. The outcome was a new fabric of reality, a thing called space-time that permeates the universe. How gravity works popped out of the explorations of this new way of thinking. Rather than gravity being a force that somehow operates remotely through space, Einstein proposed that bodies curve space-time, and it is this curvature that causes them to be gravitationally drawn to each other. Our very best descriptions of the cosmos begin with space-time.
Yet, visualising it is next to impossible. The three dimensions of space and one of time give four dimensions in total. But space-time itself is curved, as Einstein proposed. That means to really imagine it, you need a fifth dimension to curve into.
Luckily, all is not lost. There is a mathematical trick to visualising space-time that I’ve come up with. It’s a simplified way of thinking that not only illustrates how space-time can be curved, but also how such curvature can draw bodies towards each other. It can give you new insight into how gravity works in our cosmos.
First, let’s start with a typical way to draw space-time. Pictures like the one below are meant to illustrate Einstein’s idea that gravity arises in the universe from massive objects distorting space-time. Placing a small object, say a marble, near one of these dimples would result in it rolling towards one of the larger objects, in much the same way that gravity pulls objects together.
The mass of different objects in space influences the distortion of space-time (Image: Manil Suri)
However, the diagram is missing a lot. While the objects depicted are three dimensional, the space they’re curving is only two dimensional. Moreover, time seems to have been entirely omitted, so it’s pure space – not space-time – that’s curving.
Here’s my trick to get around this: simplify things by letting space be only one dimensional. This makes the total number of space-time dimensions a more manageable two.
Now we can represent our 1-D space by the double-arrowed horizontal line in the left panel of the diagram below. Let time be represented by the perpendicular direction, giving a two-dimensional space-time plane. This plane then consists of successive snapshots, stacked one on top of the other, of where objects are located in the single space dimension at each instant.
Suppose now there are objects – say particles – at points A and B in our universe. Then if these particles remained at rest, their trajectories through space-time would just be the two parallel paths AA’ and BB’ as shown. This simply represents the fact that for every time instant, the particles remain exactly where they are in 1-D space. Such behaviour is what we’d expect in the absence of gravity or any other forces.
However, if gravity came into play, we would expect the two particles to draw closer to each other as time went on. In other words, A’ would be much closer to B’ than A was to B.
Now what if gravity, as Einstein proposed, wasn’t a force in the usual sense? What if it couldn’t act directly on A and B to bring them closer, but rather, could only cause such an effect by deforming the 2-D space-time plane? Would there be a suitable such deformation that would still result in A’ getting closer to B’?
Diagram of the space-time plane wrapped around a sphere (Image: Manil Suri)
The answer is yes. Were the plane drawn on a rubber sheet, you could stretch it in various ways to easily verify that many such deformations exist. The one we’ll pick (why exactly, we’ll see below) is to wrap the plane around a sphere, as shown in the middle panel. This can be mathematically accomplished by the same method used to project a rectangular map of the world onto a globe. The formula this involves (called the “equirectangular projection”) has been known for almost two millennia: vertical lines on the rectangle correspond to lines of longitude on the sphere and horizontal ones to lines of latitude. You can see from the right panel that A’ has indeed gotten closer to B’, just as we might expect under gravity.
On the plane, the particles follow the shortest paths between A and A’, and B and B’, respectively. These are just straight lines. On the sphere, the trajectories AA’ and BB’ still represent shortest distance paths. This is because the shortest distance between two points on a spherical surface is always along one of the circles of maximal radius (these include, e.g., lines of longitude and the equator). Such curves that produce the shortest distance are called geodesics. So the geodesics AA’ and BB’ on the plane get transformed to corresponding geodesics on the sphere. (This wouldn’t necessarily happen for an arbitrary deformation, which is why we chose our wrapping around the sphere.)
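To put rough numbers on that picture (the coordinates here are chosen purely for illustration), take A and B thirty degrees of longitude apart on the equator of the wrapped sphere, and A′ and B′ at the same longitudes but on a higher-latitude circle representing a later time slice; comparing great-circle separations shows the geodesics converging:

```python
# Sketch of the wrapped-plane picture: horizontal position becomes longitude,
# the time direction becomes latitude. Coordinates are illustrative only.
import math

def great_circle_distance(lon1, lat1, lon2, lat2, radius=1.0):
    """Geodesic distance between two points given in degrees, on a sphere of the given radius."""
    lon1, lat1, lon2, lat2 = map(math.radians, (lon1, lat1, lon2, lat2))
    central_angle = math.acos(
        math.sin(lat1) * math.sin(lat2)
        + math.cos(lat1) * math.cos(lat2) * math.cos(lon2 - lon1)
    )
    return radius * central_angle

d_AB = great_circle_distance(0, 0, 30, 0)      # A and B on the "initial" slice
d_ApBp = great_circle_distance(0, 60, 30, 60)  # A' and B' on a "later" slice
print(f"A-B separation:   {d_AB:.3f}")    # ~0.524
print(f"A'-B' separation: {d_ApBp:.3f}")  # ~0.259, the trajectories have drawn closer
```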
Einstein postulated that particles not subject to external forces will always move through space-time along such “shortest path” geodesics. In the absence of gravity, these geodesics are just straight lines. Gravity, when introduced, isn’t counted as an external force. Rather, its effect is to curve space-time, hence changing the geodesics. The particles now follow these new geodesics, causing them to draw closer.
This is the key visualisation afforded by our simplified description of space-time. We can begin to understand how gravity, rather than being a force that acts mysteriously at a distance, could really be a result of geometry. How it can act to pull objects together via curvature built into space-time.
The above insight was fundamental to Einstein’s incorporation of gravity into his general theory of relativity. The actual theory is much more complicated, since space-time only curves in the local vicinity of bodies, not globally, as in our model. Moreover, the geometry involved must also respect the fact that nothing can travel faster than the speed of light. This effectively means that the concept of “shortest distance” has to also be modified, with the time dimension having to be treated very differently from the space dimensions.
Nevertheless, Einstein’s explanation posits, for instance, that the sun’s mass curves space-time in our solar system. That is why planets revolve around the sun rather than flying off in straight lines – they are just following the curved geodesics in this deformed space-time.
This has been confirmed by measuring how light from distant astronomical sources gets distorted by massive galaxies. Space-time truly is curved in our universe; it’s not just a mathematical convenience.
There’s a classic Buddhist parable about a group of blind men relying only on touch to figure out an animal unfamiliar to them – an elephant. Space-time is our elephant here – we can never hope to see it in its full 4-D form, or watch it curve to cause gravity. But the simplified visualisation presented here can help us better understand it.
The world has reached a pivotal moment as threats from Earth system tipping points – and progress towards positive tipping points – accelerate, a new report shows.
Story highlights
Rapid changes to nature and societies already happening, and more coming
The report makes six key recommendations to change course fast
A cascade of positive tipping points would save millions of lives
Humanity is currently on a disastrous trajectory, according to the Global Tipping Points report, the most comprehensive assessment of tipping points ever conducted.
The report makes six key recommendations to change course fast, including coordinated action to trigger positive tipping points.
Behind the report is an international team of more than 200 scientists, coordinated by the University of Exeter, in partnership with Bezos Earth Fund. Centre researchers David Armstrong McKay, Steven Lade, Laura Pereira, and Johan Rockström have all contributed to the report.
A tipping point occurs when a small change sparks an often rapid and irreversible transformation, and the effects can be positive or negative.
Based on an assessment of 26 negative Earth system tipping points, the report concludes “business as usual” is no longer possible – with rapid changes to nature and societies already happening, and more coming.
With global warming now on course to breach 1.5°C, at least five Earth system tipping points are likely to be triggered – including the collapse of major ice sheets and widespread mortality of warm-water coral reefs.
As Earth system tipping points multiply, there is a risk of catastrophic, global-scale loss of capacity to grow staple crops. Without urgent action to halt the climate and ecological crisis, societies will be overwhelmed as the natural world comes apart.
“Impacts of physical tipping points could trigger social tipping points such as financial destabilization, disruption of social cohesion, and violent conflict that would further amplify impacts on people,” says Centre researcher Steven Lade.
Positive tipping points
But there are ways forward. Emergency global action – accelerated by leaders meeting now at COP28 – can harness positive tipping points and steer us towards a thriving, sustainable future.
The report authors lay out a blueprint for doing this, saying bold, coordinated policies could trigger positive tipping points across multiple sectors including energy, transport, and food.
A cascade of positive tipping points would save millions of lives, billions of people from hardship, trillions of dollars in climate-related damage, and begin restoring the natural world upon which we all depend.
The report’s six key recommendations are to:
Phase out fossil fuels and land-use emissions now, stopping them well before 2050.
Strengthen adaptation and “loss and damage” governance, recognising inequality between and within nations.
Include tipping points in the Global Stocktake (the world’s climate “inventory”) and Nationally Determined Contributions (each country’s efforts to tackle climate change).
Coordinate policy efforts to trigger positive tipping points.
Convene an urgent global summit on tipping points.
Deepen knowledge of tipping points. The research team supports calls for an IPCC Special Report on tipping points.
This report was released at COP28 and is being taken seriously by scientists and journalists alike – as it should be. Action really does need to happen, and it is encouraging that there are positive tipping points we may be able to use to tip the balance in our favour.
Using realistic ecological modeling, scientists led by Western Sydney University’s Jürgen Knauer found that the globe’s vegetation could actually be taking up about 20% more of the CO2 humans have pumped into the atmosphere than simpler models suggest, and will continue to do so through to the end of the century.
“What we found is that a well-established climate model that is used to feed into global climate assessments by the likes of the IPCC (Intergovernmental Panel on Climate Change) predicts stronger and sustained carbon uptake until the end of the 21st century when extended to account for the impact of some critical physiological processes that govern how plants conduct photosynthesis,” said Knauer.
[…]
Current models, the team adds, lack this complexity and so likely underestimate future CO2 uptake by vegetation.
[…]
Taking the well-established Community Atmosphere-Biosphere Land Exchange model (CABLE), the team accounted for three physiological factors […] and found that the most complex version, which accounted for all three factors, predicted the most CO2 uptake, around 20% more than the simplest configuration.
[…]
“Our understanding of key response processes of the carbon cycle, such as plant photosynthesis, has advanced dramatically in recent years,” said Ben Smith, professor and research director of Western Sydney University’s Hawkesbury Institute for the Environment. “It always takes a while for new knowledge to make it into the sophisticated models we rely on to inform climate and emissions policy. Our study demonstrates that fully accounting for the latest science in these models can lead to materially different predictions.”
[…]
And while it’s somewhat good news, the team says plants can’t be expected to do all the heavy lifting; the onus remains on governments to stick to emission reduction obligations. However, the modeling makes a strong case for the value of greening projects and their importance in comprehensive approaches to tackling global warming.
Every clock has two fundamental properties: a certain precision and a certain time resolution. The time resolution indicates how small the time intervals are that can be measured—i.e., how quickly the clock ticks. Precision tells you how much inaccuracy you have to expect with every single tick.
The research team was able to show that since no clock has an infinite amount of energy available (or generates an infinite amount of entropy), it can never have perfect resolution and perfect precision at the same time. This sets fundamental limits to the possibilities of quantum computers.
[…]
Marcus Huber and his team investigated, in general terms, which laws must always apply to every conceivable clock. “Time measurement always has to do with entropy,” explains Marcus Huber. In every closed physical system, entropy increases and the system becomes more and more disordered. It is precisely this development that determines the direction of time: the future is where the entropy is higher, the past is where the entropy is lower.
As can be shown, every measurement of time is inevitably associated with an increase in entropy: a clock, for example, needs a battery, the energy of which is ultimately converted into frictional heat and audible ticking via the clock’s mechanism—a process in which the fairly ordered state of the battery is converted into a rather disordered state of heat radiation and sound.
On this basis, the research team was able to create a mathematical model that basically every conceivable clock must obey. “For a given increase in entropy, there is a tradeoff between time resolution and precision,” says Florian Meier, first author of the second paper, now posted to the arXiv preprint server. “That means: Either the clock works quickly or it works precisely—both are not possible at the same time.”
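The paper derives the precise relationship; purely as a schematic illustration of the trade-off described above — the constraint assumed here is a toy stand-in, not the authors’ actual bound — one can fix an entropy-related budget and see that a faster clock must be a less precise one:

```python
# Schematic toy model only: assume (not the paper's derived bound) that the
# product of precision and resolution is capped by a fixed entropy budget.
entropy_budget = 1000.0                       # arbitrary units

for resolution_ticks_per_s in (1.0, 10.0, 100.0, 1000.0):
    max_precision = entropy_budget / resolution_ticks_per_s
    print(f"{resolution_ticks_per_s:7.1f} ticks/s -> max precision {max_precision:7.1f}")
```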
[…]
“Currently, the accuracy of quantum computers is still limited by other factors, for example, the precision of the components used or electromagnetic fields. But our calculations also show that today we are not far from the regime in which the fundamental limits of time measurement play the decisive role.”
[…]
More information: Florian Meier et al, Fundamental accuracy-resolution trade-off for timekeeping devices, arXiv (2023). DOI: 10.48550/arxiv.2301.05173
Dirty air killed more than half a million people in the EU in 2021, estimates show, and about half of the deaths could have been avoided by cutting pollution to the limits recommended by doctors.
The researchers from the European Environment Agency attributed 253,000 early deaths to concentrations of fine particulates known as PM2.5 that breached the World Health Organization’s maximum guideline limit of 5 µg/m³. A further 52,000 deaths came from excessive levels of nitrogen dioxide and 22,000 deaths from short-term exposure to excessive levels of ozone.
“The figures released today by the EEA remind us that air pollution is still the number one environmental health problem in the EU,” said Virginijus Sinkevičius, the EU’s environment commissioner.
Doctors say air pollution is one of the biggest killers in the world, but death tolls will drop quickly if countries clean up their economies. Between 2005 and 2021, the number of deaths from PM2.5 in the EU fell by 41%, and the EU aims to reach a 55% reduction by the end of the decade.
Researchers at ETH Zurich, a Zurich-based public university, along with a US-based startup called Inkbit, have achieved a first: they have printed a robot hand complete with bones, ligaments and tendons, representing a major leap forward in 3D printing technology. Notably, the various parts of the hand were printed simultaneously rather than cobbled together after the fact, as described in a research paper published in Nature.
Each of the robotic hand’s various parts was made from different polymers of varying softness and rigidity, using a new laser-scanning technique that lets 3D printers create “special plastics with elastic qualities” all in one go. This obviously opens up new possibilities in the fast-moving field of prosthetics, but also in any field that requires the production of soft robotic structures.
Basically, the researchers at Inkbit developed a method to 3D print slow-curing plastics, whereas the technology was previously reserved for fast-curing plastics. This hybrid printing method presents all kinds of advantages when compared to standard fast-cure projects, such as increased durability and enhanced elastic properties. The tech also allows us to mimic nature more accurately, as seen in the aforementioned robotic hand.
“Robots made of soft materials, such as the hand we developed, have advantages over conventional robots made of metal. Because they’re soft, there is less risk of injury when they work with humans, and they are better suited to handling fragile goods,” ETH Zurich robotics professor Robert Katzschmann writes in the study.
This advancement still prints layer by layer, but an integrated scanner constantly checks the surface for irregularities before telling the system to move on to the next material type. Additionally, the extruder and scraper have been updated to allow for the use of slow-curing polymers. The stiffness can be fine-tuned to create unique objects that suit various industries. Making human-like appendages is one use case, but so is manufacturing objects that soak up noise and vibrations.
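As a rough sketch of what such a scan-then-compensate loop amounts to (a toy simulation, not ETH Zurich’s or Inkbit’s actual control software), each deposited layer is measured and the deviation is folded into the plan for the next layer instead of being scraped away:

```python
# Toy simulation of vision-guided layer correction: deposit, scan, compensate.
# Not the actual printer software; numbers are arbitrary.
import numpy as np

rng = np.random.default_rng(1)
n_layers, n_points = 5, 8
target_layer_um = np.full(n_points, 50.0)     # intended thickness of every layer

surface_um = np.zeros(n_points)               # accumulated part height
next_plan_um = target_layer_um.copy()         # material commanded for the next layer
for layer in range(1, n_layers + 1):
    deposited = next_plan_um + rng.normal(0.0, 2.0, size=n_points)  # jetting is imperfect
    surface_um += deposited
    error_um = surface_um - target_layer_um * layer   # scanner: deviation from ideal build
    next_plan_um = target_layer_um - error_um         # compensate next pass, no scraping
    print(f"layer {layer}: max deviation {np.max(np.abs(error_um)):.2f} um")
```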
MIT-affiliated startup Inkbit helped develop this technology and has already begun thinking about how to make money off of it. The company will soon start to sell these newly-made printers to manufacturers but will also sell complex 3D-printed objects that make use of the technology to smaller entities.
Synex Medical, a Toronto-based biotech research firm backed by Sam Altman (the CEO of OpenAI), has developed a tool that can measure your blood glucose levels without a finger prick. It uses a combination of low-field magnets and low-frequency radio waves to directly measure blood sugar levels non-invasively when a user inserts a finger into the device.
The tool uses magnetic resonance spectroscopy (MRS), which is similar to an MRI. Jamie Near, an Associate Professor at the University of Toronto who specializes in research on MRS technology, told Engadget that “[an] MRI uses magnetic fields to make images of the distribution of hydrogen protons in water that is abundant in our body tissues. In MRS, the same basic principles are used to detect other chemicals that contain hydrogen.” When a user’s fingertip is placed inside the magnetic field, the resonance frequency of a specific molecule, in this case glucose, is measured as a shift in parts per million. While the focus was on glucose for this project, MRS could be used to measure other metabolites, according to Synex, including lactate, ketones and amino acids.
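For context, the “parts per million” figure refers to the standard chemical-shift convention used in MRS, in which a resonance’s frequency offset is expressed relative to the operating frequency of the magnet; the field strength and offset below are assumed for illustration and are not Synex’s specifications:

```python
# Standard MRS chemical-shift arithmetic (illustrative numbers, not Synex data):
# a frequency offset is expressed in parts per million of the operating frequency.

proton_mhz_per_tesla = 42.577          # proton gyromagnetic ratio
field_tesla = 0.1                      # assumed low-field magnet strength
operating_hz = proton_mhz_per_tesla * 1e6 * field_tesla

offset_hz = 15.0                       # assumed offset of a glucose resonance from the reference
shift_ppm = offset_hz / operating_hz * 1e6
print(f"chemical shift: {shift_ppm:.2f} ppm")   # ~3.5 ppm, the range where glucose protons resonate
```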
[…]
“MRI machines can fit an entire human body and have been used to target molecule concentrations in the brain through localized spectroscopy,” he explained. “Synex has shrunk this technology to measure concentrations in a finger. I have reviewed their white paper and seen the instrument work.” Simpson said Synex’s ability to retrofit MRS technology into a small box is an engineering feat.
[…]
But there is competition in the space for no-prick diagnostic tools. Know Labs is trying to get approval for a portable glucose monitor that relies on a custom-made Bio-RFID sensing technology, which uses radio waves to detect blood glucose levels in the palm of your hand. When the Know Labs device was tested against a Dexcom G6 continuous glucose monitor in a study, readings of blood glucose levels using its palm sensor technology were “within threshold” only 46 percent of the time. While the readings are technically in accordance with FDA accuracy limits for a new blood glucose monitor, Know Labs is still working out kinks through scientific research before it can begin FDA clinical trials.
Another start-up, the German company DiaMonTech, is currently developing a pocket-sized diagnostic device that is still being tested and fine-tuned to measure glucose through “photothermal detection.” It uses mid-infrared lasers that essentially scan the tissue fluid at the fingertip to detect glucose molecules. CNBC and Bloomberg reported that even Apple has been “quietly developing” a sensor that can check your blood sugar levels through its wearables, though the company has never confirmed this. A scientific director at Synex, Mohana Ray, told Engadget that the company would eventually like to develop a wearable, but further miniaturization is needed before it can bring a commercial product to market.
An international team of scientists has reconstructed a historic record of the atmospheric trace gas carbon monoxide by measuring air in polar ice and air collected at an Antarctic research station.
The team, led by the French National Centre for Scientific Research (CNRS) and Australia’s national science agency, CSIRO, assembled the first complete record of carbon monoxide concentrations in the southern hemisphere, based on measurements of air.
The findings are published in the journal Climate of the Past.
The record spans the last three millennia. CSIRO atmospheric scientist Dr. David Etheridge said that the record provides a rare positive story in the context of climate change.
“Atmospheric carbon monoxide started climbing from its natural background level around the time of the industrial revolution, accelerating in the mid-1900s and peaking in the early-mid 1980s,” Dr. Etheridge said.
“The good news is that levels of the trace gas are now stable or even trending down and have been since the late 1980s—coinciding with the introduction of catalytic converters in cars.”
Carbon monoxide is a reactive gas that has important indirect effects on global warming. It reacts with hydroxyl (OH) radicals in the atmosphere, reducing their abundance. Hydroxyl acts as a natural “detergent” for the removal of other gases contributing to climate change, including methane. Carbon monoxide also influences the levels of ozone in the lower atmosphere. Ozone is a greenhouse gas.
The authors have high confidence that a major cause of the late-1980s decline was improved combustion technologies, including the introduction of catalytic converters, an exhaust-system device used in vehicles.
“The stabilization of carbon monoxide concentrations since the 1980s is a fantastic example of the role that science and technology can play in helping us understand a problem and help address it,” Dr. Etheridge said.
[…]
“Because carbon monoxide is a reactive gas, it is difficult to measure long term trends because it is unstable in many air sample containers. Cold and clean polar ice however preserves carbon monoxide concentrations for millennia,” Dr. Etheridge said.
The CO data will be used to improve Earth system models. This will primarily enable scientists to understand the effects that future emissions of CO and other gases (such as hydrogen) will have on pollution levels and climate as the global energy mix changes in the future.
More information: Xavier Faïn et al, Southern Hemisphere atmospheric history of carbon monoxide over the late Holocene reconstructed from multiple Antarctic ice archives, Climate of the Past (2023). DOI: 10.5194/cp-19-2287-2023