‘Writing’ with atoms could transform materials fabrication for quantum devices

[…] A research team at the Department of Energy’s Oak Ridge National Laboratory has created a novel advanced microscopy tool to “write” with atoms, placing those atoms exactly where they are needed to give a material new properties.

“By working at the atomic scale, we also work at the scale where quantum properties naturally emerge and persist,” said Stephen Jesse, a materials scientist who leads this research and heads the Nanomaterials Characterizations section at ORNL’s Center for Nanophase Materials Sciences, or CNMS.

[…]

To accomplish improved control over atoms, the research team created a tool they call a synthescope for combining synthesis with advanced microscopy. The researchers use a scanning transmission electron microscope, or STEM, transformed into an atomic-scale material manipulation platform.

The synthescope will advance the state of the art in fabrication down to the level of the individual building blocks of materials. This new approach allows researchers to place different atoms into a material at specific locations; the new atoms and their locations can be selected to give the material new properties.

[…]

https://www.youtube.com/watch?v=I5FSc-lqI6s

“We realized that if we have a microscope that can resolve atoms, we may be able to use the same microscope to move atoms or alter materials with atomic precision. We also want to be able to add atoms to the structures we create, so we need a supply of atoms. The idea morphed into an atomic-scale synthesis platform—the synthescope.”

That is important because the ability to tailor materials atom-by-atom can be applied to many future technological applications in quantum information science, and more broadly in microelectronics and catalysis, and for gaining a deeper understanding of materials synthesis processes. This work could facilitate atomic-scale manufacturing, which is notoriously challenging.

“Simply by the fact that we can now start putting atoms where we want, we can think about creating arrays of atoms that are precisely positioned close enough together that they can entangle, and therefore share their quantum states, which is key to making quantum devices more powerful than conventional ones,” Dyck said.

Such devices might include quantum computers—a proposed next generation of computers that may vastly outpace today’s fastest supercomputers; quantum sensors; and quantum communication devices that require a source of a single photon to create a secure quantum communications system.

“We are not just moving atoms around,” Jesse said. “We show that we can add a variety of atoms to a material that were not previously there and put them where we want them. Currently there is no technology that allows you to place different elements exactly where you want to place them and have the right bonding and structure. With this technology, we could build structures from the atom up, designed for their electronic, optical, chemical or structural properties.”

The scientists, who are part of the CNMS, a nanoscience research center and DOE Office of Science user facility, detailed their research and their vision in a series of four papers in scientific journals over the course of a year, starting with proof of principle that the synthescope could be realized. They have applied for a patent on the technology.

“With these papers, we are redirecting what atomic-scale fabrication will look like using electron beams,” Dyck said. “Together these manuscripts outline what we believe will be the direction atomic fabrication technology will take in the near future and the change in conceptualization that is needed to advance the field.”

By using an electron beam, or e-beam, to remove and deposit the atoms, the ORNL scientists could accomplish a direct writing procedure at the atomic level.

“The process is remarkably intuitive,” said ORNL’s Andrew Lupini, STEM group leader and a member of the research team. “STEMs work by transmitting a high-energy e-beam through a material. The e-beam is focused to a point smaller than the distance between atoms and scans across the material to create an image with atomic resolution. However, STEMs are notorious for damaging the very materials they are imaging.”

The scientists realized they could exploit this destructive “bug” and instead use it as a constructive feature and create holes on purpose. Then, they can put whatever atom they want in that hole, exactly where they made the defect. By purposely damaging the material, they create a new material with different and useful properties.

[…]

To demonstrate the method, the researchers moved an e-beam back and forth over a graphene lattice, creating minuscule holes. They inserted tin atoms into those holes and achieved a continuous, atom-by-atom, direct writing process, thereby populating the exact places where carbon atoms had been with tin atoms.

[…]

Source: ‘Writing’ with atoms could transform materials fabrication for quantum devices

Scientists Detect Invisible Electric Field Around Earth For First Time

An invisible, weak energy field wrapped around our planet Earth has finally been detected and measured.

It’s called the ambipolar field, an electric field first hypothesized more than 60 years ago

[…]

“Any planet with an atmosphere should have an ambipolar field,” says astronomer Glyn Collinson of NASA’s Goddard Space Flight Center.

“Now that we’ve finally measured it, we can begin learning how it’s shaped our planet as well as others over time.”

Earth isn’t just a blob of dirt sitting inert in space. It’s surrounded by all sorts of fields. There’s the gravity field.

[…]

There’s also the magnetic field, which is generated by the rotating, conducting material in Earth’s interior, converting kinetic energy into the magnetic field that spins out into space.

[…]

In 1968, scientists described a phenomenon that we couldn’t have noticed until the space age. Spacecraft flying over Earth’s poles detected a supersonic wind of particles escaping from Earth’s atmosphere. The best explanation for this was a third, electric energy field.

“It’s called the ambipolar field and it’s an agent of chaos. It counters gravity, and it strips particles off into space,” Collinson explains in a video.

“But we’ve never been able to measure this before because we haven’t had the technology. So, we built the Endurance rocket ship to go looking for this great invisible force.”

[…]

Here’s how the ambipolar field was expected to work. Starting at an altitude of around 250 kilometers (155 miles), in a layer of the atmosphere called the ionosphere, extreme ultraviolet and solar radiation ionizes atmospheric atoms, breaking off negatively charged electrons and turning the atom into a positively charged ion.

The lighter electrons will try to fly off into space, while the heavier ions will try to sink towards the ground. But the plasma environment will try to maintain charge neutrality, which results in the emergence of an electric field between the electrons and the ions to tether them together.

This is called the ambipolar field because it works in both directions, with the ions supplying a downward pull and the electrons an upward one.

The result is that the atmosphere is puffed up; the increased altitude allows some ions to escape into space, which is what we see in the polar wind.

This ambipolar field would be incredibly weak, which is why Collinson and his team designed instrumentation sensitive enough to detect it. The Endurance mission, carrying this experiment, was launched in May 2022, reaching an altitude of 768.03 kilometers (477.23 miles) before falling back to Earth with its precious, hard-won data.

And it succeeded. It measured a change in electric potential of just 0.55 volts – but that was all that was needed.

“A half a volt is almost nothing – it’s only about as strong as a watch battery,” Collinson says. “But that’s just the right amount to explain the polar wind.”

That potential difference is enough to tug on hydrogen ions with 10.6 times the strength of gravity, launching them into space at the supersonic speeds measured over Earth’s poles.

Oxygen ions, which are heavier than hydrogen ions, are also lofted higher, increasing the density of the ionosphere at high altitudes by 271 percent, compared to what its density would be without the ambipolar field.
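As a rough, hedged sanity check (not a calculation from the paper), the numbers quoted above can be combined to see whether a half-volt drop really competes with gravity for a hydrogen ion. The sketch below assumes the 0.55-volt change is spread across roughly the 250–768 km altitude range mentioned in this article; with that assumption, the electric-to-gravity force ratio lands close to the quoted 10.6.

```python
# Back-of-envelope check (assumptions noted below; not from the paper itself).
E_CHARGE = 1.602e-19      # elementary charge, C
M_HYDROGEN = 1.674e-27    # mass of a hydrogen ion, kg
G_SURFACE = 9.81          # surface gravity, m/s^2 (it weakens with altitude)

delta_v = 0.55                       # measured potential change, volts
span = (768.03 - 250.0) * 1000.0     # assumed extent of the field region, m

electric_force = E_CHARGE * delta_v / span   # average electric force on an H+ ion, N
gravity_force = M_HYDROGEN * G_SURFACE       # weight of an H+ ion, N

print(f"electric/gravity force ratio ~ {electric_force / gravity_force:.1f}")
# Prints roughly 10, in the same ballpark as the 10.6 figure quoted above;
# the exact value depends on the true field extent and on gravity at altitude.
```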

[…]

The research has been published in Nature.

Source: Scientists Detect Invisible Electric Field Around Earth For First Time : ScienceAlert

Doughnut-shaped region found inside Earth’s core deepens understanding of planet’s magnetic field

A doughnut-shaped region thousands of kilometers beneath our feet within Earth’s liquid core has been discovered by scientists from The Australian National University (ANU), providing new clues about the dynamics of our planet’s magnetic field.

The structure within Earth’s liquid core is found only at low latitudes and sits parallel to the equator. According to ANU seismologists, it has remained undetected until now.

The Earth has two core layers: the inner core, a solid layer, and the outer core, a liquid layer. Surrounding the Earth’s core is the mantle. The newly discovered doughnut-shaped region is at the top of Earth’s outer core, where the liquid core meets the mantle.

Study co-author and ANU geophysicist, Professor Hrvoje Tkalčić, said the seismic waves detected are slower in the newly discovered region than in the rest of the liquid outer core.

[…]

“We don’t know the exact thickness of the doughnut, but we inferred that it reaches a few hundred kilometers beneath the core-mantle boundary.”

Rather than using traditional seismic wave observation techniques and observing signals generated by earthquakes within the first hour, the ANU scientists analyzed the similarities between waveforms many hours after the earthquake origin times, leading them to make the unique discovery.

“By understanding the geometry of the paths of the waves and how they traverse the outer core’s volume, we reconstructed their travel times through the Earth, demonstrating that the newly discovered region has low seismic speeds,” Professor Tkalčić said.

“The peculiar structure remained hidden until now as previous studies collected data with less volumetric coverage of the outer core by observing waves that were typically confined within one hour after the origin times of large earthquakes.

[…]

“Our findings are interesting because this low velocity within the liquid core implies that we have a high concentration of light chemical elements in these regions that would cause the seismic waves to slow down. These light elements, alongside temperature differences, help stir liquid in the outer core,” Professor Tkalčić said.

[…]

The research is published in Science Advances.

More information: Xiaolong Ma et al, Seismic low-velocity equatorial torus in the Earth’s outer core: Evidence from the late-coda correlation wavefield, Science Advances (2024). DOI: 10.1126/sciadv.adn5562

Source: Doughnut-shaped region found inside Earth’s core deepens understanding of planet’s magnetic field

String Theorists Accidentally Find a New Formula for Pi

[…] most recently in January 2024, when physicists Arnab Priya Saha and Aninda Sinha of the Indian Institute of Science presented a completely new formula for calculating it, which they later published in Physical Review Letters.

Saha and Sinha are not mathematicians. They were not even looking for a novel pi equation. Rather, these two string theorists were working on a unifying theory of fundamental forces, one that could reconcile electromagnetism, gravity and the strong and weak nuclear forces

[…]

For millennia, mankind has been trying to determine the exact value of pi. […]

One famous example is Archimedes, who estimated pi with the help of polygons: by drawing an n-sided polygon inside and one outside a circle and calculating the perimeter of each, he was able to narrow down the value of pi.

A common method for determining pi geometrically involves drawing a bounding polygon inside and outside a circle and then comparing the two perimeters. Image: Fredrik/Leszek Krupinski/Wikimedia Commons

Teachers often present this method in school

[…]

In the 15th century experts found infinite series as a new way to express pi. […]

For example, the Indian scholar Madhava, who lived from 1350 to 1425, found that pi equals 4 multiplied by a series that begins with 1 and then alternately subtracts or adds fractions in which 1 is placed over successively higher odd numbers (so 1/3, 1/5, and so on). One way to express this would be:

π = 4 × (1 − 1/3 + 1/5 − 1/7 + …)

This formula makes it possible to determine pi as precisely as you like in a very simple way.
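A minimal Python sketch of Madhava's series (the function name and term counts are just for illustration) shows both how simple the recipe is and how slowly it converges:

```python
# Madhava's series: pi = 4 * (1 - 1/3 + 1/5 - 1/7 + ...)
import math

def madhava_pi(n_terms: int) -> float:
    """Sum the first n_terms of the alternating series and multiply by 4."""
    return 4.0 * sum((-1) ** k / (2 * k + 1) for k in range(n_terms))

for n in (10, 100, 10_000):
    approx = madhava_pi(n)
    print(f"{n:>6} terms: {approx:.10f}  (error {abs(approx - math.pi):.1e})")
```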

[…]

As Saha and Sinha discovered more than 600 years later, Madhava’s formula is only a special case of a much more general equation for calculating pi. In their work, the string theorists discovered the following formula:

A formula presents a way of calculating pi that was identified by physicists Arnab Priya Saha and Aninda Sinha.

This formula produces an infinitely long sum. What is striking is that it depends on the factor λ, a freely selectable parameter. No matter what value λ has, the formula will always result in pi. And because there are infinitely many numbers that can correspond to λ, Saha and Sinha have found an infinite number of pi formulas.

If λ is infinitely large, the equation corresponds to Madhava’s formula: because λ only ever appears in the denominators of fractions, those fractions become zero for λ = ∞ (fractions with very large denominators are vanishingly small). For λ = ∞, the equation of Saha and Sinha therefore takes the following form:

Saha and Sinha’s formula can be adapted based on the assumption of an infinitely large parameter.

The first part of the equation is already similar to Madhava’s formula: you sum fractions with odd denominators.

[…]

As the two string theorists report, however, pi can be calculated much faster for smaller values of λ. While Madhava’s result requires 100 terms to get within 0.01 of pi, Saha and Sinha’s formula for λ = 3 only requires the first four summands. “While [Madhava’s] series takes 5 billion terms to converge to 10 decimal places, the new representation with λ between 10 [and] 100 takes 30 terms,” the authors write in their paper. Saha and Sinha did not find the most efficient method for calculating pi, though. Other series have been known for several decades that provide an astonishingly accurate value much more quickly. What is truly surprising in this case is that the physicists came up with a new pi formula when their paper aimed to describe the interaction of strings.
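For readers who want to experiment, here is a hedged Python sketch of the generalized formula. The article reproduces the expression only as an image, so the exact form used below, pi = 4 + Σ_{n≥1} (1/n!) · (1/(n+λ) − 4/(2n+1)) · ((2n+1)²/(4(n+λ)) − n)_{n−1}, with (x)_k the Pochhammer symbol (rising factorial), is an assumption based on how the published result is commonly quoted; it does collapse to Madhava's series as λ → ∞, consistent with the limit described above.

```python
# Hedged sketch of the generalized pi formula attributed to Saha and Sinha.
# The exact expression is assumed here (the article shows it only as an image).
from math import factorial, pi as PI

def rising_factorial(x: float, k: int) -> float:
    """Pochhammer symbol (x)_k = x * (x+1) * ... * (x+k-1); equals 1 for k = 0."""
    result = 1.0
    for i in range(k):
        result *= x + i
    return result

def generalized_pi(lam: float, n_terms: int) -> float:
    total = 4.0
    for n in range(1, n_terms + 1):
        coeff = 1.0 / (n + lam) - 4.0 / (2 * n + 1)
        poch = rising_factorial((2 * n + 1) ** 2 / (4.0 * (n + lam)) - n, n - 1)
        total += coeff * poch / factorial(n)
    return total

for lam in (3.0, 10.0, 100.0):
    approx = generalized_pi(lam, 30)
    print(f"lambda = {lam:>5}: 30 terms -> {approx:.12f} (error {abs(approx - PI):.1e})")
```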

[…]

Source: String Theorists Accidentally Find a New Formula for Pi | Scientific American

Researchers figure out how to keep clocks on the Earth, Moon in sync

[…] Our communications and GPS networks all depend on keeping careful track of the precise timing of signals—including accounting for the effects of relativity. The deeper into a gravitational well you go, the slower time moves, and we’ve reached the point where we can detect differences in altitude of a single millimeter. Time literally flows faster at the altitude where GPS satellites are than it does for clocks situated on Earth’s surface. Complicating matters further, those satellites are moving at high velocities, an effect that slows things down.
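To make the two competing effects concrete, the standard back-of-envelope numbers for a GPS satellite can be reproduced in a few lines (this is an illustration, not a calculation from the paper; the orbital radius below is an assumed textbook value):

```python
# Approximate relativistic clock rates for a GPS satellite relative to the ground.
MU_EARTH = 3.986004418e14   # Earth's gravitational parameter GM, m^3/s^2
R_EARTH = 6.371e6           # mean Earth radius, m
R_GPS = 2.656e7             # GPS orbital radius (~20,200 km altitude), m  [assumed]
C = 2.99792458e8            # speed of light, m/s
DAY = 86_400                # seconds per day

# Gravitational term: the higher potential at orbit makes the satellite clock run faster.
gravitational = (MU_EARTH / R_EARTH - MU_EARTH / R_GPS) / C**2

# Velocity term: orbital speed makes the satellite clock run slower.
v_orbit = (MU_EARTH / R_GPS) ** 0.5
velocity = v_orbit**2 / (2 * C**2)

print(f"gravity:  +{gravitational * DAY * 1e6:.1f} microseconds/day")
print(f"velocity: -{velocity * DAY * 1e6:.1f} microseconds/day")
print(f"net:      +{(gravitational - velocity) * DAY * 1e6:.1f} microseconds/day")
# Prints a gravitational speed-up of ~46 us/day, a velocity slow-down of ~7 us/day,
# and a net drift of ~38 us/day, which is why GPS must correct for relativity.
```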

[…]

It would be easy to set up an equivalent system to track time on the Moon, but that would inevitably see the clocks run out of sync with those on Earth—a serious problem for things like scientific observations

[…]

Ashby and Patla worked on developing a system where anything can be calculated in reference to the center of mass of the Earth/Moon system. Or, as they put it in the paper, their mathematical system “enables us to compare clock rates on the Moon and cislunar Lagrange points with respect to clocks on Earth by using a metric appropriate for a locally freely falling frame such as the center of mass of the Earth–Moon system in the Sun’s gravitational field.”

[…]

The paper’s body has 55 of them, and there are another 67 in the appendices.

[…]

Things get complicated because there are so many factors to consider. There are tidal effects from the Sun and other planets. Anything on the surface of the Earth or Moon is moving due to rotation; other objects are moving while in orbit. The gravitational influence on time will depend on where an object is located.

[…]

The researchers say that their approach, while focused on the Earth/Moon system, is still generalizable. Which means that it should be possible to modify it and create a frame of reference that would work on both Earth and anywhere else in the Solar System. Which, given the pace at which we’ve sent things beyond low-Earth orbit, is probably a healthy amount of future-proofing.

The Astronomical Journal, 2024. DOI: 10.3847/1538-3881/ad643a

Source: Researchers figure out how to keep clocks on the Earth, Moon in sync | Ars Technica

Dual action antibiotic could make bacterial resistance nearly impossible

A new antibiotic that works by disrupting two different cellular targets would make it 100 million times more difficult for bacteria to evolve resistance, according to new research from the University of Illinois Chicago.

For a new paper in Nature Chemical Biology, researchers probed how a class of synthetic drugs called macrolones disrupt bacterial cell function to fight infectious diseases. Their experiments demonstrate that macrolones can work in two different ways—either by interfering with protein production or by corrupting DNA structure.

Because bacteria would need to implement defenses to both attacks simultaneously, the researchers calculated that developing resistance is nearly impossible.

“The beauty of this antibiotic is that it kills through two different targets in bacteria,” said Alexander Mankin, distinguished professor of pharmaceutical sciences at UIC. “If the antibiotic hits both targets at the same concentration, then the bacteria lose their ability to become resistant via acquisition of random mutations in any of the two targets.”

[…]

More information: Elena V. Aleksandrova et al, Macrolones target bacterial ribosomes and DNA gyrase and can evade resistance mechanisms, Nature Chemical Biology (2024). DOI: 10.1038/s41589-024-01685-3

Source: Dual action antibiotic could make bacterial resistance nearly impossible

“Smart soil” grows 138% bigger crops using 40% less water

[…]

in areas where water is more scarce it can be hard to grow crops and feed populations, so scientists are investigating ways to boost efficiency.

Building on earlier work, the new study marks a good step in that direction. The soil gets its “smart” moniker thanks to the addition of a specially formulated hydrogel, which works to absorb more water vapor from the air overnight, then release it to the plants’ roots during the day. Incorporating calcium chloride into the hydrogel also provides a slow release of this vital nutrient.

A diagram of how the hydrogel works to improve the growth of crops
University of Texas at Austin

The team tested the new smart soil in lab experiments, growing plants in 10 grams of soil, with some including 0.1 g of hydrogel. A day/night cycle was simulated, with 12 hours of darkness at 25 °C (77 °F) and either 60% or 90% relative humidity, followed by 12 hours of simulated sunlight at 35 °C (95 °F) and 30% humidity.

Sure enough, plants growing in the hydrogel soil showed a 138% boost to their stem length, compared to the control group. Importantly, the hydrogel-grown plants achieved this even while requiring 40% less direct watering.

[…]

The research was published in the journal ACS Materials Letters.

Source: University of Texas at Austin

Source: “Smart soil” grows 138% bigger crops using 40% less water

Scientific articles using ‘sneaked references’ to inflate their citation numbers

[…] A recent Journal of the Association for Information Science and Technology article by our team of academic sleuths – which includes information scientists, a computer scientist and a mathematician – has revealed an insidious method to artificially inflate citation counts through metadata manipulations: sneaked references.

Hidden manipulation

People are becoming more aware of scientific publications and how they work, including their potential flaws. Just last year more than 10,000 scientific articles were retracted. The issues around citation gaming and the harm it causes the scientific community, including damaging its credibility, are well documented.

[…]

we found through a chance encounter that some unscrupulous actors have added extra references, invisible in the text but present in the articles’ metadata, when they submitted the articles to scientific databases. The result? Citation counts for certain researchers or journals have skyrocketed, even though these references were not cited by the authors in their articles.

Chance discovery

The investigation began when Guillaume Cabanac, a professor at the University of Toulouse, wrote a post on PubPeer, a website dedicated to postpublication peer review, in which scientists discuss and analyze publications. In the post, he detailed how he had noticed an inconsistency: a Hindawi journal article that he suspected was fraudulent because it contained awkward phrases had far more citations than downloads, which is very unusual.

The post caught the attention of several sleuths who are now the authors of the JASIST article. We used a scientific search engine to look for articles citing the initial article. Google Scholar found none, but Crossref and Dimensions did find references. The difference? Google Scholar is likely to mostly rely on the article’s main text to extract the references appearing in the bibliography section, whereas Crossref and Dimensions use metadata provided by publishers.
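As a hedged illustration of that difference, one could compare an article's registered metadata references against the references actually extracted from its text. The sketch below queries the public Crossref REST API; the DOI and the text-extracted reference set are hypothetical placeholders, and this is not the authors' actual pipeline.

```python
# Sketch: flag references present in Crossref metadata but absent from the text.
import requests

def crossref_reference_dois(doi: str) -> set[str]:
    """Return the reference DOIs registered in Crossref metadata for an article."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=30)
    resp.raise_for_status()
    refs = resp.json()["message"].get("reference", [])
    return {r["DOI"].lower() for r in refs if "DOI" in r}

def compare(doi: str, dois_in_text: set[str]) -> None:
    metadata_dois = crossref_reference_dois(doi)
    sneaked = metadata_dois - dois_in_text   # registered but never cited in the text
    lost = dois_in_text - metadata_dois      # cited in the text but missing from metadata
    print(f"{len(sneaked)} possible sneaked references, {len(lost)} lost references")

# Hypothetical usage; both arguments are placeholders, not real cases:
# compare("10.1234/example.doi", {"10.1038/s41586-020-00000-0"})
```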

[…]

In the journals published by Technoscience Academy, at least 9% of recorded references were “sneaked references.” These additional references were only in the metadata, distorting citation counts and giving certain authors an unfair advantage. Some legitimate references were also lost, meaning they were not present in the metadata.

In addition, when analyzing the sneaked references, we found that they highly benefited some researchers. For example, a single researcher who was associated with Technoscience Academy benefited from more than 3,000 additional illegitimate citations. Some journals from the same publisher benefited from a couple hundred additional sneaked citations.

[…]

Why is this discovery important? Citation counts heavily influence research funding, academic promotions and institutional rankings. Manipulating citations can lead to unjust decisions based on false data. More worryingly, this discovery raises questions about the integrity of scientific impact measurement systems, a concern that has been highlighted by researchers for years. These systems can be manipulated to foster unhealthy competition among researchers, tempting them to take shortcuts to publish faster or achieve more citations.

[…]

Source: When scientific citations go rogue: Uncovering ‘sneaked references’

We finally know why some people seem immune to catching covid-19

Deliberately exposing people to the coronavirus behind covid-19 in a so-called challenge study has helped us understand why some people seem to be immune to catching the infection.

As part of the first such covid-19 study, carried out in 2021, a group of international researchers looked at 16 people with no known health conditions who had neither tested positive for the SARS-CoV-2 virus nor been vaccinated against it.

The original variant of SARS-CoV-2 was sprayed up their noses. Nasal and blood samples were taken before this exposure and then six to seven times over the 28 days after. They also had SARS-CoV-2 tests twice a day.

[…]

In total, the researchers looked at more than 600,000 blood and nasal cells across all the individuals.

They found that in the second and third groups, the participants produced interferon – a substance that helps the immune system fight infections – in their blood before it was produced in their nasopharynx, the upper part of the nose behind the throat where the nasal samples were taken from. The interferon response, when it did occur in the nasopharynx, was actually higher in the noses of those in the second group than the third, says Teichmann.

These groups also didn’t have active infections within their T-cells and macrophages, which are both types of immune cell, says team member Marko Nikolic at University College London.

The results suggest that high levels of activity of an immune system gene called HLA-DQA2 before SARS-CoV-2 exposure helped prevent a sustained infection.

[…]

However, most people have now been exposed to “a veritable mosaic of SARS-CoV-2 variants”, rather than just the ancestral variant used in this study. The results may therefore not reflect cell responses outside of a trial setting, he says.

 

Journal reference:

Nature DOI: 10.1038/s41586-024-07575-x

 

Source: We finally know why some people seem immune to catching covid-19 | New Scientist

Seven types of microplastics found in the human penis, raising questions about sexual function

The proliferation of microplastics (MPs) represents a burgeoning environmental and health crisis. Measuring less than 5 mm in diameter, MPs have infiltrated atmospheric, freshwater, and terrestrial ecosystems, penetrating commonplace consumables like seafood, sea salt, and bottled beverages. Their size and surface area render them susceptible to chemical interactions with physiological fluids and tissues, raising bioaccumulation and toxicity concerns. Human exposure to MPs occurs through ingestion, inhalation, and dermal contact. To date, there is no direct evidence identifying MPs in penile tissue. The objective of this study was to assess for potential aggregation of MPs in penile tissue. Tissue samples were extracted from six individuals who underwent surgery for a multi-component inflatable penile prosthesis (IPP).

[…]

Seven types of MPs were found in the penile tissue, with polyethylene terephthalate (47.8%) and polypropylene (34.7%) being the most prevalent. The detection of MPs in penile tissue raises inquiries on the ramifications of environmental pollutants on sexual health. Our research adds a key dimension to the discussion on man-made pollutants, focusing on MPs in the male reproductive system.

IJIR: Your Sexual Medicine Journal; https://doi.org/10.1038/s41443-024-00930-6

Source: Detection of microplastics in the human penis | International Journal of Impotence Research

Mathematicians find odd shapes that roll like a wheel in any dimension

Mathematicians have reinvented the wheel with the discovery of shapes that can roll smoothly when sandwiched between two surfaces, even in four, five or any higher number of spatial dimensions. The finding answers a question that researchers have been puzzling over for decades.

Such objects are known as shapes of constant width, and the most familiar in two and three dimensions are the circle and the sphere. These aren’t the only such shapes, however. One example is the Reuleaux triangle, which is a triangle with curved edges, while people in the UK are used to handling equilateral curve heptagons, otherwise known as the shape of the 20 and 50 pence coins. In this case, being of constant width allows them to roll inside coin-operated machines and be recognised regardless of their orientation.

[…]

While shapes with more than three dimensions are impossible to visualise, mathematicians can define them by extending 2D and 3D shapes in logical ways. For example, just as a circle or a sphere is the set of points that sits at a constant distance from a central point, the same is true in higher dimensions. “Sometimes the most fascinating phenomena are discovered when you look at higher and higher dimensions,” says Gil Kalai at the Hebrew University of Jerusalem in Israel.

Now, Andrii Arman at the University of Manitoba in Canada and his colleagues have answered Schramm’s question and found a set of constant-width shapes, in any dimension, that are indeed smaller than an equivalent dimensional sphere.

[…]

The first part of the proof involves considering a sphere with n dimensions and then dividing it into 2^n equal parts – so four parts for a circle, eight for a 3D sphere, 16 for a 4D sphere and so on. The researchers then mathematically stretch and squeeze these segments to alter their shape without changing their width. “The recipe is very simple, but we understood that only after all of our elaboration,” says team member Andriy Bondarenko at the Norwegian University of Science and Technology.

The team proved that it is always possible to do this distortion in such a way that you end up with a shape that has a volume at most 0.9^n times that of the equivalent dimensional sphere. This means that as you move to higher and higher dimensions, the shape of constant width gets proportionally smaller and smaller compared with the sphere.
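A couple of lines of arithmetic give a feel for how quickly that 0.9^n bound shrinks as the dimension n grows:

```python
# How the 0.9^n volume bound shrinks relative to the sphere as dimension n grows.
for n in (2, 3, 10, 50, 100):
    print(f"n = {n:>3}: constant-width shape <= {0.9 ** n:.2e} x sphere volume")
```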

Visualising this is difficult, but one trick is to imagine the lower-dimensional silhouette of a higher-dimensional object. When viewed at certain angles, the 3D shape appears as a 2D Reuleaux triangle. In the same way, the 3D shape can be seen as a “shadow” of the 4D one, and so on. “The shapes in higher dimensions will be in a certain sense similar, but will grow in complexity as [the] dimension grows,” says Arman.

Having identified these shapes, mathematicians now hope to study them further. “Even with the new result, which takes away some of the mystery about them, they are very mysterious sets in high dimensions,” says Kalai.

 

Source: Mathematicians find odd shapes that roll like a wheel in any dimension | New Scientist

What’s Actually In Tattoo Ink? No One Really Knows

Nearly a third of U.S. adults have tattoos, so plenty of you listeners can probably rattle off the basic guidelines of tattoo safety: Make sure you go to a reputable tattoo artist who uses new, sterile needles. Stay out of the ocean while you’re healing so you don’t pick up a smidgen of flesh-eating bacteria. Gently wash your new ink with soap and water, avoid sun exposure and frequently apply an unscented moisturizer—easy-peasy.

But body art enthusiasts might face potential risks from a source they don’t expect: tattoo inks themselves. Up until relatively recently tattoo inks in the U.S. were totally unregulated. In 2022 the federal government pulled tattoo inks under the regulatory umbrella of cosmetics, which means the Food and Drug Administration can oversee these products. But now researchers are finding that many commercial inks contain ingredients they’re not supposed to. Some of these additives are simply compounds that should be listed on the packaging and aren’t. But others could pose a risk to consumers.

For Science Quickly, I’m Rachel Feltman. I’m joined today by John Swierk, an assistant professor of chemistry at Binghamton University, State University of New York. His team is trying to figure out exactly what goes into each vial of tattoo ink—and how tattoos actually work in the first place—to help make body art safer, longer-lasting and maybe even cooler.

[…]

one of the areas we got really interested in was trying to understand why light causes tattoos to fade. This is a huge question when you think about something like laser tattoo removal, where you’re talking about an industry on the scale of $1 billion a year.

And it turns out we really don’t understand that process. And so starting to look at how the tattoo pigments change when you expose them to light, what that might be doing in the skin, then led us to a lot of other questions about tattoos that we realized weren’t well understood—even something as simple as what’s actually in tattoo ink.

[…]

recently we’ve been looking at commercial tattoo inks and were sort of surprised to find that in the overwhelming majority of them, we’re seeing things that are not listed as part of the ingredients…. Now that doesn’t necessarily mean the things that are in these inks are unsafe, but it does cause a huge problem if you want to try to understand something about the safety of tattoos.

[…]

I think most people would agree that it would be great to know that tattoo inks are safe [and] being made safely, you know? And of course, that’s not unique to tattoo inks; cosmetics and supplements have a lot of similar problems that we need to work on.

But, if we’re going to get a better grasp on the chemistry and even the immunology of tattoos, that’s not just going to help us make them safer but, you know, potentially improve healing, appearance, longevity.

I mean, I think about that start-up that promised “ephemeral tattoos” that now folks a few years later are coming out and saying, “These tattoos have not gone away,” and thinking about how much potential there is for genuine innovation if we can start to answer some of these questions.

[…]

we can start to think about designing new pigments that might have better colorfastness, less reactivity, less sort of bleeding of the lines, right, over time. But all of those things can only happen if we actually understand tattoos, and we really just don’t understand them that well at the moment.

[…]

We looked at 54 inks, and of the 54, 45 had what we consider to be major discrepancies—these were either unlisted pigments or unlisted additives.

And that was really concerning to us, right? You’re talking about inks coming from major, global, industry-leading manufacturers all the way down to smaller, more niche inks—that there were problems across the board.

So we found things like incorrect pigments being listed. We found issues of some major allergens being used—these aren’t necessarily compounds that are specifically toxic, but to some people they can generate a really pronounced allergic response.

And a couple of things: we found an antibiotic that’s most commonly used for urinary tract infections.

We found a preservative that the FDA has cautioned nursing mothers against, you know, having exposure to—so things that at a minimum, need to be disclosed so that consumers could make informed choices.

[…]

if somebody’s thinking about getting a tattoo, they should be working with an artist who is experienced, who has apprenticed under experienced artists, who is really following best practices in terms of sanitation, aftercare, things like that. That’s where we know you can have a problem. Beyond that, I think it’s a matter of how comfortable you are with some degree of risk.

The point I always really want to emphasize is that, you know, our work isn’t saying anything about whether tattoos are safe or not.

It’s the first step in that process. Just because we found some stuff in the inks doesn’t mean that you shouldn’t get a tattoo or that you have a new risk for skin cancer or something like that…. it’s that this is the process of how science grows, right—that we have to start understanding the basics and the fundamentals so that we can build the next questions on top of that.

And our understanding of tattoos in the body is still at such an early level that we don’t really even understand what the risk factors would be, “What should we be looking for?”

So I think it’s like with anything in life: if you’re comfortable with a degree of risk, then, yeah, go ahead and get the tattoo. People get tattoos for lots of reasons that are important and meaningful and very impactful in a positive way in their life. And I think a concern over a hypothetical risk is probably not worth the potential positives of getting a tattoo.

We know that light exposure—particularly the sunlight—is not great for the tattoo, and if we have concerns about long-term pigment breakdown, ultraviolet light is probably going to enhance that, so keeping your tattoo covered, using sunscreen when you can’t keep it covered—that’s probably very important. If you’re really concerned about the risk, we can think about the size of the tattoo. So somebody with a relatively small piece of line art on their back is in a very different potential risk category than somebody who is fully sleeved and, you know, covered from, say, neck to ankle in tattoos.

And again we’re not saying that either those people have a significant risk that they need to be worried about, but if somebody is concerned, the person with the small line art on the back is much less likely to have to worry about the risk than somebody with a huge tattoo.

We also know that certain colors, like yellow in particular, fade much more readily. That suggests that those pigments are interacting with the body a lot more.

Staying away from bright colors and focusing on black inks might be a more prudent option there, but again, right, a lot of these are hypothetical and we don’t want to alarm people or scare them.

[…]

We’re also still working on understanding what tattoo pigments break down into.

We really don’t understand a lot about laser tattoo removal, and if there is some aspect of tattooing that gives me pause, it’s probably that part. It’s a very reasonable concern, I think, that you may have pigments that are entirely safe in the skin, but once you start zapping them with high-powered lasers, we don’t know what you do to the chemistry, and so that could change the dynamic a lot. And so we’re trying to figure out how to do that and, I think, making some progress there. And then the last area—which is, is new to us but kind of fun—is actually just looking at the biomechanics of tattooing. You would think that we’d really understand how the ink goes into the skin, how it stays in the skin, but the picture there is a little bit hazy

[…]

One of the interesting things, when you talk to ink manufacturers and artists, is that they sort of have this intuitive feel for … sort of what the viscosity of the ink should be like and how much pigment is in there but can’t necessarily articulate why a particular viscosity is good or why a particular pigment loading is good. And so we think if we understand something about the process by which the ink goes in, understanding the biomechanics could really open some interesting possibilities and lead to better, more interesting tattoos down the road as well.

[…]

Source: What’s Actually In Tattoo Ink? No One Really Knows | Scientific American

“Deny, denounce, delay”: ultra-processed food companies are fighting back with big tobacco-style tactics

When the Brazilian nutritional scientist Carlos Monteiro coined the term “ultra-processed foods” 15 years ago, he established what he calls a “new paradigm” for assessing the impact of diet on health.

Monteiro had noticed that although Brazilian households were spending less on sugar and oil, obesity rates were going up. The paradox could be explained by increased consumption of food that had undergone high levels of processing, such as the addition of preservatives and flavorings or the removal or addition of nutrients.

But health authorities and food companies resisted the link, Monteiro tells the FT. “[These are] people who spent their whole life thinking that the only link between diet and health is the nutrient content of foods … Food is more than nutrients.”

Monteiro’s food classification system, “Nova,” assessed not only the nutritional content of foods but also the processes they undergo before reaching our plates. The system laid the groundwork for two decades of scientific research linking the consumption of UPFs to obesity, cancer, and diabetes.

Studies of UPFs show that these processes create food—from snack bars to breakfast cereals to ready meals—that encourages overeating but may leave the eater undernourished. A recipe might, for example, contain a level of carbohydrate and fat that triggers the brain’s reward system, meaning you have to consume more to sustain the pleasure of eating it.

In 2019, American metabolic scientist Kevin Hall carried out a randomized study comparing people who ate an unprocessed diet with those who followed a UPF diet over two weeks. Hall found that the subjects who ate the ultra-processed diet consumed around 500 more calories per day, more fat and carbohydrates, less protein—and gained weight.

The rising concern about the health impact of UPFs has recast the debate around food and public health, giving rise to books, policy campaigns, and academic papers. It also presents the most concrete challenge yet to the business model of the food industry, for whom UPFs are extremely profitable.

The industry has responded with a ferocious campaign against regulation. In part it has used the same lobbying playbook as its fight against labeling and taxation of “junk food” high in calories: big spending to influence policymakers.

FT analysis of US lobbying data from non-profit Open Secrets found that food and soft drinks-related companies spent $106 million on lobbying in 2023, almost twice as much as the tobacco and alcohol industries combined. Last year’s spend was 21 percent higher than in 2020, with the increase driven largely by lobbying relating to food processing as well as sugar.

In an echo of tactics employed by cigarette companies, the food industry has also attempted to stave off regulation by casting doubt on the research of scientists like Monteiro.

“The strategy I see the food industry using is deny, denounce, and delay,” says Barry Smith, director of the Institute of Philosophy at the University of London and a consultant for companies on the multisensory experience of food and drink.

So far the strategy has proved successful. Just a handful of countries, including Belgium, Israel, and Brazil, currently refer to UPFs in their dietary guidelines. But as the weight of evidence about UPFs grows, public health experts say the only question now is how, if at all, it is translated into regulation.

“There’s scientific agreement on the science,” says Jean Adams, professor of dietary public health at the MRC Epidemiology Unit at the University of Cambridge. “It’s how to interpret that to make a policy that people aren’t sure of.”

[…]

Source: “Deny, denounce, delay”: The battle over the risk of ultra-processed foods | Ars Technica

Lawyers To Plastic Makers: Prepare For ‘Astronomical’ PFAS Lawsuits

An anonymous reader quotes a report from the New York Times: The defense lawyer minced no words as he addressed a room full of plastic-industry executives. Prepare for a wave of lawsuits with potentially “astronomical” costs. Speaking at a conference earlier this year, the lawyer, Brian Gross, said the coming litigation could “dwarf anything related to asbestos,” one of the most sprawling corporate-liability battles in United States history. Mr. Gross was referring to PFAS, the “forever chemicals” that have emerged as one of the major pollution issues of our time. Used for decades in countless everyday objects — cosmetics, takeout containers, frying pans — PFAS have been linked to serious health risks including cancer. Last month the federal government said several types of PFAS must be removed from the drinking water of hundreds of millions of Americans. “Do what you can, while you can, before you get sued,” Mr. Gross said at the February session, according to a recording of the event made by a participant and examined by The New York Times. “Review any marketing materials or other communications that you’ve had with your customers, with your suppliers, see whether there’s anything in those documents that’s problematic to your defense,” he said. “Weed out people and find the right witness to represent your company.”

A wide swath of the chemicals, plastics and related industries are gearing up to fight a surge in litigation related to PFAS, or per- and polyfluoroalkyl substances, a class of nearly 15,000 versatile synthetic chemicals linked to serious health problems. […] PFAS-related lawsuits have already targeted manufacturers in the United States, including DuPont, its spinoff Chemours, and 3M. Last year, 3M agreed to pay at least $10 billion to water utilities across the United States that had sought compensation for cleanup costs. Thirty state attorneys general have also sued PFAS manufacturers, accusing the manufacturers of widespread contamination. But experts say the legal battle is just beginning. Under increasing scrutiny is a wider universe of companies that use PFAS in their products. This month, plaintiffs filed a class-action lawsuit against Bic, accusing the razor company of failing to disclose that some of its razors contained PFAS. Bic said it doesn’t comment on pending litigation, and said it had a longstanding commitment to safety.

The Biden administration has moved to regulate the chemicals, for the first time requiring municipal water systems to remove six types of PFAS. Last month, the Environmental Protection Agency also designated two of those PFAS chemicals as hazardous substances under the Superfund law, shifting responsibility for their cleanup at contaminated sites from taxpayers to polluters. Both rules are expected to prompt a new round of litigation from water utilities, local communities and others suing for cleanup costs. “To say that the floodgates are opening is an understatement,” said Emily M. Lamond, an attorney who focuses on environmental litigation at the law firm Cole Schotz. “Take tobacco, asbestos, MTBE, combine them, and I think we’re still going to see more PFAS-related litigation,” she said, referring to methyl tert-butyl ether, a former harmful gasoline additive that contaminated drinking water. Together, the trio led to claims totaling hundreds of billions of dollars.
Unlike tobacco, used by only a subset of the public, “pretty much every one of us in the United States is walking around with PFAS in our bodies,” said Erik Olson, senior strategic director for environmental health at the Natural Resources Defense Council. “And we’re being exposed without our knowledge or consent, often by industries that knew how dangerous the chemicals were, and failed to disclose that,” he said. “That’s a formula for really significant liability.”

Bilingual Brain-Reading Implant Decodes Spanish and English

For the first time, a brain implant has helped a bilingual person who is unable to articulate words to communicate in both of his languages. An artificial-intelligence (AI) system coupled to the brain implant decodes, in real time, what the individual is trying to say in either Spanish or English.

The findings, published on 20 May in Nature Biomedical Engineering, provide insights into how our brains process language, and could one day lead to long-lasting devices capable of restoring multilingual speech to people who can’t communicate verbally.

[…]

The person at the heart of the study, who goes by the nickname Pancho, had a stroke at age 20 that paralysed much of his body. As a result, he can moan and grunt but cannot speak clearly.

[…]

the team developed an AI system to decipher Pancho’s bilingual speech. This effort, led by Chang’s PhD student Alexander Silva, involved training the system as Pancho tried to say nearly 200 words. His efforts to form each word created a distinct neural pattern that was recorded by the electrodes.

The authors then applied their AI system, which has a Spanish module and an English one, to phrases as Pancho tried to say them aloud. For the first word in a phrase, the Spanish module chooses the Spanish word that matches the neural pattern best. The English component does the same, but chooses from the English vocabulary instead. For example, the English module might choose ‘she’ as the most likely first word in a phrase and assess its probability of being correct to be 70%, whereas the Spanish one might choose ‘estar’ (to be) and measure its probability of being correct at 40%.

[…]

From there, both modules attempt to build a phrase. They each choose the second word based on not only the neural-pattern match but also whether it is likely to follow the first one. So ‘I am’ would get a higher probability score than ‘I not’. The final output produces two sentences — one in English and one in Spanish — but the display screen that Pancho faces shows only the version with the highest total probability score.
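A toy sketch of that decoding logic, with invented vocabularies, match scores and transition probabilities (the real system was trained on Pancho's recorded neural activity and is far more sophisticated), might look like this:

```python
# Toy two-module decoder: score words by neural-pattern match and by how
# plausibly they follow the previous word; show the higher-probability sentence.
# All vocabularies and numbers below are invented for illustration.
from dataclasses import dataclass

@dataclass
class LanguageModule:
    name: str
    vocab: list[str]
    transition: dict[tuple[str, str], float]  # P(next word | previous word)

    def decode(self, match: dict[str, float], length: int) -> tuple[list[str], float]:
        """Greedily pick the best word at each step and accumulate its probability."""
        sentence, prob, prev = [], 1.0, None
        for _ in range(length):
            scores = {w: match.get(w, 0.01) * (1.0 if prev is None
                      else self.transition.get((prev, w), 0.05)) for w in self.vocab}
            prev = max(scores, key=scores.get)
            sentence.append(prev)
            prob *= scores[prev]
        return sentence, prob

english = LanguageModule("English", ["i", "am", "not", "hungry"],
                         {("i", "am"): 0.6, ("am", "hungry"): 0.5})
spanish = LanguageModule("Spanish", ["yo", "estoy", "no", "hambriento"],
                         {("yo", "estoy"): 0.6, ("estoy", "hambriento"): 0.4})

# Invented "neural pattern match" scores for one attempted phrase.
match_scores = {"i": 0.7, "am": 0.6, "hungry": 0.5, "yo": 0.3, "estoy": 0.4}

candidates = [m.decode(match_scores, length=3) for m in (english, spanish)]
words, prob = max(candidates, key=lambda c: c[1])
print(" ".join(words), f"(total probability {prob:.3f})")   # the sentence displayed
```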

The modules were able to distinguish between English and Spanish on the basis of the first word with 88% accuracy and they decoded the correct sentence with an accuracy of 75%.

[…]

The findings revealed unexpected aspects of language processing in the brain. Some previous experiments using non-invasive tools have suggested that different languages activate distinct parts of the brain. But the authors’ examination of the signals recorded directly in the cortex found that “a lot of the activity for both Spanish and English was actually from the same area”, Silva says.

Furthermore, Pancho’s neurological responses didn’t seem to differ much from those of children who grew up bilingual, even though he was in his thirties when he learnt English — in contrast to the results of previous studies. Together, these findings suggest to Silva that different languages share at least some neurological features, and that they might be generalizable to other people.

[…]

Source: Bilingual Brain-Reading Implant Decodes Spanish and English | Scientific American

Device Decodes ‘Internal Speech’ in the Brain

Scientists have developed brain implants that can decode internal speech — identifying words that two people spoke in their minds without moving their lips or making a sound.

Although the technology is at an early stage — it was shown to work with only a handful of words, and not phrases or sentences — it could have clinical applications in future.

Similar brain–computer interface (BCI) devices, which translate signals in the brain into text, have reached speeds of 62–78 words per minute for some people. But these technologies were trained to interpret speech that is at least partly vocalized or mimed.

The latest study — published in Nature Human Behaviour on 13 May — is the first to decode words spoken entirely internally, by recording signals from individual neurons in the brain in real time.

[…]

The researchers implanted arrays of tiny electrodes in the brains of two people with spinal-cord injuries. They placed the devices in the supramarginal gyrus (SMG), a region of the brain that had not been previously explored in speech-decoding BCIs.

Figuring out the best places in the brain to implant BCIs is one of the key challenges for decoding internal speech

[…]

Two weeks after the participants were implanted with microelectrode arrays in their left SMG, the researchers began collecting data. They trained the BCI on six words (battlefield, cowboy, python, spoon, swimming and telephone) and two meaningless pseudowords (nifzig and bindip). “The point here was to see if meaning was necessary for representation,” says Wandelt.

Over three days, the team asked each participant to imagine speaking the words shown on a screen and repeated this process several times for each word. The BCI then combined measurements of the participants’ brain activity with a computer model to predict their internal speech in real time.
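The excerpt does not describe the decoder's internals, so the following is a purely illustrative sketch (not the study's model): a nearest-centroid classifier that averages the activity recorded over repeated imagined trials of each word into a template, then assigns a new trial to the closest template. The channel count and synthetic data are assumptions.

```python
# Toy nearest-centroid word decoder over simulated firing-rate vectors.
import numpy as np

WORDS = ["battlefield", "cowboy", "python", "spoon",
         "swimming", "telephone", "nifzig", "bindip"]   # training words from the study
N_CHANNELS = 64                                         # assumed number of recorded units
rng = np.random.default_rng(0)

# Simulate data: each word gets its own underlying mean firing pattern plus noise.
true_patterns = {w: rng.normal(size=N_CHANNELS) for w in WORDS}
def simulate_trial(word: str) -> np.ndarray:
    return true_patterns[word] + rng.normal(scale=0.5, size=N_CHANNELS)

# "Train": average 10 trials per word into a template (centroid).
templates = {w: np.mean([simulate_trial(w) for _ in range(10)], axis=0) for w in WORDS}

def decode(trial: np.ndarray) -> str:
    """Return the word whose template is closest (Euclidean distance) to the trial."""
    return min(templates, key=lambda w: float(np.linalg.norm(templates[w] - trial)))

# Evaluate on fresh simulated trials.
hits = sum(decode(simulate_trial(w)) == w for w in WORDS for _ in range(20))
print(f"accuracy on synthetic data: {hits / (len(WORDS) * 20):.0%}")
```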

For the first participant, the BCI captured distinct neural signals for all of the words and was able to identify them with 79% accuracy. But the decoding accuracy was only 23% for the second participant, who showed preferential representation for ‘spoon’ and ‘swimming’ and had fewer neurons that were uniquely active for each word. “It’s possible that different sub-areas in the supramarginal gyrus are more, or less, involved in the process,” says Wandelt.

Christian Herff, a computational neuroscientist at Maastricht University in the Netherlands, thinks these results might highlight the different ways in which people process internal speech. “Previous studies showed that there are different abilities in performing the imagined task and also different BCI control abilities,” adds Marchesotti.

The authors also found that 82–85% of neurons that were active during internal speech were also active when the participants vocalized the words. But some neurons were active only during internal speech, or responded differently to specific words in the different tasks.

[…]

Source: Device Decodes ‘Internal Speech’ in the Brain | Scientific American

Gene therapy relieves back pain, repairs damaged disc in mice

Disc-related back pain may one day meet its therapeutic match: gene therapy delivered by naturally derived nanocarriers that, a new study shows, repairs damaged discs in the spine and lowers pain symptoms in mice.

Scientists engineered nanocarriers using mouse connective-tissue cells called fibroblasts as a model of skin cells and loaded them with genetic material for a protein key to tissue development. The team injected a solution containing the carriers into damaged discs in mice at the same time the back injury occurred.

Assessing outcomes over 12 weeks, researchers found through imaging, tissue analysis, and mechanical and behavioral tests that the gene therapy restored structural integrity and function to degenerated discs and reduced signs of back pain in the animals.

[…]

“This can be used at the same time as surgery to actually boost healing of the disc itself,” said co-senior author Natalia Higuita-Castro, associate professor of biomedical engineering and neurological surgery at Ohio State. “Your own cells are actually doing the work and going back to a healthy state.”

The study was published online recently in the journal Biomaterials.

An estimated 40% of low-back pain cases are attributed to degeneration of the cushiony intervertebral discs that absorb shocks and provide flexibility to the spine, previous research suggests. And while trimming away bulging tissue from a herniated disc during surgery typically reduces pain, it does not repair the disc itself — which continues to degenerate with the passage of time.

[…]

This new study builds upon previous work in Higuita-Castro’s lab, which reported a year ago that nanocarriers called extracellular vesicles loaded with anti-inflammatory cargo curbed tissue injury in damaged mouse lungs. The engineered carriers are replicas of the natural extracellular vesicles that circulate in humans’ bloodstream and biological fluids, carrying messages between cells.

To create the vesicles, scientists apply an electrical charge to a donor cell to transiently open holes in its membrane, and deliver externally obtained DNA that converts into a specific protein, as well as molecules that prompt the manufacture of even more of that functional protein.

In this study, the cargo consisted of material to produce a “pioneer” transcription factor protein called FOXF1, which is important in the development and growth of tissues.

[…]

Compared to controls, the discs in mice receiving gene therapy showed a host of improvements: The tissue plumped back up and became more stable through production of a protein that holds water and other matrix proteins, all helping promote range of motion, load bearing and flexibility in the spine. Behavioral tests showed the therapy decreased symptoms of pain in mice, though these responses differed by sex — males and females showed varying levels of susceptibility to pain based on the types of movement being assessed.

The findings speak to the value of using universal adult donor cells to create these extracellular vesicle therapies, the researchers said, because they don’t carry the risk of generating an immune response. The gene therapy also, ideally, would function as a one-time treatment — a therapeutic gift that keeps on giving.

[…]

There are more experiments to come, testing the effects of other transcription factors that contribute to intervertebral disc development. And because this first study used young adult mice, the team also plans to test the therapy’s effects in older animals that model age-related degeneration and, eventually, in clinical trials for larger animals known to develop back problems.

[…]

Story Source:

Materials provided by Ohio State University. Original written by Emily Caldwell.


Journal Reference:

  1. Shirley N. Tang, Ana I. Salazar-Puerta, Mary K. Heimann, Kyle Kuchynsky, María A. Rincon-Benavides, Mia Kordowski, Gilian Gunsch, Lucy Bodine, Khady Diop, Connor Gantt, Safdar Khan, Anna Bratasz, Olga Kokiko-Cochran, Julie Fitzgerald, Damien M. Laudier, Judith A. Hoyland, Benjamin A. Walter, Natalia Higuita-Castro, Devina Purmessur. Engineered extracellular vesicle-based gene therapy for the treatment of discogenic back pain. Biomaterials, 2024; 308: 122562 DOI: 10.1016/j.biomaterials.2024.122562

Source: Gene therapy relieves back pain, repairs damaged disc in mice | ScienceDaily

Flood of Fake Science Forces Multiple Journal Closures

Fake studies have flooded the publishers of top scientific journals, leading to thousands of retractions and millions of dollars in lost revenue. The biggest hit has come to Wiley, a 217-year-old publisher based in Hoboken, N.J., which Tuesday will announce that it is closing 19 journals, some of which were infected by large-scale research fraud.
In the past two years, Wiley has retracted more than 11,300 papers that appeared compromised, according to a spokesperson, and closed four journals. It isn’t alone: At least two other publishers have retracted hundreds of suspect papers each. Several others have pulled smaller clusters of bad papers.
Although this large-scale fraud represents a small percentage of submissions to journals, it threatens the legitimacy of the nearly $30 billion academic publishing industry and the credibility of science as a whole.
The discovery of nearly 900 fraudulent papers in 2022 at IOP Publishing, a physical sciences publisher, was a turning point for the nonprofit. “That really crystallized for us, everybody internally, everybody involved with the business,” said Kim Eggleton, head of peer review and research integrity at the publisher. “This is a real threat.”

The sources of the fake science are “paper mills”—businesses or individuals that, for a price, will list a scientist as an author of a wholly or partially fabricated paper. The mill then submits the work, generally avoiding the most prestigious journals in favor of publications such as one-off special editions that might not undergo as thorough a review and where they have a better chance of getting bogus work published.
World-over, scientists are under pressure to publish in peer-reviewed journals—sometimes to win grants, other times as conditions for promotions. Researchers say this motivates people to cheat the system. Many journals charge a fee to authors to publish in them.
Problematic papers typically appear in batches of up to hundreds or even thousands within a publisher or journal. A signature move is to submit the same paper to multiple journals at once to maximize the chance of getting in, according to an industry trade group now monitoring the problem. Publishers say some fraudsters have even posed as academics to secure spots as guest editors for special issues and organizers of conferences, and then control the papers that are published there.
“The paper mill will find the weakest link and then exploit it mercilessly until someone notices,” said Nick Wise, an engineer who has documented paper-mill advertisements on social media and posts examples regularly on X under the handle @author_for_sale.
The journal Science flagged the practice of buying authorship in 2013. The website Retraction Watch and independent researchers have since tracked paper mills through their advertisements and websites. Researchers say they have found them in multiple countries including Russia, Iran, Latvia, China and India. The mills solicit clients on social channels such as Telegram or Facebook, where they advertise the titles of studies they intend to submit, their fee and sometimes the journal they aim to infiltrate. Wise said he has seen costs ranging from as little as $50 to as much as $8,500.
When publishers become alert to the work, mills change their tactics.
[…]
For Wiley, which publishes more than 2,000 journals, the problem came to light two years ago, shortly after it paid nearly $300 million for Hindawi, a company founded in Egypt in 1997 that included about 250 journals. In 2022, a little more than a year after the purchase, scientists online noticed peculiarities in dozens of studies from journals in the Hindawi family.
Scientific papers typically include citations that acknowledge work that informed the research, but the suspect papers included lists of irrelevant references. Multiple papers included technical-sounding passages inserted midway through, what Bishop called an “AI gobbledygook sandwich.” Nearly identical contact emails in one cluster of studies were all registered to a university in China where few if any of the authors were based. It appeared that all came from the same source.
[…]
The extent of the paper mill problem has been exposed by members of the scientific community who on their own have collected patterns in faked papers to recognize this fraud at scale and developed tools to help surface the work.
One of those tools, the “Problematic Paper Screener,” run by Guillaume Cabanac, a computer-science researcher who studies scholarly publishing at the Université Toulouse III-Paul Sabatier in France, scans the breadth of the published literature, some 130 million papers, looking for a range of red flags including “tortured phrases.”
Cabanac and his colleagues realized that researchers who wanted to avoid plagiarism detectors had swapped out key scientific terms for synonyms from automatic text generators, leading to comically misfit phrases. “Breast cancer” became “bosom peril”; “fluid dynamics” became “gooey stream”; “artificial intelligence” became “counterfeit consciousness.” The tool is publicly available.
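The screener itself isn't described in detail here, but the core idea of a tortured-phrase check is pattern matching against a curated list of known synonym swaps. As a hedged, minimal sketch (the phrase list, function names and example text below are illustrative assumptions, not Cabanac's actual code or database):

```python
# Minimal sketch of a "tortured phrase" screen: flag passages that contain
# known synonym-swapped versions of standard scientific terms.
# The phrase list below is a tiny illustrative sample, not the real database.

TORTURED_PHRASES = {
    "bosom peril": "breast cancer",
    "gooey stream": "fluid dynamics",
    "counterfeit consciousness": "artificial intelligence",
}

def flag_tortured_phrases(text: str) -> list[tuple[str, str]]:
    """Return (tortured phrase, expected term) pairs found in the text."""
    lowered = text.lower()
    return [(phrase, expected)
            for phrase, expected in TORTURED_PHRASES.items()
            if phrase in lowered]

if __name__ == "__main__":
    abstract = ("We apply counterfeit consciousness to model the gooey stream "
                "around an airfoil.")
    for phrase, expected in flag_tortured_phrases(abstract):
        print(f"Suspect phrase '{phrase}' (likely means '{expected}')")
```
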
Another data scientist, Adam Day, built “The Papermill Alarm,” a tool that uses large language models to spot signs of trouble in an article’s metadata, such as multiple suspect papers citing each other or using similar templates and simply altering minor experimental details. Publishers can pay to use the tool.
[…]
The incursion of paper mills has also forced competing publishers to collaborate. A tool launched through STM, the trade group of publishers, now checks whether new submissions were submitted to multiple journals at once, according to Joris van Rossum, product director who leads the “STM Integrity Hub,” launched in part to beat back paper mills. Last fall, STM added Day’s “The Papermill Alarm” to its suite of tools.
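STM has not published the hub's matching logic; as a rough sketch under stated assumptions, one simple way to flag a manuscript submitted to several journals at once is to normalize titles and compare each new submission against a shared index. The journal names, threshold and similarity measure below are illustrative only:

```python
# Illustrative sketch of a cross-publisher duplicate-submission check:
# normalize titles, then compare a new submission against a shared index
# using a simple similarity ratio. Real systems likely use richer features.
from difflib import SequenceMatcher
import re

def normalize(title: str) -> str:
    """Lowercase a title and strip punctuation and stray characters."""
    return re.sub(r"[^a-z0-9 ]", "", title.lower()).strip()

def find_duplicates(new_title: str, index: dict[str, str],
                    threshold: float = 0.9) -> list[tuple[str, float]]:
    """Return (journal, similarity) pairs whose indexed title closely matches."""
    hits = []
    for journal, known_title in index.items():
        score = SequenceMatcher(None, normalize(new_title),
                                normalize(known_title)).ratio()
        if score >= threshold:
            hits.append((journal, score))
    return hits

if __name__ == "__main__":
    shared_index = {"Journal A": "Deep learning for gooey stream simulation"}
    print(find_duplicates("Deep Learning for Gooey Stream Simulation!", shared_index))
```
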
While publishers are fighting back with technology, paper mills are using the same kind of tools to stay ahead.
“Generative AI has just handed them a winning lottery ticket,” Eggleton of IOP Publishing said. “They can do it really cheap, at scale, and the detection methods are not where we need them to be. I can only see that challenge increasing.”

Source: Flood of Fake Science Forces Multiple Journal Closures – WSJ

Long covid linked to signs of ongoing inflammatory responses in blood

People who develop long covid after being hospitalised with severe covid-19 have raised levels of many inflammatory immune molecules compared with those who recovered fully after such a hospitalisation, according to a study of nearly 700 people.

The findings show that long covid has a real biological basis, says team member Peter Openshaw at Imperial College London. “People are not imagining it,” he says. “It’s genuinely happening to them.”

[…]

The study by Liew and her colleagues involved measuring the levels of 368 immune molecules in the blood of 659 people who were hospitalised with covid-19, mostly early on in the pandemic. The 426 people who were still reporting symptoms more than three months later were compared with the 233 who reported being fully recovered.
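
The paper's statistical pipeline is more involved than this, but a minimal sketch of the kind of group comparison described — testing each of hundreds of blood molecules between the symptomatic and recovered groups, then correcting for multiple testing — might look as follows. All data below are synthetic and the analysis choices are assumptions, not the authors' methods:

```python
# Hedged sketch: compare each measured immune molecule between people with
# ongoing symptoms (n=426) and fully recovered controls (n=233), then apply
# a Benjamini-Hochberg correction. Synthetic data, illustrative only.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
n_molecules = 368
long_covid = rng.lognormal(mean=0.1, sigma=1.0, size=(426, n_molecules))
recovered = rng.lognormal(mean=0.0, sigma=1.0, size=(233, n_molecules))

pvals = np.array([mannwhitneyu(long_covid[:, j], recovered[:, j]).pvalue
                  for j in range(n_molecules)])

# Benjamini-Hochberg step-up procedure at a 5% false discovery rate
order = np.argsort(pvals)
ranked = pvals[order]
thresholds = 0.05 * np.arange(1, n_molecules + 1) / n_molecules
passing = ranked <= thresholds
n_hits = passing.nonzero()[0].max() + 1 if passing.any() else 0
print(f"{n_hits} molecules differ between groups after FDR correction (synthetic data)")
```
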

The study found that the patterns of immune activation reflected the main kinds of symptoms people with long covid reported. The five main symptom types were fatigue; cognitive impairment; anxiety and depression; cardiorespiratory symptoms; and gastrointestinal symptoms.

For instance, people with gastrointestinal symptoms had higher blood levels of SCG3, a signalling protein that is also elevated in the faeces of people with irritable bowel syndrome.

The findings won’t help with diagnosing whether people have long covid or not, says team member Chris Brightling at the University of Leicester in the UK. But once the condition has been diagnosed, testing for these molecules could help reveal what kind of long covid people have, and thus what kind of interventions might help, he says.

A study last year estimated that 36 million people in Europe had or have long covid. “Many people are still suffering,” says Brightling.

[…]

Journal reference:

Nature Immunology DOI: 10.1038/s41590-024-01778-0

Source: Long covid linked to signs of ongoing inflammatory responses in blood | New Scientist

Rapid biodegradation of microplastics generated from bio-based thermoplastic polyurethane in compost

Accumulation of microplastics in the natural environment is ultimately due to the chemical nature of widely used petroleum-based plastic polymers, which typically are inaccessible to biological processing. One way to mitigate this crisis is adoption of plastics that biodegrade if released into natural environments. In this work, we generated microplastic particles from a bio-based, biodegradable thermoplastic polyurethane (TPU-FC1) and demonstrated their rapid biodegradation via direct visualization and respirometry. Furthermore, we isolated multiple bacterial strains capable of using TPU-FC1 as a sole carbon source and characterized their depolymerization products. To visualize biodegradation of TPU materials as real-world products, we generated TPU-coated cotton fabric and an injection molded phone case and documented biodegradation by direct visualization and scanning electron microscopy (SEM), both of which indicated clear structural degradation of these materials and significant biofilm formation.

Source: Rapid biodegradation of microplastics generated from bio-based thermoplastic polyurethane | Scientific Reports

Conclusion

In this work, particle count and respirometry experiments demonstrated that microplastic particles from a bio-based thermoplastic polyurethane can rapidly biodegrade and are therefore only transiently present in the environment. In contrast, microplastic particles from a widely used commercial thermoplastic, ethyl vinyl acetate, persisted in the environment and showed no significant signs of biodegradation over the course of this experiment. Bacteria capable of utilizing TPU-FC1 as a carbon source were isolated, and depolymerization of the material was confirmed by the early accumulation of monomers derived from the original polymer, which are metabolized by microbes in short order. Finally, we demonstrated that prototype products made from these materials biodegrade under home compost conditions. The generation of microplastics is an unavoidable consequence of plastic usage, and mitigating the persistence of these particles through the adoption of biodegradable material alternatives is a viable option for a future green circular economy.
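
Respirometry quantifies biodegradation by comparing the CO2 evolved from the test material with the theoretical CO2 it could release if all of its carbon were mineralized. A hedged sketch of that standard calculation follows; the carbon fraction and CO2 masses are illustrative values, not figures from the paper:

```python
# Hedged sketch of the respirometry calculation used in compost biodegradation
# tests: percent biodegradation is the net CO2 evolved from the test material
# divided by its theoretical CO2. All numbers below are illustrative.

def theoretical_co2_g(sample_mass_g: float, carbon_fraction: float) -> float:
    """Theoretical CO2 (g) if all carbon in the sample were mineralized."""
    return sample_mass_g * carbon_fraction * (44.0 / 12.0)  # molar mass ratio CO2/C

def percent_biodegradation(co2_test_g: float, co2_blank_g: float,
                           sample_mass_g: float, carbon_fraction: float) -> float:
    """Net CO2 from the sample relative to its theoretical CO2, in percent."""
    net = co2_test_g - co2_blank_g
    return 100.0 * net / theoretical_co2_g(sample_mass_g, carbon_fraction)

# Example: 1 g of TPU microplastic assumed to be ~60% carbon by mass
print(round(percent_biodegradation(co2_test_g=1.5, co2_blank_g=0.4,
                                   sample_mass_g=1.0, carbon_fraction=0.60), 1))
```
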

The flow of air over Airfoils, or how planes fly

In this article we’ll investigate what makes airplanes fly by looking at the forces generated by the flow of air around the aircraft’s wings. More specifically, we’ll focus on the cross section of those wings to reveal the shape of an airfoil – you can see it presented in yellow below:

[Figure: airfoil cross-section with air flow, pressure field and a selected angle of attack]

We’ll find out how the shape and the orientation of the airfoil helps airplanes remain airborne. We’ll also learn about the behavior and properties of air and other flowing matter.

Source: Airfoil – Bartosz Ciechanowski

The article goes very deeply into how air flow works and is modelled, how velocity and pressure fields interact, the shape of an airfoil, the boundary layer and the angle of attack. It requires a bit of scrolling before you get to the planes, but it's mesmerising to play with the sliders.
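
For a feel of the numbers involved, here is a minimal sketch relating angle of attack to lift using thin-airfoil theory and the standard lift equation. This is a textbook small-angle approximation for an uncambered airfoil, not the article's interactive model, and the example values are arbitrary:

```python
# Sketch of lift estimation: thin-airfoil theory gives C_L ≈ 2*pi*alpha for
# small angles of attack, and the lift equation is L = 0.5*rho*v^2*S*C_L.
import math

def lift_newtons(angle_of_attack_deg: float, airspeed_m_s: float,
                 wing_area_m2: float, air_density_kg_m3: float = 1.225) -> float:
    """Approximate lift for a thin, uncambered airfoil at a small angle of attack."""
    alpha = math.radians(angle_of_attack_deg)
    cl = 2.0 * math.pi * alpha                      # thin-airfoil lift coefficient
    return 0.5 * air_density_kg_m3 * airspeed_m_s**2 * wing_area_m2 * cl

# Example: a 16 m^2 wing at 5 degrees angle of attack and 60 m/s
print(f"{lift_newtons(5.0, 60.0, 16.0):,.0f} N")
```
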

Dog DNA testing company identifies human as dog

On Wednesday, WBZ News reported that its investigations team had received dog breed results from the company DNA My Dog after one of its reporters sent in a swab sample – from her own cheek.

According to the results from the Toronto-based company, WBZ News reporter Christina Hager is 40% Alaskan malamute, 35% shar-pei and 25% labrador.

Hager also sent her samples to two other pet genetic testing companies. The Melbourne, Australia- and Florida-based company Orivet reported that the sample “failed to provide the data necessary to perform the breed ID analysis”. Meanwhile, Washington-based company Wisdom Panel said that the sample “didn’t provide … enough DNA to produce a reliable result”.

WBZ News’ latest report comes after its investigations team sent in a sample from New Hampshire pet owner Michelle Leininger’s own cheek to DNA My Dog last year. In turn, the results declared Leininger 40% border collie, 32% cane corso and 28% bulldog.

[…]

Speaking to WBZ News last year following Leininger’s results, Lisa Moses, a Harvard Medical School veterinarian and bioethicist, said: “I think that is a red flag for sure … A company should know if they’ve in any basic way analyzed a dog’s DNA, that that is not a dog.”

[…]

Source: Pet DNA testing company in doghouse after identifying human as canine | Dogs | The Guardian

COVID-19 Leaves Its Mark on the Brain. Significant Drops in IQ Scores Are Noted.

From the very early days of the pandemic, brain fog emerged as a significant health condition that many experience after COVID-19.

Brain fog is a colloquial term that describes a state of mental sluggishness or lack of clarity and haziness that makes it difficult to concentrate, remember things and think clearly.

Fast-forward four years and there is now abundant evidence that being infected with SARS-CoV-2 – the virus that causes COVID-19 – can affect brain health in many ways.

In addition to brain fog, COVID-19 can lead to an array of problems, including headaches, seizure disorders, strokes, sleep problems, and tingling and paralysis of the nerves, as well as several mental health disorders.

A large and growing body of evidence amassed throughout the pandemic details the many ways that COVID-19 leaves an indelible mark on the brain. But the specific pathways by which the virus does so are still being elucidated, and curative treatments are nonexistent.

Now, two new studies published in the New England Journal of Medicine shed further light on the profound toll of COVID-19 on cognitive health.

[…]

Most recently, a new study published in the New England Journal of Medicine assessed cognitive abilities such as memory, planning and spatial reasoning in nearly 113,000 people who had previously had COVID-19. The researchers found that those who had been infected had significant deficits in memory and executive task performance.

[…]

In the same study, those who had mild and resolved COVID-19 showed cognitive decline equivalent to a three-point loss of IQ. In comparison, those with unresolved persistent symptoms, such as people with persistent shortness of breath or fatigue, had a six-point loss in IQ. Those who had been admitted to the intensive care unit for COVID-19 had a nine-point loss in IQ. Reinfection with the virus contributed an additional two-point loss in IQ, as compared with no reinfection.
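
For context, IQ scores are conventionally normed to a mean of 100 with a standard deviation of 15, so the reported losses can be re-expressed in standard-deviation units. A simple arithmetic sketch (group labels paraphrased from the figures above):

```python
# Convert the reported IQ-point losses into standard-deviation units,
# using the conventional IQ norm of SD = 15.
IQ_SD = 15.0
reported_losses = {"mild, resolved COVID-19": 3, "persistent symptoms": 6,
                   "ICU admission": 9, "reinfection (additional)": 2}
for group, points in reported_losses.items():
    print(f"{group}: {points} IQ points ≈ {points / IQ_SD:.2f} SD")
```
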

[…]

Another study in the same issue of the New England Journal of Medicine involved more than 100,000 Norwegians between March 2020 and April 2023. It documented worse memory function at several time points up to 36 months following a positive SARS-CoV-2 test.

Taken together, these studies show that COVID-19 poses a serious risk to brain health, even in mild cases, and the effects are now being revealed at the population level.

A recent analysis of the U.S. Current Population Survey showed that after the start of the COVID-19 pandemic, roughly one million more working-age Americans reported having “serious difficulty” remembering, concentrating or making decisions than at any point in the preceding 15 years. Most disconcertingly, this increase was driven mostly by younger adults between the ages of 18 and 44.

Data from the European Union shows a similar trend – in 2022, 15 percent of people in the EU reported memory and concentration issues.

[…]

Source: COVID-19 Leaves Its Mark on the Brain. Significant Drops in IQ Scores Are Noted. | Scientific American

Universal Antivenom for Snake Bites Might Soon Be a Reality

[…]

a team of scientists says they’ve created a lab-made antibody geared to counteract toxic bites from a wide variety of snakes. In early tests with mice, the uber-antivenom appeared to work as intended.

Snake antivenom is typically derived from the antibodies of horses or other animals that produce a strong immune response to snake toxins. These donated antibodies can be highly effective at preventing serious injury and death from a snakebite, but they come with serious limitations.

The chemical makeup of one species’s toxin can vary significantly from another’s, for instance, so antibodies to one specific toxin provide little protection against others. Manufacturers can try to work around this by inoculating animals with several toxins at once, but this method has drawbacks, such as needing a higher dose of antivenom since only some of the antibodies will have any effect.

[…]

Though snake toxins are remarkably complex and different from one another, even within the same class, the team managed to find sections of these toxins that were pretty similar across different species.

The scientists produced a variety of 3FTx (three-finger toxin) variants in the lab and then screened them against a database of more than 50 billion synthetic antibodies, looking for ones that could potentially neutralize several toxins at once. After a few rounds of selection, they ultimately identified one antibody, called 95Mat5, that seemed to broadly neutralize at least five different 3FTx variants. They then put the antibody to a real-life test, finding that it fully protected mice from dying from the toxins of the many-banded krait, Indian spitting cobra and black mamba, in some cases better than conventional antivenom; it also offered some protection against venom from the king cobra.
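
The screening itself was experimental, but the selection logic — keep only candidates that clear a neutralization threshold against every toxin variant tested — can be sketched in a few lines. The antibody names, variant labels and scores below are invented for illustration and are not data from the study:

```python
# Hedged sketch of broad-neutralization selection: given per-antibody scores
# against several toxin variants, keep antibodies that clear a threshold for
# every variant tested. All names and scores are illustrative.

def broadly_neutralizing(scores: dict[str, dict[str, float]],
                         threshold: float = 0.8) -> list[str]:
    """Return antibody names whose score meets the threshold for all variants."""
    return [ab for ab, per_variant in scores.items()
            if all(v >= threshold for v in per_variant.values())]

candidates = {
    "ab_001": {"3FTx_v1": 0.95, "3FTx_v2": 0.91, "3FTx_v3": 0.88},
    "ab_002": {"3FTx_v1": 0.99, "3FTx_v2": 0.42, "3FTx_v3": 0.90},
}
print(broadly_neutralizing(candidates))  # only ab_001 clears every variant
```
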

[…]

As seen with the king cobra, the 95Mat5 antibody alone may not work against every elapid snake. And it wouldn’t protect against bites from viper snakes, the other major family of venomous snakes. But the team’s process of identifying broadly neutralizing antibodies—adapted from similar research on the HIV virus—could be used to find other promising antivenom candidates.

[…]

Source: Universal Antivenom for Snake Bites Might Soon Be a Reality

New evidence changes key ideas about Earth’s climate history – it wasn’t that hot

A new study published in Science resolves a long-standing scientific debate, and it stands to completely change the way we think about Earth’s climate evolution.

The research debunks the idea that Earth’s surface (across land and sea) experienced extremely hot temperatures over the last two billion years. Instead, it shows that Earth has had a relatively stable and mild climate.

Temperature is an important control over the chemical reactions that govern life and our environment. This ground-breaking work will have significant implications for scientists working on questions surrounding biological and climate evolution.

[…]

In the work, Dr. Isson and Ph.D. student Sofia Rauzi adopted novel methods to illuminate a history of Earth’s surface temperature.

They utilized five unique data records derived from different rock types including shale, iron oxide, carbonate, silica, and phosphate. Collectively, these ‘geochemical’ records comprise over 30,000 data points that span Earth’s multi-billion-year history.

To date, the study is the most comprehensive collation and interpretation of one of the oldest geochemical records—oxygen isotopes. Oxygen isotopes are different forms of the element oxygen. It is also the first study to use all five existing records to chart a consistent ‘map’ of temperature across an enormous portion of geological time.

“By pairing oxygen isotope records from different minerals, we have been able to reconcile a unified history of temperature on Earth that is consistent across all five records, and the oxygen isotopic composition of seawater,” says Dr. Isson.
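
The study's multi-mineral calibration is its own contribution, but the underlying principle of oxygen-isotope paleothermometry can be illustrated with a classic carbonate calibration, in which formation temperature depends on the difference between the mineral's and seawater's δ18O. A hedged sketch follows; the constants are textbook Shackleton-style values, not the authors' calibration, and the inputs are illustrative:

```python
# Sketch of the general oxygen-isotope paleothermometry principle, using a
# classic carbonate calibration:
#   T(°C) = 16.9 - 4.38*(d_carbonate - d_water) + 0.10*(d_carbonate - d_water)^2
# This is not the study's multi-mineral method; values are illustrative only.

def carbonate_temperature_c(d18o_carbonate: float, d18o_seawater: float) -> float:
    """Approximate formation temperature from carbonate and seawater d18O (per mil)."""
    d = d18o_carbonate - d18o_seawater
    return 16.9 - 4.38 * d + 0.10 * d * d

print(round(carbonate_temperature_c(d18o_carbonate=1.5, d18o_seawater=0.0), 1))  # ~10.6 °C
```
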

The study disproves ideas that early oceans were hot, with temperatures greater than 60°C prior to approximately half a billion years ago, before the rise of animals and land plants. The data instead indicate relatively stable and temperate early-ocean and land temperatures of around 10°C, which upends current thinking about the environment in which complex life evolved.

The work produces the first-ever record of the evolution of terrestrial (land-based) and marine clay abundance throughout Earth history. This is the first direct evidence for an intimate link between the evolution of plants, marine creatures that make skeletons and shells out of silica (siliceous life forms), clay formation, and climate.

“The results suggest that the process of clay formation may have played a key role in regulating climate on early Earth and sustaining the temperate conditions that allowed for the evolution and proliferation of life on Earth,” says Dr. Isson.

[…]

Source: New evidence changes key ideas about Earth’s climate history