Human DNA can be pulled from the air: A Boon For Science, While Terrifying Others

Environmental DNA sampling is nothing new. Rather than having to spot or catch an animal, researchers can sample the DNA from the traces it leaves behind, gaining clues about its genetic diversity, its lineage (e.g. via mitochondrial DNA) and the population’s health. What caught University of Florida (UoF) researchers by surprise while they were using environmental DNA sampling to study endangered sea turtles was just how much human DNA they found in their samples. This led them to perform a study on the human DNA they sampled in this way, with intriguing implications.

Ever since genetic sequencing became possible there have been many breakthroughs that have made it more precise, cheaper and more versatile. The argument these UoF researchers make in their paper in Nature Ecology & Evolution is that sampling human environmental DNA (eDNA) has a lot of potential for studying populations, much as is done today with wastewater sampling, only more universally. This could have great benefits for studying human populations, much as we already monitor other animal species using their eDNA and the similar materials they discard every day as part of normal biological function.

The researchers were able to detect various genetic issues in the human eDNA they collected, demonstrating its viability as a population health monitoring tool. The less exciting fallout of their findings was just how hard it is to prevent contamination of samples with human DNA, which could affect other studies. Meanwhile the big DNA elephant in the room is individual-level tracking, which is something that’s incredibly exciting to researchers who are monitoring wild animal populations. Unlike those animals, however, Homo sapiens are unique in that they’d object to such individual-level eDNA-based monitoring.

What the full implications of such new tools will be is hard to say, but they’re just one of the inevitable results as our genetic sequencing methods improve and humans keep shedding their DNA everywhere.

Source: Human DNA Is Everywhere: A Boon For Science, While Terrifying Others | Hackaday

The ‘invisible’ cellulose coatings that mitigate surface transmission of pathogens (kills covid on door handles)

Research has shown that a thin cellulose film can inactivate the SARS-CoV-2 virus within minutes, inhibit the growth of bacteria including E. coli, and mitigate contact transfer of pathogens.

The coating consists of a thin film of cellulose fiber that is invisible to the naked eye, and is abrasion-resistant under dry conditions, making it suitable for use on high-traffic objects such as door handles and handrails.

The coating was developed by scientific teams from the University of Birmingham, Cambridge University, and FiberLean Technologies, who worked on a project to formulate treatments for glass, metal or laminate surfaces that would deliver long-lasting protection against the COVID-19 virus.

[…]

a coating made from micro-fibrillated cellulose (MFC)

[…]

The COVID-19 virus is known to remain active for several days on surfaces such as plastic and stainless steel, but for only a few hours on newspaper.

[…]

The researchers found that the porous nature of the film plays a significant role: it accelerates the evaporation of liquid and introduces an imbalanced osmotic pressure across the bacterial membrane.

They then tested whether the coating could inhibit surface transmission of SARS-CoV-2. Here they found a three-fold reduction of infectivity when droplets containing the virus were left on the coating for 5 minutes, and, after 10 minutes, the infectivity fell to zero.

[…]

Professor Zhang commented, “The risk of surface transmission, as opposed to aerosol transmission, comes from large droplets which remain infective if they land on hard surfaces, where they can be transferred by touch. This surface coating technology uses sustainable materials and could potentially be used in conjunction with other antimicrobial actives to deliver a long-lasting and slow-release antimicrobial effect.”

The researchers confirmed the stability of the coating by mechanical scraping tests, in which the coating showed no noticeable damage when dry but was easily removed from the surface when wetted, making it convenient and suitable for daily cleaning and disinfection practices.

The paper is published in the journal ACS Applied Materials & Interfaces.

More information: Shaojun Qi et al, Porous Cellulose Thin Films as Sustainable and Effective Antimicrobial Surface Coatings, ACS Applied Materials & Interfaces (2023). DOI: 10.1021/acsami.2c23251

Source: The ‘invisible’ cellulose coatings that mitigate surface transmission of pathogens

LLM emergent behavior written off as rubbish – small models work fine but are measured poorly

[…] As defined in academic studies, “emergent” abilities refer to “abilities that are not present in smaller-scale models, but which are present in large-scale models,” as one such paper puts it. In other words, immaculate injection: increasing the size of a model infuses it with some amazing ability not previously present.

[…]

those emergent abilities in AI models are a load of rubbish, say computer scientists at Stanford.

Flouting Betteridge’s Law of Headlines, Rylan Schaeffer, Brando Miranda, and Sanmi Koyejo answer the question posed by their paper, Are Emergent Abilities of Large Language Models a Mirage?, in the affirmative.

[…]

When industry types talk about emergent abilities, they’re referring to capabilities that seemingly come out of nowhere for these models, as if something was being awakened within them as they grow in size. The thinking is that when these LLMs reach a certain scale, the ability to summarize text, translate languages, or perform complex calculations, for example, can emerge unexpectedly.

[…]

Stanford’s Schaeffer, Miranda, and Koyejo propose that when researchers are putting models through their paces and see unpredictable responses, it’s really due to poorly chosen methods of measurement rather than a glimmer of actual intelligence.

Most (92 percent) of the unexpected behavior detected, the team observed, was found in tasks evaluated via BIG-Bench, a crowd-sourced set of more than 200 benchmarks for evaluating large language models.

One test within BIG-Bench highlighted by the university trio is Exact String Match. As the name suggests, this checks a model’s output to see if it exactly matches a specific string without giving any weight to nearly right answers. The documentation even warns:

The EXACT_STRING_MATCH metric can lead to apparent sudden breakthroughs because of its inherent all-or-nothing discontinuity. It only gives credit for a model output that exactly matches the target string. Examining other metrics, such as BLEU, BLEURT, or ROUGE, can reveal more gradual progress.

The issue with using such pass-or-fail tests to infer emergent behavior, the researchers say, is that nonlinear output and a lack of data on smaller models create the illusion of new skills emerging in larger ones. Simply put, a smaller model may be very nearly right in its answer to a question, but because it is evaluated using the binary Exact String Match, it will be marked wrong, whereas a larger model will hit the target exactly and get full credit.
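The effect is easy to reproduce numerically. In this illustrative sketch (my own toy model, not code from the Stanford paper), per-token accuracy is assumed to improve smoothly with scale; an all-or-nothing metric on a multi-token answer then shows a sharp, "emergent"-looking jump, while a partial-credit metric shows the underlying gradual progress:

```python
# Toy model of the mirage argument (illustrative assumption, not the paper's code):
# if a model must get every token of a 10-token answer right, a smoothly
# improving per-token accuracy p yields an exact-match score of p**10,
# which stays near zero for a long time and then climbs steeply.

def exact_match_score(per_token_accuracy: float, answer_length: int) -> float:
    """All-or-nothing metric: credit only if every token is correct."""
    return per_token_accuracy ** answer_length

def partial_credit_score(per_token_accuracy: float) -> float:
    """Graded metric: expected fraction of correct tokens."""
    return per_token_accuracy

if __name__ == "__main__":
    # Hypothetical smooth scaling curve for per-token accuracy.
    for p in [0.5, 0.7, 0.9, 0.95, 0.99]:
        print(f"per-token {p:.2f} -> exact match {exact_match_score(p, 10):.3f}, "
              f"partial credit {partial_credit_score(p):.2f}")
```

Under the exact-match metric the score leaps from about 0.03 at p = 0.7 to about 0.35 at p = 0.9, which reads like a sudden breakthrough, even though the partial-credit view shows nothing but steady improvement.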

It’s a nuanced situation. Yes, larger models can summarize text and translate languages. Yes, larger models will generally perform better and can do more than smaller ones, but their sudden breakthrough in abilities – an unexpected emergence of capabilities – is an illusion: the smaller models are potentially capable of the same sort of thing but the benchmarks are not in their favor. The tests favor larger models, leading people in the industry to assume the larger models enjoy a leap in capabilities once they get to a certain size.

In reality, the change in abilities is more gradual as you scale up or down. The upshot for you and me is that applications may not need a huge, super-powerful language model; a smaller one that is cheaper and faster to customize, test, and run may do the trick.

[…]

In short, the supposed emergent abilities of LLMs arise from the way the data is being analyzed and not from unforeseen changes to the model as it scales. The researchers emphasize they’re not precluding the possibility of emergent behavior in LLMs; they’re simply stating that previous claims of emergent behavior look like ill-considered metrics.

[…]

Source: LLM emergent behavior written off as ‘a mirage’ by study • The Register