AI and smart mouthguards: the new frontline in fight against brain injuries

There was a hidden spectator at the NFL match between the Baltimore Ravens and Tennessee Titans in London on Sunday: artificial intelligence. As crazy as it may sound, computers have now been taught to automatically identify on-field head impacts in the NFL, using multiple video angles and machine learning. So a process that used to take 12 hours per game is now done in minutes. The result? After every weekend, teams are sent a breakdown of which players got hit, and how often.
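
The NFL hasn't published its pipeline, but the core idea – fusing detections from several camera angles into per-player impact counts – can be sketched in a few lines. The classifier output format, camera IDs and time window below are hypothetical stand-ins, not the league's actual system:

```python
from collections import defaultdict

def merge_detections(detections, time_window=0.5):
    """Fuse per-camera impact detections into per-player impact counts.

    `detections` is a list of (player_id, timestamp_sec, camera_id)
    tuples from a hypothetical per-camera classifier. Detections of the
    same player within `time_window` seconds are treated as one impact
    seen from multiple angles.
    """
    by_player = defaultdict(list)
    for player, t, _cam in detections:
        by_player[player].append(t)

    impact_counts = {}
    for player, times in by_player.items():
        times.sort()
        count, last = 0, None
        for t in times:
            if last is None or t - last > time_window:
                count += 1            # a new impact event
            last = t
        impact_counts[player] = count
    return impact_counts

# Example: three cameras see the same hit on player 52 at ~t=104s
sample = [(52, 104.0, "EZ1"), (52, 104.2, "SL3"), (52, 104.3, "SKY"),
          (17, 310.5, "EZ1")]
print(merge_detections(sample))       # {52: 1, 17: 1}
```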

This tech wizardry, naturally, has a deeper purpose. Over breakfast the NFL’s chief medical officer, Allen Sills, explained how it was helping to reduce head impacts and drive equipment innovation.

Players who experience high numbers of impacts can, for instance, be taught better techniques. Meanwhile, nine NFL quarterbacks and 17 offensive linemen are wearing position-specific helmets, which have significantly more padding in the areas where those players take the most hits.

What may be next? One area of interest is getting accurate sensors into helmets, so the force of each tackle can also be estimated. Another is using biomarkers in saliva and blood to better understand when to bring injured players back to action.

If that’s not impressive enough, this weekend rugby union became the first sport to adopt smart mouthguard technology, which flags big “hits” in real time. From January, whenever an elite player experiences an impact in a tackle or ruck that exceeds a certain threshold, they will automatically be taken off for a head injury assessment by a doctor.

No wonder Dr Eanna Falvey, World Rugby’s chief medical officer, calls it a “gamechanger” in potentially identifying many of the 18% of concussions that now come to light only after a match.

[…]

As things stand, World Rugby is combining the G-force and rotational acceleration of a hit to determine when to automatically take a player off for an HIA. Over the next couple of years, it wants to improve its ability to identify the impacts with clinical meaning – which will also mean looking at other factors, such as the duration and direction of the impact.
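
The trigger logic reduces to a simple threshold check on the two quantities the mouthguard reports. The cut-off numbers below are illustrative placeholders, not World Rugby's official criteria, and a real system would likely weigh the two measures together rather than independently:

```python
from dataclasses import dataclass

# Illustrative thresholds only -- not World Rugby's published cut-offs.
LINEAR_G_THRESHOLD = 70.0        # peak linear acceleration, in g
ROTATIONAL_THRESHOLD = 4000.0    # peak rotational acceleration, rad/s^2

@dataclass
class MouthguardReading:
    player_id: int
    linear_g: float              # peak linear acceleration of the head
    rotational: float            # peak rotational acceleration

def triggers_hia(reading: MouthguardReading) -> bool:
    """Return True if the impact should pull the player off for a
    head injury assessment (HIA)."""
    return (reading.linear_g >= LINEAR_G_THRESHOLD or
            reading.rotational >= ROTATIONAL_THRESHOLD)

hit = MouthguardReading(player_id=7, linear_g=82.0, rotational=3100.0)
if triggers_hia(hit):
    print(f"Player {hit.player_id}: flag for HIA")
```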

[…]

Then there is the ability to use the smart mouthguard to track load over time. “It’s one thing to assist in identifying concussions,” he says. “It’s another entirely to say it’s going to allow coaches and players to track exactly how many significant head impacts they have in a career – especially with all the focus on long-term health risks. If they can manage that load, particularly in training, that has performance and welfare benefits.”
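
A minimal sketch of that load-tracking idea: log every impact above a "significant" threshold per player and accumulate the count across sessions and seasons. The 40 g cut-off and the structure are assumptions for illustration only:

```python
from collections import Counter

class HeadImpactLog:
    """Toy tracker for cumulative head-impact load, the kind of
    long-term record mouthguard data could feed. The threshold is
    an illustrative assumption, not World Rugby's."""
    def __init__(self):
        self.impacts = Counter()   # player_id -> lifetime impact count

    def record(self, player_id: int, peak_g: float,
               significant_g: float = 40.0):
        if peak_g >= significant_g:
            self.impacts[player_id] += 1

    def load(self, player_id: int) -> int:
        return self.impacts[player_id]

log = HeadImpactLog()
for g in (25.0, 48.0, 61.0):       # one training session's impacts
    log.record(player_id=7, peak_g=g)
print(log.load(7))                  # 2 significant impacts on record
```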

[…]

Source: AI and smart mouthguards: the new frontline in fight against brain injuries | Sport | The Guardian

Spacecraft re-entry filling the atmosphere with metal vapor – and there will be more of it coming in

A group of scientists studying the effects of rocket and satellite reentry vaporization in Earth’s atmosphere have found some startling evidence that could point to disastrous environmental effects on the horizon.

The study, published in the Proceedings of the National Academy of Sciences, found that around 10 percent of large (>120 nm) sulfuric acid particles in the stratosphere contain aluminum and other elements consistent with the makeup of alloys used in spacecraft construction, including lithium, copper and lead. The other 90 percent comes from “meteoric smoke” – the particles left over when meteors vaporize during atmospheric entry – and that naturally occurring share is expected to shrink sharply as reentries increase.

“The space industry has entered an era of rapid growth,” the boffins said in their paper, “with tens of thousands of small satellites planned for low Earth orbit.

“It is likely that in the next few decades, the percentage of stratospheric sulfuric acid particles that contain aluminum and other metals from satellite reentry will be comparable to the roughly 50 percent that now contain meteoric metals,” the team concluded.

Atmospheric circulation at stratospheric altitudes (the stratosphere begins somewhere between four and 12 miles above ground level, depending on latitude, and extends to about 31 miles above Earth) means such particles are unlikely to have an effect on the surface environment or human health, the researchers opined.

Stratospheric changes might be even scarier, though

Earth’s stratosphere has classically been considered pristine, said Dan Cziczo, one of the study’s authors and head of Purdue University’s department of Earth, Atmospheric, and Planetary Sciences. “If something is changing in the stratosphere – this stable region of the atmosphere – that deserves a closer look.”

One of the major features of the stratosphere is the ozone layer, which protects Earth and its varied inhabitants from harmful UV radiation. Human activity has damaged it before – recovery began only after global action was taken – and an increase in aerosolized spacecraft particles could have several consequences for our planet.

One possibility is effects on the nucleation of ice and nitric acid trihydrate, which form in stratospheric clouds over Earth’s polar regions where currents in the mesosphere (the layer above the stratosphere) tend to deposit both meteoric and spacecraft aerosols.

Ice formed in the stratosphere doesn’t necessarily reach the ground, and is more likely to have effects on polar stratospheric clouds, lead author and National Oceanic and Atmospheric Administration scientist Daniel Murphy told The Register.

“Polar stratospheric clouds are involved in the chemistry of the ozone hole,” Murphy said. However, “it is too early to know if there is any impact on ozone chemistry,” he added.

Along with changes in atmospheric ice formation and the ozone layer, the team said that more aerosols from vaporized spacecraft could change the stratospheric aerosol layer – something scientists have proposed deliberately seeding to reflect sunlight and blunt the effects of global warming.

The amount of material injected by spacecraft reentry is much smaller than the amounts scientists have considered for intentional injection, Murphy told us. However, “intentional injection of exotic materials into the stratosphere could raise many of the same questions [as the paper] on an even bigger scale,” he noted.

[…]

Source: Spacecraft re-entry filling the atmosphere with metal vapor • The Register

Uncle Sam paid to develop a cancer drug and now one guy will get to charge whatever he wants for it

The argument for pharma patents: making new medicines is expensive, and medicines are how we save ourselves from cancer and other diseases. Therefore, we will award government-backed monopolies – patents – to pharma companies so they will have an incentive to invest their shareholders’ capital in research.

There’s plenty wrong with this argument. For one thing, pharma companies use their monopoly winnings to sell drugs, not invent drugs. For every dollar pharma spends on research, it spends three dollars on marketing:

https://www.bu.edu/sph/files/2015/05/Pharmaceutical-Marketing-and-Research-Spending-APHA-21-Oct-01.pdf

And that “R&D” isn’t what you’re thinking of, either. Most R&D spending goes to “evergreening” – coming up with minor variations on existing drugs in a bid to extend those patents for years or decades:

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3680578/

Evergreening got a lot of attention recently when John Green rained down righteous fire upon Johnson & Johnson for their sneaky tricks to prevent poor people from accessing affordable TB meds, prompting this excellent explainer from the Arm and a Leg podcast:

https://armandalegshow.com/episode/john-green-part-1/

Another thing those monopoly profits are useful for: “pay for delay,” where pharma companies bribe generic manufacturers not to make cheap versions of drugs whose patents have expired. Sure, it’s illegal, but that doesn’t stop ’em:

https://www.ftc.gov/news-events/topics/competition-enforcement/pay-delay

But it’s their money, right? If they want to spend it on bribes or evergreening or marketing, at least some of that money is going into drugs that’ll keep you and the people you love from enduring unimaginable pain or dying slowly and hard. Surely that warrants a patent.

Let’s say it does. But what about when a pharma company gets a patent on a life-saving drug that the public paid to develop, test and refine? Publicly funded work is presumptively in the public domain, from NASA R&D to the photos that park rangers shoot of our national parks. The public pays to produce this work, so it should belong to the public, right?

That was the deal – until Congress passed the Bayh-Dole Act in 1980. Under Bayh-Dole, government-funded inventions are given away – to for-profit corporations, who get to charge us whatever they want to access the things we paid to make. The basis for this is a racist hoax called “The Tragedy of the Commons,” written by the eugenicist white supremacist Garrett Hardin and published by Science in 1968:

https://memex.craphound.com/2019/10/01/the-tragedy-of-the-commons-how-ecofascism-was-smuggled-into-mainstream-thought/

Hardin invented an imaginary history in which “commons” – things owned and shared by a community – are inevitably overrun by selfish assholes, a fact that prompts nice people to also overrun these commons, so as to get some value out of them before they are gobbled up by people who read Garrett Hardin essays.

Hardin asserted this as a historical fact, but he cited no instances in which it had actually happened. Yet when the Nobel-winning Elinor Ostrom actually went and looked at how commons are managed, she found that they are robust and stable over long time periods, and are a supremely efficient way of managing resources:

https://pluralistic.net/2023/05/04/analytical-democratic-theory/#epistocratic-delusions

The reason Hardin invented an imaginary history of tragic commons was to justify enclosure: moving things that the public owned and used freely into private ownership. Or, to put it more bluntly, Hardin invented a pseudoscientific justification for giving away parks, roads and schools to rich people and letting them charge us to use them.

To arrive at this fantasy, Hardin deployed one of the most important analytical tools of modern economics: introspection. As Ely Devons put it: “If economists wished to study the horse, they wouldn’t go and look at horses. They’d sit in their studies and say to themselves, ‘What would I do if I were a horse?’”

https://pluralistic.net/2022/10/27/economism/#what-would-i-do-if-i-were-a-horse

Hardin’s hoax swept from the fringes to the center and became received wisdom – so much so that by 1980, Senators Birch Bayh and Bob Dole were able to pass a law that gave away publicly funded medicine to private firms, because otherwise these inventions would be “overgrazed” by greedy people, denying the public access to lifesaving drugs.

On September 21, the NIH quietly published an announcement of one of these pharmaceutical transfers, buried in a list of 31 patent assignments in the Federal Register:

https://public-inspection.federalregister.gov/2023-20487.pdf

The transfer in question is a patent for using T-cell receptors (TCRs) to treat solid tumors from HPV, one of the only patents for treating solid tumors with TCRs. The beneficiary of this transfer is Scarlet TCR, a Delaware company with no website or SEC filings and ownership shrouded in mystery:

https://www.bizapedia.com/de/scarlet-tcr-inc.html

One person who pays attention to this sort of thing is James Love, co-founder of Knowledge Ecology International, a nonprofit that has worked for decades for access to medicines. Love sleuthed out at least one person behind Scarlet TCR: Christian Hinrichs, a researcher at Rutgers who used to work at the NIH’s National Cancer Institute:

https://www.nih.gov/research-training/lasker-clinical-research-scholars/tenured-former-scholars

Love presumes Hinrichs is the owner of Scarlet TCR, but neither the NIH nor Scarlet TCR nor Hinrichs will confirm it. Hinrichs was one of the publicly funded researchers who worked on the new TCR therapy, for which he received a salary.

This new drug was paid for out of the public purse. The basic R&D – salaries for Hinrichs and his collaborators, as well as funding for their facilities – came out of NIH grants. So did the funding for the initial Phase I trial, and the ongoing large Phase II trial.

As David Dayen writes in The American Prospect, the proposed patent transfer will make Hinrichs a very wealthy man (Love calls it “generational wealth”):

https://prospect.org/health/2023-10-18-nih-how-to-become-billionaire-program/

This wealth will come by charging us – the public – to access a drug that we paid to produce. The public took all the risks to develop this drug, and Hinrichs stands to become a billionaire by reaping the rewards – rewards that will come by extracting fortunes from terrified people who don’t want to die from tumors that are eating them alive.

The transfer of this patent is indefensible. The government isn’t even waiting until the Phase II trials are complete to hand over our commonly owned science.

But there’s still time. The NIH is about to get a new director, Monica Bertagnolli – Hinrichs’s former boss – who will need to go before the Senate Health, Education, Labor and Pensions Committee for confirmation. Love is hoping that the confirmation hearing will present an opportunity to question Bertagnolli about the transfer – specifically, why the drug isn’t being nonexclusively licensed to lots of drug companies who will have to compete to sell the cheapest possible version.

Source: Pluralistic: Uncle Sam paid to develop a cancer drug and now one guy will get to charge whatever he wants for it (19 Oct 2023) – Pluralistic: Daily links from Cory Doctorow

Universal Music sues AI start-up Anthropic for scraping song lyrics – will they come after you for having read the lyrics or memorised the song next?

Universal Music has filed a copyright infringement lawsuit against artificial intelligence start-up Anthropic, as the world’s largest music group battles against chatbots that churn out its artists’ lyrics.

Universal and two other music companies allege that Anthropic scrapes their songs without permission and uses them to generate “identical or nearly identical copies of those lyrics” via Claude, its rival to ChatGPT.

When Claude is asked for lyrics to the song “I Will Survive” by Gloria Gaynor, for example, it responds with “a nearly word-for-word copy of those lyrics,” Universal, Concord, and ABKCO said in a filing with a US court in Nashville, Tennessee.

“This copyrighted material is not free for the taking simply because it can be found on the Internet,” the music companies said, while claiming that Anthropic had “never even attempted” to license their copyrighted work.

[…]

Universal earlier this year asked Spotify and other streaming services to cut off access to its music catalogue for developers using it to train AI technology.

Source: Universal Music sues AI start-up Anthropic for scraping song lyrics | Ars Technica

So don’t think about memorising – or even listening to – copyrighted material from them, because apparently they will come after you with the mighty and crazy arm of the law!

Faster-Than-Light ‘Quasiparticles’ Touted as Futuristic Light Source

[…]But these light sources [needed to experiment in the quantum realm] are not common. They’re expensive to build, require large amounts of land, and can be booked up by scientists months in advance. Now, a team of physicists posits that quasiparticles—groups of electrons that behave as if they were one particle—can be used as light sources in smaller lab and industry settings, making it easier for scientists to make discoveries wherever they are. The team’s research is published today in Nature Photonics.

“No individual particles are moving faster than the speed of light, but features in the collection of particles can, and do,” said John Palastro, a physicist at the Laboratory for Laser Energetics at the University of Rochester and co-author of the new study, in a video call with Gizmodo. “This does not violate any rules or laws of physics.”

[…]

In their paper, the team explores the possibility of making plasma accelerator-based light sources as bright as larger free-electron lasers by making their light more coherent via quasiparticles. The team ran simulations of quasiparticles’ properties in a plasma using supercomputers made available by the European High Performance Computing Joint Undertaking (EuroHPC JU), according to a University of Rochester release.

[…]

In a linear accelerator, “every electron is doing the same thing as the collective thing,” said Bernardo Malaca, a physicist at the Instituto Superior Técnico in Portugal and the study’s lead author, in a video call with Gizmodo. “There is no electron that’s undulating in our case, but we’re still making an undulator-like spectrum.”

The researchers liken quasiparticles to the Mexican wave, a popular collective behavior in which sports fans stand up and sit down in sequence. A stadium full of people can give the illusion of a wave rippling around the venue, though no one person is moving laterally.

“One is clearly able to see that the wave could in principle travel faster than any human could, provided the audience collaborates. Quasiparticles are very similar, but the dynamics can be more extreme,” said co-author Jorge Vieira, also a physicist at the Instituto Superior Técnico, in an email to Gizmodo. “For example, single particles cannot travel faster than the speed of light, but quasiparticles can travel at any velocity, including superluminal.”

“Because quasiparticles are a result of a collective behavior, there are no limits for its acceleration,” Vieira added. “In principle, this acceleration could be as strong as in the vicinity of a black-hole, for example.”
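
The stadium-wave analogy can be made concrete in a few lines of code: give every fan a fixed seat and a pre-arranged stand-up time, and the crest sweeps the stands at whatever speed the schedule encodes, even though no fan moves sideways at all. The numbers here are arbitrary illustrations, not the paper's simulations:

```python
import numpy as np

# Stadium-wave toy model: fan i sits at fixed position x[i] and stands
# up at time t[i] = x[i] / v_wave. No fan moves laterally at all, yet
# the "standing" crest travels at v_wave -- which we can set to any
# value, including faster than light. The same logic lets a collective
# feature of many electrons (a quasiparticle) outrun any one electron.
c = 3.0e8                          # speed of light, m/s
v_wave = 2.0 * c                   # crest velocity: superluminal by choice
x = np.linspace(0.0, 300.0, 11)    # fixed fan positions around the bowl, m

stand_times = x / v_wave           # pre-arranged schedule per fan

# Measure the crest speed implied by the schedule itself:
crest_speed = (x[-1] - x[0]) / (stand_times[-1] - stand_times[0])
print(f"crest speed = {crest_speed:.3e} m/s ({crest_speed / c:.1f} c)")
# Every individual fan's lateral velocity is exactly zero.
```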

[…]

The difference between what is perceptually happening and what is actually happening when something travels faster than light is an “unneeded distinction,” Malaca said. “There are actual things that travel faster than light, which are not individual particles, but are waves or current profiles. Those travel faster than light and can produce real faster-than-light-ish effects. So you measure things that you only associate with superluminal particles.”

The group found that the electrons’ collective quality doesn’t have to be as pristine as the beams produced by large facilities, and could practically be implemented in more “table-top” settings, Palastro said. In other words, scientists could run experiments using very bright light sources on-site, instead of having to wait for an opening at an in-demand linear accelerator.

Source: Faster-Than-Light ‘Quasiparticles’ Touted as Futuristic Light Source

Code.org Presses Washington To Make Computer Science a High School Graduation Requirement – this should be everywhere globally

In July, Seattle-based and tech-backed nonprofit Code.org announced its 10th policy recommendation for all states: “to require all students to take computer science (CS) to earn a high school diploma.” In August, Washington State Senator Lisa Wellman phoned in her plans to introduce a bill to make computer science a Washington high school graduation requirement to the state’s Board of Education, indicating that the ChatGPT-sparked AI craze and Code.org had helped convince her of the need. Wellman, a former teacher who worked as a programmer/system analyst in the ’80s before becoming an Apple VP (Publishing) in the ’90s, also indicated that exposure to CS in fifth grade could be sufficient to satisfy a HS CS requirement. In 2019, Wellman sponsored the Microsoft-supported SB 5088, which required all Washington state public high schools to offer a CS class. Wellman also sponsored SB 5299 in 2021, which allows high school students to count a computer science elective towards graduation requirements in place of a third-year math or science course (one that may be required for college admission).

And in October, Code.org CEO Hadi Partovi appeared before the Washington State Board of Education, driving home points Senator Wellman made in August with a deck containing slides calling for Washington to “require that all students take computer science to earn a high school diploma” and to “require computer science within all teacher certifications.” Like Wellman, Partovi suggested the CS high school requirement might be satisfied by middle school work (he alternatively suggested one year of foreign language could be dropped to accommodate a HS CS course). Partovi noted that Washington is home to some of the biggest promoters of K-12 CS in Microsoft Philanthropies’ TEALS (TEALS founder Kevin Wang is a member of the Washington State Board of Education) and Code.org, as well as some of the biggest funders of K-12 CS in Amazon and Microsoft – both of which are $3,000,000+ Platinum Supporters of Code.org and have top execs on Code.org’s Board of Directors.

Source: Code.org Presses Washington To Make Computer Science a High School Graduation Requirement – Slashdot

Most kids have no clue how a computer works, let alone how to program one. It’s not difficult to learn, but it is an essential skill in today’s society.

IBM chip speeds up AI by combining processing and memory in the core

IBM’s massive NorthPole processor chip eliminates the need to frequently access external memory, and so performs tasks such as image recognition faster than existing architectures do – while consuming vastly less power.

“Its energy efficiency is just mind-blowing,” says Damien Querlioz, a nanoelectronics researcher at the University of Paris-Saclay in Palaiseau. The work, published in Science, shows that computing and memory can be integrated on a large scale, he says. “I feel the paper will shake the common thinking in computer architecture.”

NorthPole runs neural networks: multi-layered arrays of simple computational units programmed to recognize patterns in data. A bottom layer takes in data, such as the pixels in an image; each successive layer detects patterns of increasing complexity and passes information on to the next layer. The top layer produces an output that, for example, can express how likely an image is to contain a cat, a car or other objects.
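
That layered pattern-detection is easy to sketch in a few lines of NumPy. This is the generic feedforward structure the article describes, not NorthPole's actual programming model; the layer sizes, weights and class labels are placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy three-layer network: 784 pixel inputs -> two hidden layers that
# detect increasingly complex patterns -> 3 class scores. Weights are
# random here; a trained network would have learned them from data.
W1, b1 = rng.normal(0, 0.05, (128, 784)), np.zeros(128)
W2, b2 = rng.normal(0, 0.05, (64, 128)),  np.zeros(64)
W3, b3 = rng.normal(0, 0.05, (3, 64)),    np.zeros(3)

pixels = rng.random(784)            # stand-in for an input image
h1 = relu(W1 @ pixels + b1)         # low-level features
h2 = relu(W2 @ h1 + b2)             # higher-level patterns
probs = softmax(W3 @ h2 + b3)       # e.g. P(cat), P(car), P(other)
print(dict(zip(["cat", "car", "other"], probs.round(3))))
```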

[…]

NorthPole is made of 256 computing units, or cores, each of which contains its own memory. “You’re mitigating the von Neumann bottleneck within a core,” says Dharmendra Modha, IBM’s chief scientist for brain-inspired computing at the company’s Almaden research centre in San Jose.

The cores are wired together in a network inspired by the white-matter connections between parts of the human cerebral cortex, Modha says. This and other design principles — most of which existed before but had never been combined in one chip — enable NorthPole to beat existing AI machines by a substantial margin in standard benchmark tests of image recognition. It also uses one-fifth of the energy of state-of-the-art AI chips, despite not using the most recent and most miniaturized manufacturing processes. If the NorthPole design were implemented with the most up-to-date manufacturing process, its efficiency would be 25 times better than that of current designs, the authors estimate.
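
A toy cost model shows why keeping weights inside each core matters: off-chip DRAM accesses are commonly estimated to cost orders of magnitude more energy than on-chip reads. The per-access energies and weight count below are illustrative assumptions, not IBM's measurements:

```python
# Toy data-movement model of why in-core memory helps. The energy
# numbers are illustrative placeholders: off-chip DRAM accesses are
# commonly estimated to cost ~100x more than on-chip SRAM accesses.
E_OFFCHIP_PJ = 640.0    # assumed energy per off-chip weight fetch, pJ
E_ONCHIP_PJ = 5.0       # assumed energy per in-core weight fetch, pJ

n_weights = 25_000_000  # weights read once per inference (illustrative)

von_neumann = n_weights * E_OFFCHIP_PJ   # weights streamed from DRAM
in_core = n_weights * E_ONCHIP_PJ        # weights resident in each core

print(f"off-chip fetches: {von_neumann / 1e6:.0f} uJ")
print(f"in-core fetches:  {in_core / 1e6:.0f} uJ")
print(f"ratio: {von_neumann / in_core:.0f}x")
```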

[…]

NorthPole brings memory units as physically close as possible to the computing elements in the core. Elsewhere, researchers have been developing more-radical innovations using new materials and manufacturing processes. These enable the memory units themselves to perform calculations, which in principle could boost both speed and efficiency even further.

Another chip, described last month, does in-memory calculations using memristors, circuit elements able to switch between being a resistor and a conductor. “Both approaches, IBM’s and ours, hold promise in mitigating latency and reducing the energy costs associated with data transfers,”
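
The usual form of memristor in-memory computing is a crossbar: program each device's conductance to a matrix entry, apply voltages as the input vector, and the currents summing on each row wire physically compute a matrix-vector product via Ohm's and Kirchhoff's laws. A minimal simulation of that idea, with arbitrary example values:

```python
import numpy as np

# In a memristor crossbar, input voltage v[j] on column j drives a
# current G[i][j] * v[j] through the device at row i (Ohm's law), and
# the currents summing on each row wire (Kirchhoff's law) yield
# i = G @ v -- a matrix-vector multiply performed where the data lives.
G = np.array([[1.0, 0.2, 0.0],     # programmed conductances, siemens
              [0.5, 0.8, 0.3]])
v = np.array([0.1, 0.2, 0.05])     # applied input voltages, volts

row_currents = G @ v               # what the row wires physically sum
print(row_currents)                # amps read out per row
```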

[…]

Another approach, developed by several teams – including one at a separate IBM lab in Zurich, Switzerland – stores information by changing a circuit element’s crystal structure. It remains to be seen whether these newer approaches can be scaled up economically.

Source: ‘Mind-blowing’ IBM chip speeds up AI