A new study is redefining how we understand affective polarization. The study proposes that disappointment, rather than hatred, may be the dominant emotion driving the growing divide between ideological groups.
The findings are published in the journal Cognition and Emotion. The team was led by Ph.D. student Mabelle Kretchner from the Department of Psychology at The Hebrew University of Jerusalem, under the supervision of Prof. Eran Halperin and in collaboration with Prof. Sivan Hirsch-Hoefler from Reichman University and Dr. Julia Elad-Strenger from Bar Ilan University.
Affective polarization, characterized by deepening negative feelings between members of opposing ideological groups, is a major threat to democratic stability worldwide. While numerous studies have examined the causes of and potential solutions to this phenomenon, the emotional underpinnings of affective polarization have remained poorly understood.
[…]
“Disappointment is an emotion that encapsulates both positive and negative experiences,” explains Kretchner.
“While hatred is destructive and focuses on viewing the outgroup as fundamentally evil, disappointment reflects a more complex dynamic. It includes unmet expectations and a sense of loss, but also retains a recognition of shared goals and the potential for positive change. This dual nature makes it a more accurate representation of the complexity embedded in ideological intergroup relations.”
Across five studies conducted in the US and Israel, disappointment was the only emotion consistently linked to affective polarization, while other negative emotions did not show the same consistent association. Notably, hatred did not predict affective polarization in any of the studies, even during politically charged periods such as the Capitol riots, the US withdrawal from Afghanistan, and the Supreme Court hearings on Roe v. Wade.
[…]
This finding suggests that interventions aimed at reducing affective polarization might be more effective if they target the specific emotions underlying it, such as disappointment.
As societies across the globe grapple with rising political tensions, the insights from this study offer a fresh perspective on how to heal divisions.
[…]
More information: Eran Halperin et al, The affective gap: a call for a comprehensive examination of the discrete emotions underlying affective polarization, Cognition and Emotion (2024). DOI: 10.1080/02699931.2024.2348028
Per- and polyfluoroalkyl chemicals, known commonly as PFAS, could take over 40 years to flush out of contaminated groundwater in North Carolina’s Cumberland and Bladen counties, according to a new study from North Carolina State University. The study used a novel combination of data on PFAS, groundwater age-dating tracers, and groundwater flux to forecast PFAS concentrations in groundwater discharging to tributaries of the Cape Fear River in North Carolina.
The researchers sampled groundwater in two different watersheds adjacent to the Fayetteville Works fluorochemical plant in Bladen County.
“There’s a huge area of PFAS contaminated groundwater — including residential and agricultural land — which impacts the population in two ways,” says David Genereux, professor of marine, earth and atmospheric sciences at NC State and leader of the study.
“First, there are over 7,000 private wells whose users are directly affected by the contamination. Second, groundwater carrying PFAS discharges into tributaries of the Cape Fear River, which affects downstream users of river water in and near Wilmington.”
The researchers tested the samples they took to determine PFAS types and levels, then used groundwater age-dating tracers, coupled with atmospheric contamination data from the N.C. Department of Environmental Quality and the rate of groundwater flow, to create a model that estimated both past and future PFAS concentrations in the groundwater discharging to tributary streams.
They detected PFAS in groundwater up to 43 years old, and concentrations of the two most commonly found PFAS — hexafluoropropylene oxide-dimer acid (HFPO−DA) and perfluoro-2-methoxypropanoic acid (PMPA) — averaged 229 and 498 nanograms per liter (ng/L), respectively. For comparison, the maximum contaminant level (MCL) issued by the U.S. Environmental Protection Agency for HFPO-DA in public drinking water is 10 ng/L. MCLs are enforceable drinking water standards.
“These results suggest it could take decades for natural groundwater flow to flush out groundwater PFAS still present from the ‘high emission years,’ roughly the period between 1980 and 2019,” Genereux says. “And this could be an underestimate; the time scale could be longer if PFAS is diffusing into and out of low-permeability zones (clay layers and lenses) below the water table.”
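The decades-long timescale follows from the numbers even in a toy model. The sketch below is not the authors' model (which uses age-dating tracers and groundwater flux); it is a simple first-order flushing illustration, and the mean residence time `tau` is an assumed value chosen only for illustration.

```python
import math

# Toy first-order "flushing" model -- NOT the study's model, just an
# illustration of why decades-long timescales fall out of the numbers.
# Assumes well-mixed groundwater whose PFAS concentration decays
# exponentially once emissions stop: C(t) = C0 * exp(-t / tau).

C0 = 229.0   # ng/L, average HFPO-DA concentration reported in the study
MCL = 10.0   # ng/L, EPA maximum contaminant level for HFPO-DA
tau = 12.0   # years, hypothetical mean residence time (assumed value)

# Time for the concentration to fall below the MCL:
t_flush = tau * math.log(C0 / MCL)
print(f"~{t_flush:.0f} years to drop below the MCL")
```

With these assumed inputs the toy model lands in the same multi-decade range the researchers report, and it understates the problem for the same reason Genereux cites: slow diffusion out of low-permeability zones makes the decay slower than a single exponential.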
The researchers point out that although air emissions of PFAS are substantially lower now than they were prior to 2019, they are not zero, so some atmospheric deposition of PFAS seems likely to continue to feed into the groundwater.
“Even a best-case scenario — without further atmospheric deposition — would mean that PFAS emitted in past decades will slowly flush from groundwater to surface water for about 40 more years,” Genereux says. “We expect groundwater PFAS contamination to be a multi-decade problem, and our work puts some specific numbers behind that. We plan to build on this work by modeling future PFAS at individual drinking water wells and working with toxicologists to relate past PFAS levels at wells to observable health outcomes.”
Craig R. Jensen, David P. Genereux, D. Kip Solomon, Detlef R. U. Knappe, Troy E. Gilmore. Forecasting and Hindcasting PFAS Concentrations in Groundwater Discharging to Streams near a PFAS Production Facility. Environmental Science & Technology, 2024; 58 (40): 17926 DOI: 10.1021/acs.est.4c06697
The personal care products we use on a daily basis significantly affect indoor air quality, according to new research by a team at EPFL. When used indoors, these products release a cocktail of more than 200 volatile organic compounds (VOCs) into the air, and when those VOCs come into contact with ozone, the chemical reactions that follow can produce new compounds and particles that may penetrate deep into our lungs. Scientists don’t yet know how inhaling these particles on a daily basis affects our respiratory health.
The EPFL team’s findings have been published in Environmental Science & Technology Letters.
[…]
In one test, the researchers applied the products under typical conditions while the air quality was carefully monitored. In another, they did the same thing but also injected ozone, a reactive gas that reaches elevated outdoor levels at European latitudes during the summer months.
[…]
However, when ozone was introduced into the chamber, not only new VOCs but also new particles were generated, particularly from perfume and sprays, exceeding concentrations found in heavily polluted urban areas such as downtown Zurich.
“Some molecules ‘nucleate’—in other words, they form new particles that can coagulate into larger ultrafine particles that can effectively deposit into our lungs,” explains Licina. “In my opinion, we still don’t fully understand the health effects of these pollutants, but they may be more harmful than we think, especially because they are applied close to our breathing zone. This is an area where new toxicological studies are needed.”
Preventive measures
To limit the effect of personal care products on indoor air quality, we could consider several alternatives for how buildings are engineered: introducing more ventilation—especially during the products’ use—incorporating air-cleaning devices (e.g., activated carbon-based filters combined with media filters), and limiting the concentration of indoor ozone.
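The case for ventilation during product use can be seen in the standard single-zone "well-mixed box" model: at steady state, the indoor concentration scales inversely with the air-change rate. This is a generic textbook sketch, not the EPFL team's analysis, and all the numbers below are illustrative assumptions.

```python
# Single-zone mass balance: dC/dt = E/V - ach * C, so the steady-state
# concentration is C_ss = E / (ach * V). Doubling the air-change rate
# halves the steady-state concentration. Illustrative values only.

V = 30.0   # m^3, room volume (assumed)
E = 100.0  # ug/h, VOC emission rate while a product is in use (assumed)

def steady_state(ach):
    """Steady-state concentration (ug/m^3) at `ach` air changes per hour."""
    return E / (ach * V)

for ach in (0.5, 1.0, 2.0, 4.0):
    print(f"{ach:>4} ACH -> {steady_state(ach):6.2f} ug/m^3")
```

The same relationship explains why air-cleaning devices help: a filter adds an extra removal term that acts like additional air changes.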
Another preventive measure is also recommended, according to Licina: “I know this is difficult to hear, but we’re going to have to reduce our reliance on these products, or if possible, replace them with more natural alternatives that contain fragrant compounds with low chemical reactivity. Another helpful measure would be to raise awareness of these issues among medical professionals and staff working with vulnerable groups, such as children and the elderly.”
More information: Tianren Wu et al, Indoor Emission, Oxidation, and New Particle Formation of Personal Care Product Related Volatile Organic Compounds, Environmental Science & Technology Letters (2024). DOI: 10.1021/acs.estlett.4c00353
Microsoft’s Outlook app is crashing for European users due to memory problems, Redmond has warned, and evidence suggests the problems are spreading to the US.
“We’re investigating an issue in which users in Europe may be experiencing crashing, not receiving emails or observing high memory usage when using the Outlook client,” Redmond warned.
“We’re analyzing data from customers experiencing crashes and high memory usage when using the New Outlook desktop app. We’re reviewing service telemetry and reproducing the issue internally to develop a mitigation plan.”
So far, there is no word on Microsoft’s plan, but social media reports suggest the US East Coast at least is suffering similar problems. Downdetector indicates the issue appears to be spreading.
“It’s been spreading across the country like the common cold now, and I can’t seem to figure out what is causing it,” reported one user. “There have been no changes to the environment and no updates to the Windows desktops that are having this issue.”
Microsoft’s engineers are working on the issue and trying to find out what the problem is. It’s not a good look for the software giant’s flagship email system.
The US government’s General Services Administration’s (GSA) facial matching login service is now generally available to the public and other federal agencies, despite its own recent report admitting the tech is far from perfect.
The GSA announced general availability of remote identity verification (RiDV) technology through login.gov, and the service’s availability to other federal government agencies yesterday. According to the agency, the technology behind the offering is “a new independently certified” solution that complies with the National Institute of Standards and Technology’s (NIST) 800-63 identity assurance level 2 (IAL2) standard.
IAL2 identity verification involves either remote or in-person verification of a person’s identity using biometric data along with some physical element, such as an ID photograph or access to a cellphone number.
“This new IAL2-compliant offering adds proven one-to-one facial matching technology that allows Login.gov to confirm that a live selfie taken by a user matches the photo on a photo ID, such as a driver’s license, provided by the user,” the GSA said.
The Administration noted that the system doesn’t use “one-to-many” face matching technology to compare users to others in its database, and doesn’t use the images for any purpose other than verifying a user’s identity.
[…]
In a report issued by the GSA’s Office of the Inspector General in early 2023, the Administration was called out for claiming it had implemented IAL2-level identity verification as early as 2018 while never actually meeting the standard’s requirements.
“GSA knowingly billed customer agencies over $10 million for services, including alleged IAL2 services that did not meet IAL2 standards,” the report claimed.
[…]
Fast forward to October of last year, and the GSA said it was embracing facial recognition tech on login.gov with plans to test it this year – a process it began in April. Since then, however, the GSA has published preprint findings of a study it conducted of five RiDV technologies, finding that they’re still largely unreliable.
The study anonymized the results of the five products, making it unclear which were included in the final pool or how any particular one performed. Generally, however, the report found that the best-performing product still failed 10 percent of the time, and the worst had a false negative rate of 50 percent, meaning its ability to properly match a selfie to a government ID was no better than chance.
Higher rejection rates for people with darker skin tones were also noted in one product, while another was more accurate for people of AAPI descent, but less accurate for everyone else – hardly the equitability the GSA said it wanted in an RiDV product last year.
[…]
It’s unclear what solution has been deployed for use on login.gov. The only firm we can confirm has been involved through the process is LexisNexis, which previously acknowledged to The Register that it has worked with the GSA on login.gov for some time.
That said, LexisNexis’ CEO for government risk solutions told us recently that he’s not convinced the GSA’s focus on adopting IAL2 RiDV solutions at the expense of other biometric verification methods is the best approach.
“Any time you rely on a single tool, especially in the modern era of generative AI and deep fakes … you are going to have this problem,” Haywood “Woody” Talcove told us during a phone interview last month. “I don’t think NIST has gone far enough with this workflow.”
Talcove told us that facial recognition is “pretty easy to game,” and said he wants a multi-layered approach – one the GSA appears to have declined to pursue, given how quickly it’s rolling out a solution.
“What this study shows is that there’s a level of risk being injected into government agencies completely relying on one tool,” Talcove said. “We’ve gotta go further.”
Along with asking the GSA for more details about its chosen RiDV solution, we also asked for some data about its performance. We didn’t get an answer to that question, either.
Walled Culture has been writing about Italy’s Piracy Shield system for a year now. It was clear from early on that its approach of blocking Internet addresses (IP addresses) to fight alleged copyright infringement – particularly the streaming of football matches – was flawed, and risked turning into another fiasco like France’s failed Hadopi law. The central issue with Piracy Shield is summed up in a recent post on the Disruptive Competition Blog:
The problem is that Italy’s Piracy Shield enables the blocking of content at the IP address and DNS level, which is particularly problematic in this time of shared IP addresses. It would be similar to arguing that if in a big shopping mall, in which dozens of shops share the same address, one shop owner is found to sell bootleg vinyl records with pirated music, the entire mall needs to be closed and all shops are forced to go out of business.
As that post points out, Italy’s IP blocking suffers from several underlying problems. One is overblocking, which has already happened, as Walled Culture noted back in March. Another issue is lack of transparency:
The Piracy Shield that has been implemented in Italy is fully automated, which prevents any transparency on the notified IP addresses and lacks checks and balances performed by third parties, who could verify whether the notified IP addresses are exclusively dedicated to piracy (and should be blocked) or not.
Piracy Shield isn’t working, and causes serious collateral damage, but instead of recognising this, its supporters have doubled down, and have just convinced the Italian parliament to pass amendments making it even worse, reported here by TorrentFreak:
VPN and DNS services anywhere on planet earth will be required to join Piracy Shield and start blocking pirate sites, most likely at their own expense, just like Italian ISPs are required to do already.
…
Moving forward, if pirate sites share an IP address with entirely innocent sites, and the innocent sites are outnumbered, ISPs, VPNs and DNS services will be legally required to block them all.
A new offence has been created aimed at service providers, including network access providers, who fail to promptly report illegal conduct by their users to the judicial authorities or police in Italy. The maximum punishment is not just a fine, but imprisonment for up to one year. Just why this is absurd is made clear by this LinkedIn comment from Diego Ciulli, Head of Government Affairs and Public Policy at Google Italy (translation by DeepL):
Under the label of ‘combating piracy’, the Senate yesterday approved a regulation obliging digital platforms to notify the judicial authorities of all copyright infringements – present, past and future – of which they become aware. Do you know how many there are in Google’s case? Currently, 9,756,931,770.
In short, the Senate is asking us to flood the judiciary with almost 10 billion URLs – and foresees jail time if we miss a single notification.
If the rule is not corrected, the risk is to do the opposite of the spirit of the law: flooding the judiciary, and taking resources away from the fight against piracy.
The new law will make running an Internet access service so risky that many will probably just give up, reducing consumer choice. Freedom of speech will be curtailed, online security weakened, and Italy’s digital infrastructure degraded. The end result of this law will be an overall impoverishment of Italian Internet users, Italian business, and the Italian economy. And all because of one industry’s obsession with policing copyright at all costs.
Orbital mechanics is a fun subject, as it involves a lot of seemingly empty space that’s nevertheless full of very real forces, all of which must be taken into account lest one’s spacecraft end up performing a sudden lithobraking maneuver into a planet or other significant collection of matter in said mostly empty space. The primary concern here is gravitational pull, and the way it affects one’s trajectory and velocity. With a single planet providing said gravitational pull this is quite straightforward to determine, but add in another body (like the Moon) and things get trickier. Add another big planetary body (or a star like our Sun), and you suddenly have yourself the restricted three-body problem, which has vexed mathematicians and others for centuries.
The three-body problem concerns the initial positions and velocities of three point masses. As they orbit each other and one tries to calculate their trajectories using Newton’s laws of motion and law of universal gravitation (or their later equivalents), the result is a chaotic system without a closed-form solution. In the context of orbital mechanics involving the Earth, Moon, and Sun this is rather annoying, but in 1772 Joseph-Louis Lagrange found a family of solutions in which the three masses form an equilateral triangle at each instant. Together with earlier work by Leonhard Euler, this led to the discovery of what today are known as Lagrangian (or Lagrange) points.
Having a few spots in an N-body configuration where you can be reasonably certain that your spacecraft won’t suddenly bugger off into weird directions that necessitate position corrections using wasteful thruster activations is definitely a plus. This is why especially space-based observatories such as the James Webb Space Telescope love to hang around in these spots.
Stable and Unstable Points
Although the definition of Lagrange points often makes it sound like you can put a spacecraft in that location and it’ll remain there forever, it’s essential to remember that ‘stationary’ only makes sense in a particular observer’s reference frame. The Moon orbits the Earth, which orbits the Sun, which ultimately orbits the center of the Milky Way, which moves relative to other galaxies. Or perhaps it’s just the expansion of space-time that makes it appear that the Milky Way moves, but that quickly gets one into the fun corners of theoretical physics.
A contour plot of the effective potential defined by gravitational and centripetal forces. (Credit: NASA)
Within the Sun-Earth system there are five Lagrange points (L1 through L5), of which L2 is currently home to the James Webb Space Telescope (JWST) and was previously home to observatories (like the NASA WMAP spacecraft) that benefit from keeping the Sun, Earth, and Moon all on the same side of the spacecraft. Similarly, L1 is ideal for any Sun observatory, as like L2 it is located within easy communication distance of Earth.
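Just how close "easy communication distance" is can be estimated with the Hill-sphere approximation, which puts the collinear L1 and L2 points at roughly r = a·(m/3M)^(1/3) from the smaller body when the mass ratio is small. This is a back-of-the-envelope sketch, not a full solution of the restricted three-body problem:

```python
# Rough distance of the collinear L1/L2 points from the secondary body,
# using the Hill-sphere approximation r = a * (m / (3*M))**(1/3), valid
# when the mass ratio m/M is small. A sketch, not a full CR3BP solution.

def l1_l2_distance(a_km, m_over_M):
    """Approximate L1/L2 distance (km) from the secondary body."""
    return a_km * (m_over_M / 3.0) ** (1.0 / 3.0)

# Sun-Earth: a ~ 1.496e8 km, Earth/Sun mass ratio ~ 3.0e-6
print(f"Sun-Earth L1/L2:  ~{l1_l2_distance(1.496e8, 3.0e-6):,.0f} km")

# Earth-Moon: a ~ 3.844e5 km, Moon/Earth mass ratio ~ 0.0123
print(f"Earth-Moon L1/L2: ~{l1_l2_distance(3.844e5, 0.0123):,.0f} km")
```

The Sun-Earth figure works out to about 1.5 million kilometers, which matches the oft-quoted distance of JWST from Earth; the Earth-Moon L1/L2 points sit only about 60,000 km from the Moon.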
Unsurprisingly, the L3 point is not a useful place to put observatories or other spacecraft, as the Sun would always block communication with Earth. What L3 has in common with L1 and L2 is that all three are unstable Lagrange points, requiring course and attitude adjustments approximately every 23 days. This contrasts with L4 and L5, the two ‘stable’ points. This can be seen in the above contour plot, where L4 and L5 sit on top of ‘hills’ and L1 through L3 sit on ‘saddles’ where the potential curves up in one direction and down in another.
One way to look at it is that satellites placed in the unstable points have a tendency to ‘wander off’, as they don’t have such a wide region of relatively little variance (contour lines placed far from each other) as L4 and L5 do. While this makes these stable points look amazing, they are not as close to Earth as L1 and L2, and they have a minor complication in the fact that they are already occupied, much like the Earth-Moon L4 and L5 points.
Because the L4 and L5 points are so stable, the Earth-Moon versions have found themselves home to the Kordylewski clouds: concentrations of dust first photographed by Polish astronomer Kazimierz Kordylewski in 1961 and confirmed multiple times since. Although a very faint phenomenon, there are numerous examples of objects caught at these points in, e.g., the Sun-Neptune system (Neptune trojans) and the Sun-Mars system (Mars trojans). Even the Earth has picked up a couple over the years, many of them asteroids. Of note is that the Earth’s Moon is not in either of these Lagrange points, having instead become gravitationally bound as a satellite.
All of which is a long way to say that it’s okay to put spacecraft in L4 and L5 points as long as you don’t mind fragile technology sharing the same region of space as some very large rocks, with an occasional new rocky friend getting drawn into the Lagrange point.
Stuff in Lagrange Points
A quick look at the Wikipedia list of objects at Lagrange points provides a long list of past and current natural and artificial objects at these locations, across a variety of systems. Sticking to just the things that we humans have built and sent into the Final Frontier, we can see that only the Sun-Earth and Earth-Moon systems have so far seen their Lagrange points collect more than space rocks and dust.
If all goes well, these will be joined by IMAP and SWFO-L1 in 2025, and NEO Surveyor in 2027. These spacecraft mostly image the Sun, monitor the solar wind, or image the Earth and its weather patterns, tasks for which the L1 point is rather excellent. Of note here is that, strictly speaking, most of these do not simply linger at the L1 point, but rather follow a Lissajous orbit around it. This particular orbital trajectory was designed to compensate for the instability of the L1 through L3 points and minimize the need for course corrections.
Moving on, the Sun-Earth L2 point is also rather busy.
Many of the planned spacecraft that should be joining the L2 point are also observatories for a wide range of missions, ranging from general observations in a wide range of spectra to exoplanet and comet hunting.
Despite the distance and hazards of the Sun-Earth L4 and L5 points, these host the Solar TErrestrial RElations Observatory (STEREO) A and B solar observation spacecraft. The OSIRIS-REx and Hayabusa 2 spacecraft have passed through or near one of these points during their missions. The only spacecraft planned to be positioned at one of these points is ESA’s Vigil, which is scheduled to launch by 2031 and will be at L5.
Contour plot of the Earth-Moon Lagrange points. (Credit: NASA)
Only the Moon’s L2 point currently has a number of spacecraft crowding about, with NASA’s THEMIS satellites going through their extended mission observations, alongside the Chinese relay satellite Queqiao-2 which supported the Chang’e 6 sample retrieval mission.
In terms of upcoming spacecraft to join the sparse Moon Lagrange crowd, the Exploration Gateway Platform was a Boeing-proposed lunar space station, but it was discarded in favor of the Lunar Gateway which will be placed in a polar near-rectilinear halo orbit (NRHO) with an orbital period of about 7 days. This means that this space station will cover more of the Moon’s orbit rather than remain stationary. It is intended to be launched in 2027, as part of the NASA Artemis program.
Orbital Mechanics Fun
The best part of orbits is that you have so many to pick from, allowing you to not only pick the ideal spot to idle at if that’s the mission profile, but also to transition between them such as when traveling from the Earth to the Moon with e.g. a trans-lunar injection (TLI) maneuver. This involves a low Earth orbit (LEO) which transitions into a powered, high eccentric orbit which approaches the Moon’s gravitational sphere of influence.
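The size of a TLI burn can be estimated with the vis-viva equation, v² = μ(2/r − 1/a). The sketch below is a two-body back-of-the-envelope calculation (a real TLI targets the Moon's sphere of influence rather than its center, and the parking-orbit altitude is an assumed value):

```python
import math

# Back-of-the-envelope TLI delta-v from a circular low Earth orbit, using
# the vis-viva equation v^2 = mu * (2/r - 1/a). Two-body sketch only.

MU_EARTH = 398600.0  # km^3/s^2, Earth's gravitational parameter
R_LEO = 6678.0       # km, ~300 km altitude parking orbit (assumed)
R_MOON = 384400.0    # km, mean Earth-Moon distance

v_circ = math.sqrt(MU_EARTH / R_LEO)          # circular LEO speed
a_transfer = (R_LEO + R_MOON) / 2.0           # transfer-ellipse semi-major axis
v_peri = math.sqrt(MU_EARTH * (2.0 / R_LEO - 1.0 / a_transfer))
dv = v_peri - v_circ                          # burn at perigee
print(f"LEO speed ~{v_circ:.2f} km/s, TLI burn ~{dv:.2f} km/s")
```

This lands at roughly 3.1 km/s on top of the ~7.7 km/s orbital speed in LEO, in line with the TLI burns flown by Apollo and later lunar missions.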
Within this and low-energy transfer alternatives, the restricted three-body problem continuously applies, meaning that the calculations for such a transfer have to account for as many variables as possible, in the knowledge that there is no perfect solution. With our current level of knowledge we can only bask in the predictable peace and quiet of the Lagrange points, if moving away from all those nasty gravity wells like the Voyager spacecraft did is not an option.