The forest carbon offsets approved by the world’s leading provider and used by Disney, Shell, Gucci and other big corporations are largely worthless and could make global heating worse, according to a new investigation.
The research into Verra, the world’s leading carbon standard for the rapidly growing $2bn (£1.6bn) voluntary offsets market, analysed a significant share of Verra's projects and found that more than 90% of their rainforest offset credits – among the most commonly used by companies – are likely to be “phantom credits” that do not represent genuine carbon reductions.
The analysis raises questions over the credits bought by a number of internationally renowned companies – some of which have labelled their products “carbon neutral”, or told their consumers they can fly, buy new clothes or eat certain foods without making the climate crisis worse.
But doubts have been raised repeatedly over whether they are really effective.
The nine-month investigation has been undertaken by the Guardian, the German weekly Die Zeit and SourceMaterial, a non-profit investigative journalism organisation. It is based on new analysis of scientific studies of Verra’s rainforest schemes.
[…]
Verra argues that the conclusions reached by the studies are incorrect and questions their methodology. It also points out that its work since 2009 has allowed billions of dollars to be channelled to the vital work of preserving forests.
The investigation found that:
Only a handful of Verra’s rainforest projects showed evidence of deforestation reductions, according to two studies, with further analysis indicating that 94% of the credits had no benefit to the climate.
The threat to forests had been overstated by about 400% on average for Verra projects, according to analysis of a 2022 University of Cambridge study.
Gucci, Salesforce, BHP, Shell, easyJet, Leon and the band Pearl Jam were among dozens of companies and organisations that have bought rainforest offsets approved by Verra for environmental claims.
Human rights issues are a serious concern in at least one of the offsetting projects. The Guardian visited a flagship project in Peru, and was shown videos that residents said showed their homes being cut down with chainsaws and ropes by park guards and police. They spoke of forced evictions and tensions with park authorities.
[…]
Two different groups of scientists – one internationally based, the other from Cambridge in the UK – looked at a total of about two-thirds of the 87 active Verra-approved projects. The researchers left out the rest when they felt there was not enough information available to assess them fairly.
The two studies from the international group of researchers found that, of the 29 Verra-approved projects where further analysis was possible, just eight showed evidence of meaningful deforestation reductions.
The journalists were able to do further analysis on those projects, comparing the estimates made by the offsetting projects with the results obtained by the scientists. The analysis indicated about 94% of the credits the projects produced should not have been approved.
Credits from 21 projects had no climate benefit, seven had between 52% and 98% fewer than claimed using Verra’s system, and one had 80% more impact, the investigation found.
Separately, the study by the University of Cambridge team of 40 Verra projects found that while a number had stopped some deforestation, the areas were extremely small. Just four projects were responsible for three-quarters of the total forest that was protected.
The journalists again analysed these results more closely and found that, in 32 projects where it was possible to compare Verra’s claims with the study findings, baseline scenarios of forest loss appeared to be overstated by about 400%. Three projects in Madagascar achieved excellent results and have a significant impact on that figure; if those projects are excluded, the average inflation rises to about 950%.
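The arithmetic behind that kind of swing is worth making concrete. The sketch below uses invented numbers and weights (the journalists' actual weighting method isn't described here) to show how a few large, well-performing projects can pull a credit-weighted average down from roughly 950% to roughly 400%:

```python
# Hypothetical illustration: a handful of large, well-performing projects
# can dominate a weighted average of baseline inflation.

def weighted_average(values, weights):
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

# 29 invented projects overstating their baselines by ~950%, plus three
# strong performers (inflation ~0%) carrying much larger credit volumes.
inflations = [950.0] * 29 + [0.0] * 3
weights = [1.0] * 29 + [13.3] * 3

print(round(weighted_average(inflations, weights)))            # 400 with outliers
print(round(weighted_average(inflations[:29], weights[:29])))  # 950 without them
```

The point is not the specific numbers but the structure: an average over projects is very sensitive to how the outliers are weighted.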
[…]
Barbara Haya, the director of the Berkeley Carbon Trading Project, has been researching carbon credits for 20 years, hoping to find a way to make the system function. She said: “The implications of this analysis are huge. Companies are using credits to make claims of reducing emissions when most of these credits don’t represent emissions reductions at all.
“Rainforest protection credits are the most common type on the market at the moment. And it’s exploding, so these findings really matter. But these problems are not just limited to this credit type. These problems exist with nearly every kind of credit.
“One strategy to improve the market is to show what the problems are and really force the registries to tighten up their rules so that the market could be trusted. But I’m starting to give up on that. I started studying carbon offsets 20 years ago studying problems with protocols and programs. Here I am, 20 years later having the same conversation. We need an alternative process. The offset market is broken.”
The Defense Advanced Research Projects Agency has moved into the next phase of its Control of Revolutionary Aircraft with Novel Effectors program, or CRANE. The project is centered on an experimental uncrewed aircraft, which Aurora Flight Sciences is developing, that does not have traditional moving surfaces to control the aircraft in flight.
Aurora Flight Sciences’ CRANE design, which does not yet have an official X-plane designation or nickname, instead uses an active flow control (AFC) system to maneuver the aircraft using bursts of highly pressurized air. This technology could eventually find its way onto other military and civilian designs. It could have particularly significant implications when applied to future stealth aircraft.
A subscale wind tunnel model of Aurora Flight Sciences’ CRANE X-plane design. Aurora Flight Sciences
The Defense Advanced Research Projects Agency (DARPA) issued a press release regarding the latest developments in the CRANE program yesterday. Aurora Flight Sciences, a subsidiary of Boeing, announced it had received a Phase 2 contract to continue work on this project back on December 12, 2022.
[…]
The design that Aurora ultimately settled on was more along the lines of a conventional plane. However, it has a so-called Co-Planar Joined Wing (CJW) planform consisting of two sets of wings attached to a single center fuselage that merge together at the tips, along with a twin vertical tail arrangement. As currently designed, the drone will use “banks” of nozzles installed at various points on the wings to maneuver in the air.
A wind tunnel model of one of Aurora Flight Sciences’ initial CRANE concepts with a joined wing. Aurora Flight Sciences
A wind tunnel model showing a more recent evolution of Aurora Flight Sciences’ CRANE X-plane design. Aurora Flight Sciences
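As a rough illustration of how fixed nozzle banks could stand in for ailerons, elevators and rudders, the sketch below maps a desired set of body moments to per-bank flow commands with a least-squares fit. The effectiveness matrix is entirely invented and does not reflect Aurora's actual control laws:

```python
import numpy as np

# Invented sketch of control allocation for nozzle banks: each column of B
# is the (roll, pitch, yaw) moment produced by unit airflow from one bank.
B = np.array([
    [1.0, -1.0,  0.3, -0.3,  0.0,  0.0],   # roll moment per bank
    [0.2,  0.2, -0.8, -0.8,  0.5,  0.5],   # pitch moment per bank
    [0.1, -0.1,  0.0,  0.0, -0.6,  0.6],   # yaw moment per bank
])

def allocate(desired):
    """Minimum-norm flow commands that produce the desired moments."""
    u, *_ = np.linalg.lstsq(B, desired, rcond=None)
    return u

cmd = allocate(np.array([0.5, -0.2, 0.0]))     # roll right, slight nose-down
print(np.allclose(B @ cmd, [0.5, -0.2, 0.0]))  # True: moments are reproduced
```

Because there are more nozzle banks than axes, many command combinations produce the same moments; the least-squares solution simply picks the one with the smallest total flow.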
The aircraft’s main engine arrangement is not entirely clear. A chin air intake under the forward fuselage, together with a single exhaust nozzle at the rear seen in official concept art and on wind tunnel models, would seem to point to a plan to power the aircraft with a single jet engine.
[…]
Interestingly, Aurora’s design “is configured to be a modular testbed featuring replaceable outboard wings and swappable AFC effectors. The modular design allows for testing of not only Aurora’s AFC effectors but also AFC effectors of various other designs,” a company press release issued in December 2022 said. “By expanding testing capabilities beyond Aurora-designed components, the program further advances its goal to provide the confidence needed for future aircraft requirements, both military and commercial, to include AFC-enabled capabilities.”
Aurora has already done significant wind tunnel testing of subscale models with representative AFC components as part of CRANE’s Phase 1. The company, along with Lockheed Martin, was chosen to proceed to that phase of the program in 2021.
“Using a 25% scale model, Aurora conducted tests over four weeks at a wind tunnel facility in San Diego, California. In addition to 11 movable conventional control surfaces, the model featured 14 AFC banks with eight fully independent controllable AFC air supply channels,” according to a press release the company put out in May 2022.
[…]
Getting rid of traditional control surfaces inherently allows for a design to be more aerodynamic, and therefore fly in a more efficient manner, especially at higher altitudes. An aircraft with an AFC system doesn’t need the various actuators and other components to move things like ailerons and rudders, offering new ways to reduce weight and bulk.
A DARPA briefing slide showing how the designs of traditional control surfaces, at their core, have remained largely unchanged after more than a century of other aviation technology developments. DARPA
A lighter and more streamlined aircraft design using an AFC system might be capable of greater maneuverability. This could be particularly true for uncrewed types that also do not have to worry about the physical limitations of a pilot.
The elimination of so many moving parts also means fewer things that can break, improving safety and reliability. This would do away with various maintenance and logistics requirements, too. It might make a military design more resilient to battle damage and easier to fix, as well.
[…]
The CRANE program and Aurora Flight Sciences’ design are of course not the first experiments with AFC technology. U.K.-headquartered BAE Systems, another participant in CRANE’s Phase 0, has been very publicly experimenting with various AFC concepts since at least 2010. The most recent of these developments was an AFC-equipped design called MAGMA, described by BAE as a “large model,” which actually flew.
“Over the past several decades, the active flow control community has made significant advancements that enable the integration of active flow control technologies into advanced aircraft,” Richard Wlezien, the CRANE Program Manager at DARPA, said in a statement included in the press release. “We are confident about completing the design and flight test of a demonstration aircraft with AFC as the primary design consideration.”
Ammaar Reshi wrote and illustrated a children’s book in 72 hours using ChatGPT and Midjourney.
The book went viral on Twitter after it was met with intense backlash from artists.
Reshi said he respected the artists’ concerns but felt some of the anger was misdirected.
Ammaar Reshi was reading a bedtime story to his friend’s daughter when he decided he wanted to write his own.
Reshi, a product-design manager at a financial-tech company based in San Francisco, told Insider he had little experience in illustration or creative writing, so he turned to AI tools.
In December he used OpenAI’s new chatbot, ChatGPT, to write “Alice and Sparkle,” a story about a girl named Alice who wants to learn about the world of tech, and her robot friend, Sparkle. He then used Midjourney, an AI art generator, to illustrate it.
Just 72 hours later, Reshi self-published his book on Amazon’s digital bookstore. The following day, he had the paperback in his hands, made for free via another Amazon service called KDP.
“Alice and Sparkle” was meant to be a gift for his friends’ kids. Ammaar Reshi
He said he paid nothing to create and publish the book, though he was already paying for a $30-a-month Midjourney subscription.
Impressed with the speed and results of his project, Reshi shared the experience in a Twitter thread that attracted more than 2,000 comments and 5,800 retweets.
Reshi said he initially received positive feedback from users praising his creativity. But the next day, the responses were filled with vitriol.
“There was this incredibly passionate response,” Reshi said. “At 4 a.m. I was getting woken up by my phone blowing up every two minutes with a new tweet saying things like, ‘You’re scum’ and ‘We hate you.'”
Reshi said he was shocked by the intensity of the responses for what was supposed to be a gift for the children of some friends. It was only when he started reading through them that he discovered he had landed himself in the middle of a much larger debate.
Artists accused him of theft
Reshi’s book touched a nerve with some artists who argue that AI art generators are stealing their work.
Some artists claim their art has been used to train AI image generators like Midjourney without their permission. Users can enter artists’ names as prompts to generate art in their style.
Lensa AI, a photo-editing tool, went viral on social media last year after it launched an update that used AI to transform users’ selfies into works of art, leading artists to highlight their concerns about AI programs taking inspiration from their work without permission or payment.
“I had not read up on the issues,” Reshi said. “I realized that Lensa had actually caused this whole thing with that being a very mainstream app. It had spread that debate, and I was just getting a ton of hate for it.”
“I was just shocked, and honestly I didn’t really know how to deal with it,” he said.
Among the nasty messages, Reshi said he found people with reasonable and valid concerns.
“Those are the people I wanted to engage with,” he said. “I wanted a different perspective. I think it’s very easy to be caught up in your bubble in San Francisco and Silicon Valley, where you think this is making leaps, but I wanted to hear from people who thought otherwise.”
After learning more, he added to his Twitter thread saying that artists should be involved in the creation of AI image generators and that their “talent, skill, hard work to get there needs to be respected.”
He said he thinks some of the hate was misdirected at his one-off project, when Midjourney allows users to “generate as much art as they want.”
Reshi’s book was briefly removed from Amazon — he said Amazon paused its sales from January 6 to January 14, citing “suspicious review activity,” which he attributed to the volume of both five- and one-star reviews. He had sold 841 copies before it was removed.
Midjourney’s founder, David Holz, told Insider: “Very few images made on our service are used commercially. It’s almost entirely for personal use.”
He said that data for all AI systems are “sourced from broadly spidering the internet,” and most of the data in Midjourney’s model are “just photos.”
A creative process
Reshi said the project was never about claiming authorship over the book.
“I wouldn’t even call myself the author,” he said. “The AI is essentially the ghostwriter, and the other AI is the illustrator.”
But he did think the process was a creative one. He said he spent hours tweaking the prompts in Midjourney to try to achieve consistent illustrations.
While he managed to generate a consistent image of his heroine, Alice, to appear throughout the book, he wasn’t able to do the same for her robot friend and had to use a picture of a different robot each time it appeared.
“It was impossible to get Sparkle the robot to look the same,” he said. “It got to a point where I had to include a line in the book that says Sparkle can turn into all kinds of robot shapes.”
Reshi’s children’s book stirred up anger on Twitter. Ammaar Reshi
Some people also attacked the quality of the book’s writing and illustrations.
“The writing is stiff and has no voice whatsoever,” one Amazon reviewer said. “And the art — wow — so bad it hurts. Tangents all over the place, strange fingers on every page, and inconsistencies to the point where it feels like these images are barely a step above random.”
Reshi said he would be hesitant to put out an illustrated book again, but he would like to try other projects with AI.
“I’d use ChatGPT for instance,” he said, saying there seem to be fewer concerns around content ownership than with AI image generators.
The goal of the project was always to gift the book to the two children of his friends, who both liked it, Reshi added.
“It worked with the people I intended, which was great,” he said.
European researchers have successfully tested a system that uses terawatt-level laser pulses to steer lightning toward a 26-foot rod. Unlike a conventional lightning rod, it's not limited by its physical height and can cover much wider areas — in this case, 590 feet — while penetrating clouds and fog.
The design ionizes nitrogen and oxygen molecules, releasing electrons and creating a plasma that conducts electricity. Because the laser fires at a rapid 1,000 pulses per second, it's considerably more likely to intercept lightning as it forms. In the test, conducted between June and September 2021, lightning followed the beam for nearly 197 feet before hitting the rod.
[…]
The University of Glasgow’s Matteo Clerici, who didn’t work on the project, noted to The Journal that the laser in the experiment costs about $2.17 million. The researchers also plan to significantly extend the range, to the point where a 33-foot rod would have an effective coverage of 1,640 feet.
[…] Nanosys, a company whose quantum dot technology is in millions of TVs, offered to show me a top-secret prototype of a next-generation display. Not just any next-gen display, but one I’ve been writing about for years and which has the potential to dethrone OLED as the king of displays.
[…]
Electroluminescent quantum dots. These are even more advanced than the quantum dots found in today's TVs, and could eventually replace LCD and OLED in phones and TVs. They promise improved picture quality, energy savings and manufacturing efficiency. A simpler structure makes these displays theoretically so easy to produce, they could usher in a sci-fi world of inexpensive screens on everything from eyeglasses to windscreens and windows.
[…]
Quantum dots are tiny particles that when supplied with energy emit specific wavelengths of light. Different size quantum dots emit different wavelengths. Or to put it another way, some dots emit red light, others green, and others still, blue.
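That size-to-color relationship comes from quantum confinement, and it can be sketched numerically. The toy model below uses a simplified particle-in-a-sphere estimate with textbook CdSe constants and ignores the Coulomb term of the full Brus equation, so the numbers are indicative only:

```python
import math

# Rough particle-in-a-sphere estimate of why smaller quantum dots emit
# bluer light. Constants are textbook CdSe values.
HBAR2_OVER_2M0 = 3.81   # hbar^2 / (2 * m0), in eV * angstrom^2
E_GAP = 1.74            # bulk CdSe band gap, eV
M_E, M_H = 0.13, 0.45   # effective electron / hole masses, units of m0

def emission_wavelength_nm(radius_nm):
    r = radius_nm * 10.0  # nm -> angstrom
    confinement = HBAR2_OVER_2M0 * math.pi**2 / r**2 * (1 / M_E + 1 / M_H)
    return 1239.8 / (E_GAP + confinement)  # photon energy (eV) -> nm

for r_nm in (2.0, 3.0, 4.0):
    # smaller dot -> larger confinement energy -> shorter wavelength
    print(f"radius {r_nm} nm -> ~{emission_wavelength_nm(r_nm):.0f} nm")
```

The trend is the physically meaningful part: shrinking the dot from ~4 nm to ~2 nm radius pushes the emission from red toward blue.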
[…]
For the last few years, quantum dots have been used by TV manufacturers to boost the brightness and color of LCD TVs. The “Q” in QLED TV stands for “quantum.”
The quantum dots used in display tech up to this point are what’s called “photoluminescent.” They absorb light, then emit light.
[…]
The prototype I saw was completely different. No traditional LEDs and no OLED. Instead of using light to excite quantum dots into emitting light, it uses electricity. Nothing but quantum dots. Electroluminescent, aka direct-view, quantum dots.
[…]
Theoretically, this will mean thinner, more energy-efficient displays that are easier, and therefore cheaper, to manufacture.
[…]
Nanosys calls this direct-view, electroluminescent quantum dot tech “nanoLED”.
[…]
Because the display structure is simpler, QD-based displays can be incorporated in a wider variety of situations, or more specifically, on a wider variety of surfaces. Essentially, you can print an entire QD display onto a surface without the heat required by other “printable” tech.
What does this mean? Just about any flat or curved surface could be a screen.
[…]
For instance, you could incorporate a screen onto the windshield of a car for a more elaborate, high-resolution, easy-to-see, heads-up display. Speed and navigation directions for sure, but how about augmented reality for safer nighttime driving with QD-display-enhanced lane markers and street signs?
[…]
AR glasses have been a thing, but they’re bulky, low resolution and, to be perfectly honest, lame. A QD display could be printed on the lenses themselves, requiring less elaborate electronics in the frames.
[…]
I think an obvious early use, despite how annoying it could be, would be bus or subway windows. These will initially be pitched by cities as a way to show people important info, but inevitably they’ll be used for advertising. That’s certainly not a knock against the tech, just how things work in the world.
[…]
5-10 years from now we’ll almost certainly have options for QD displays in our phones, probably in our living rooms, and possibly on our windshields and windows.
Contrails — the wispy ice clouds trailing behind flying jets — “are surprisingly bad for the environment,” reports CNN: A study that looked at aviation’s contribution to climate change between 2000 and 2018 concluded that contrails create 57% of the sector’s warming impact, significantly more than the CO2 emissions from burning fuel. They do so by trapping heat that would otherwise be released into space.
And yet the problem may have a surprisingly straightforward solution. Contrails — short for condensation trails, which form when water vapor condenses into ice crystals around the small particles emitted by jet engines — require cold and humid atmospheric conditions, and don’t always stay around for long. Researchers say that by targeting specific flights that have a high chance of producing contrails, and varying their flight path ever so slightly, much of the damage could be prevented.
Adam Durant, a volcanologist and entrepreneur based in the UK, is aiming to do just that. “We could, in theory, solve this problem for aviation within one or two years,” he says…. Of contrails’ climate impact, “80 or 90% is coming from only maybe five to 10% of all flights,” says Durant. “Simply redirecting a small proportion of flights can actually save the majority of the contrail climate impact….”
In 2021, scientists calculated that addressing the contrail problem would cost under $1 billion a year, but provide benefits worth more than 1,000 times as much. And a study from Imperial College London showed that diverting just 1.7% of flights could reduce the climate damage of contrails by as much as 59%.
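Taken together, the article's figures imply striking leverage, and the arithmetic is simple enough to check directly:

```python
# Figures from the article; the arithmetic below is the only addition.
annual_cost = 1e9          # under $1bn per year to address contrails
benefit_ratio = 1000       # benefits worth more than 1,000x the cost
flights_diverted = 0.017   # 1.7% of flights rerouted
damage_avoided = 0.59      # up to 59% of contrail climate damage prevented

implied_benefit = annual_cost * benefit_ratio   # over $1tn per year
leverage = damage_avoided / flights_diverted    # ~35x payoff per diverted flight
print(implied_benefit, round(leverage, 1))
```

That roughly 35-to-1 ratio reflects Durant's point that a small share of flights produces most of the contrail warming.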
Durant’s company Satavia is now testing its technology with two airlines and “actively looking for more airlines in 2023 to work with, as we start scaling up the service that we offer.”
Truly addressing the issue may require some changes to air traffic rules, Durant says — but he’s not the only one working on the issue. There’s also the task force of a non-profit energy think tank that includes six airlines, plus researchers and academics. “We could seriously reduce, say, 50% of the industry’s contrails impact by 2030,” Durant tells CNN. “That’s totally attainable, because we can do it with software and analytics.”
Meta has filed a lawsuit against Voyager Labs, which it has accused of creating tens of thousands of fake accounts to scrape data from more than 600,000 Facebook users’ profiles. It says the surveillance company pulled information such as posts, likes, friend lists, photos, and comments, along with other details from groups and pages. Meta claims that Voyager masked its activity using its Surveillance Software, and that the company has also scraped data from Instagram, Twitter, YouTube, LinkedIn and Telegram to sell and license for profit.
In the complaint, which was obtained by Gizmodo, Meta has asked a judge to permanently ban Voyager from Facebook and Instagram. “As a direct result of Defendant’s unlawful actions, Meta has suffered and continues to suffer irreparable harm for which there is no adequate remedy at law, and which will continue unless Defendant’s actions are enjoined,” the filing reads. Meta said Voyager’s actions have caused it “to incur damages, including investigative costs, in an amount to be proven at trial.”
Meta claims that Voyager scraped data from accounts belonging to “employees of non-profit organizations, universities, news media organizations, healthcare facilities, the armed forces of the United States, and local, state, and federal government agencies, as well as full-time parents, retirees, and union members.” The company noted in a blog post that it disabled accounts linked to Voyager and filed the suit to enforce its terms and policies.
[…]
In 2021, The Guardian reported that the Los Angeles Police Department had tested Voyager’s social media surveillance tools in 2019. The company is said to have told the department that police could use the software to track the accounts of a suspect’s friends on social media, and that the system could predict crimes before they took place by making assumptions about a person’s activity.
According to The Guardian, Voyager has suggested factors like Instagram usernames denoting Arab pride or tweeting about Islam could indicate someone is leaning toward extremism. Other companies, such as Palantir, have worked on predictive policing tech. Critics such as the Electronic Frontier Foundation claim that tech can’t predict crime and that algorithms merely perpetuate existing biases.
Data scraping is an issue that Meta has to take seriously. In 2021, it sued an individual for allegedly scraping data on more than 178 million users. Last November, the Irish Data Protection Commission fined the company €265 million ($277 million) for failing to stop bad actors from obtaining millions of people’s phone numbers and other data, which were published elsewhere online. The regulator said Meta failed to comply with GDPR data protection rules.
YouTube is rethinking its approach to colorful language after an uproar. In a statement to The Verge, the Google brand says it’s “making some adjustments” to a profanity policy it unveiled in November after receiving blowback from creators. The rule limits or removes ads on videos where someone swears within the first 15 seconds or where rude words see “focal usage” throughout, and completely demonetizes a clip if swearing either occurs in the first seven seconds or dominates the content.
While that policy wouldn’t necessarily be an issue by itself, YouTube has been applying the criteria to videos uploaded before the new rule took effect. As Kotaku explains, YouTube has demonetized old videos for channels like RTGame. Producers haven’t had success appealing these decisions, and the company won’t let users edit these videos to pass muster.
Communication has also been a problem. YouTube doesn’t usually tell violators exactly what they did wrong, and creators tend to only learn about the updated policy after the service demonetizes their work. There are also concerns about inconsistency. Some videos are flagged while others aren’t, and a remonetized video might lose that income a day later. Even ProZD’s initial video criticizing the policy, which was designed to honor the rules, lost ad revenue after two days.
Scientists at the U.S. Department of Energy’s (DOE) Brookhaven National Laboratory have successfully demonstrated that autonomous methods can discover new materials. The artificial intelligence (AI)-driven technique led to the discovery of three new nanostructures, including a first-of-its-kind nanoscale “ladder.” The research was published today in Science Advances.
The newly discovered structures were formed by a process called self-assembly, in which a material’s molecules organize themselves into unique patterns. Scientists at Brookhaven’s Center for Functional Nanomaterials (CFN) are experts at directing the self-assembly process, creating templates for materials to form desirable arrangements for applications in microelectronics, catalysis, and more. Their discovery of the nanoscale ladder and other new structures further widens the scope of self-assembly’s applications.
[…]
“gpCAM is a flexible algorithm and software for autonomous experimentation,” said Berkeley Lab scientist and co-author Marcus Noack. “It was used particularly ingeniously in this study to autonomously explore different features of the model.”
[…]
“An old school way of doing material science is to synthesize a sample, measure it, learn from it, and then go back and make a different sample and keep iterating that process,” Yager said. “Instead, we made a sample that has a gradient of every parameter we’re interested in. That single sample is thus a vast collection of many distinct material structures.”
Then, the team brought the sample to NSLS-II, which generates ultrabright X-rays for studying the structure of materials.
[…]
“One of the SMI beamline’s strengths is its ability to focus the X-ray beam on the sample down to microns,” said NSLS-II scientist and co-author Masa Fukuto. “By analyzing how these microbeam X-rays get scattered by the material, we learn about the material’s local structure at the illuminated spot. Measurements at many different spots can then reveal how the local structure varies across the gradient sample. In this work, we let the AI algorithm pick, on the fly, which spot to measure next to maximize the value of each measurement.”
As the sample was measured at the SMI beamline, the algorithm, without human intervention, created a model of the material’s numerous and diverse structures. The model updated itself with each subsequent X-ray measurement, making every measurement more insightful and accurate.
The Soft Matter Interfaces (SMI) beamline at the National Synchrotron Light Source II. Credit: Brookhaven National Laboratory
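A minimal sketch of that measure/model/decide loop, assuming a simple Gaussian-process surrogate that always measures where its own uncertainty is highest. The real experiment used the gpCAM software and live beamline data, and also models the measured values; this toy version shows only the autonomous site selection:

```python
import numpy as np

def rbf(a, b, length=0.15):
    """Squared-exponential kernel between two sets of 1-D positions."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def posterior_variance(x_meas, x_query, noise=1e-6):
    """GP posterior variance at x_query given measurements at x_meas."""
    k_mm = rbf(x_meas, x_meas) + noise * np.eye(len(x_meas))
    k_qm = rbf(x_query, x_meas)
    solve = np.linalg.solve(k_mm, k_qm.T)
    return 1.0 - np.sum(k_qm * solve.T, axis=1)

grid = np.linspace(0, 1, 201)   # positions along the gradient sample
measured = [0.0, 1.0]           # seed with the sample's two ends
for _ in range(18):
    var = posterior_variance(np.array(measured), grid)
    measured.append(float(grid[np.argmax(var)]))  # probe where model knows least

print(len(measured))  # 20 autonomously chosen measurement positions
```

Each pick collapses the model's uncertainty near that spot, so successive measurements spread out to cover the sample efficiently rather than scanning it exhaustively.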
In a matter of hours, the algorithm had identified three key areas in the complex sample for the CFN researchers to study more closely. They used the CFN electron microscopy facility to image those key areas in exquisite detail, uncovering the rails and rungs of a nanoscale ladder, among other novel features.
From start to finish, the experiment ran about six hours. The researchers estimate they would have needed about a month to make this discovery using traditional methods.
“Autonomous methods can tremendously accelerate discovery,” Yager said. “It’s essentially ‘tightening’ the usual discovery loop of science, so that we cycle between hypotheses and measurements more quickly. Beyond just speed, however, autonomous methods increase the scope of what we can study, meaning we can tackle more challenging science problems.”
[…]
“We are now deploying these methods to the broad community of users who come to CFN and NSLS-II to conduct experiments,” Yager said. “Anyone can work with us to accelerate the exploration of their materials research. We foresee this empowering a host of new discoveries in the coming years, including in national priority areas like clean energy and microelectronics.”
Hundreds of millions of light-years away in a distant galaxy, a star orbiting a supermassive black hole is being violently ripped apart under the black hole’s immense gravitational pull. As the star is shredded, its remnants are transformed into a stream of debris that rains back down onto the black hole to form a very hot, very bright disk of material swirling around the black hole, called an accretion disk. This phenomenon—where a star is destroyed by a supermassive black hole and fuels a luminous accretion flare—is known as a tidal disruption event (TDE), and it is predicted that TDEs occur roughly once every 10,000 to 100,000 years in a given galaxy.
[…]
TDEs are usually “once-and-done” because the extreme gravitational field of the SMBH destroys the star, meaning that the SMBH fades back into darkness following the accretion flare. In some instances, however, the high-density core of the star can survive the gravitational interaction with the SMBH, allowing it to orbit the black hole more than once. Researchers call this a repeating partial TDE.
[…]
The findings, published in Astrophysical Journal Letters, describe the capture of the star by an SMBH, the stripping of the material each time the star comes close to the black hole, and the delay between when the material is stripped and when it feeds the black hole again.
[…]
Once bound to the SMBH, the star powering the emission from AT2018fyk has been repeatedly stripped of its outer envelope each time it passes through its point of closest approach with the black hole. The stripped outer layers of the star form the bright accretion disk, which researchers can study using X-ray and ultraviolet/optical telescopes that observe light from distant galaxies.
[…]
“Until now, the assumption has been that when we see the aftermath of a close encounter between a star and a supermassive black hole, the outcome will be fatal for the star, that is, the star is completely destroyed,” he says. “But contrary to all other TDEs we know of, when we pointed our telescopes to the same location again several years later, we found that it had re-brightened again. This led us to propose that rather than being fatal, part of the star survived the initial encounter and returned to the same location to be stripped of material once more, explaining the re-brightening phase.”
[…]
So how could a star survive its brush with death? It all comes down to a matter of proximity and trajectory. If the star collided head-on with the black hole and passed the event horizon—the threshold where the speed needed to escape the black hole surpasses the speed of light—the star would be consumed by the black hole. If the star passed very close to the black hole and crossed the so-called “tidal radius”—where the tidal force of the hole is stronger than the gravitational force that keeps the star together—it would be destroyed. In the model they have proposed, the star’s orbit reaches a point of closest approach that is just outside of the tidal radius, but doesn’t cross it completely: some of the material at the stellar surface is stripped by the black hole, but the material at its center remains intact.
[…]
More information: T. Wevers et al, Live to Die Another Day: The Rebrightening of AT 2018fyk as a Repeating Partial Tidal Disruption Event, The Astrophysical Journal Letters (2023). DOI: 10.3847/2041-8213/ac9f36
After a week of silence amid intense backlash, Dungeons & Dragons publisher Wizards of the Coast (WoTC) has finally addressed its community’s concerns about changes to the open gaming license. The open gaming license (OGL) has existed since 2000 and has made it possible for a diverse ecosystem of third-party creators to publish virtual tabletop software, expansion books and more. Many of these creators can make a living thanks to the OGL. But over the last week, a new version of the OGL leaked after WoTC sent it to some top creators. More than 66,000 Dungeons & Dragons fans signed an open letter under the name #OpenDnD ahead of an expected announcement, and waves of users deleted their subscriptions to D&D Beyond, WoTC’s online platform. Now, WoTC admitted that “it’s clear from the reaction that we rolled a 1.” Or, in non-Dungeons and Dragons speak, they screwed up.
“We wanted to ensure that the OGL is for the content creator, the homebrewer, the aspiring designer, our players, and the community — not major corporations to use for their own commercial and promotional purpose,” the company wrote in a statement. But fans have critiqued this language, since WoTC — a subsidiary of Hasbro — is a “major corporation” in itself. Hasbro earned $1.68 billion in revenue during the third quarter of 2022. TechCrunch spoke to content creators who had received the unpublished OGL update from WoTC. The terms of this updated OGL would force any creator making more than $50,000 to report earnings to WoTC. Creators earning over $750,000 in gross revenue would have to pay a 25% royalty. The latter creators are the closest thing that third-party Dungeons & Dragons content has to “major corporations” — but gross revenue is not a reflection of profit, so to refer to these companies in that way is a misnomer. […] The fan community also worried about whether WoTC would be allowed to publish and profit off of third-party work without credit to the original creator. Noah Downs, a partner at Premack Rogers and a Dungeons & Dragons livestreamer, told TechCrunch that there was a clause in the document that granted WoTC a perpetual, royalty-free sublicense to all third-party content created under the OGL.
Now, WoTC appears to be walking back both the royalty clause and the perpetual license. “What [the next OGL] will not contain is any royalty structure. It also will not include the license back provision that some people were afraid was a means for us to steal work. That thought never crossed our minds,” WoTC wrote in a statement. “Under any new OGL, you will own the content you create. We won’t.”
WoTC claims that it included this language in the leaked version of the OGL to prevent creators from being able to “incorrectly allege” that WoTC stole their work. Throughout the statement, WoTC refers to the document that certain creators received as a draft; however, creators who received the document told TechCrunch that it was sent to them with the intention of getting them to sign off on it.
The backlash against these terms was so severe that other tabletop roleplaying game (TTRPG) publishers took action. Paizo is the publisher of Pathfinder, a popular game covered under WoTC’s original OGL. Paizo’s owner and president were leaders at Wizards of the Coast at the time the OGL was originally published in 2000, and wrote in a statement yesterday that the company was prepared to go to court over the idea that WoTC could suddenly revoke the OGL license from existing projects. Along with other publishers like Kobold Press, Chaosium and Legendary Games, Paizo announced it would release its own Open RPG Creative License (ORC).
“Ultimately, the collective action of the signatures on the open letter and unsubscribing from D&D Beyond made a difference. We have seen that all they care about is profit, and we are hitting their bottom line,” said Eric Silver, game master of Dungeons & Dragons podcast Join the Party. He told TechCrunch that WoTC’s response on Friday is “just a PR statement.”
“Until we see what they release in clear language, we can’t let our foot off the gas pedal,” Silver said. “The corporate playbook is wait it out until the people get bored; we can’t and we won’t.”
Players heard this message loud and clear, and began flocking to D&D Beyond’s website to cancel their subscriptions and delete their accounts. “DnDBegone” and “StopTheSub” joined OpenDnD as trending on Twitter as players disparaged Wizards of the Coast and parent company Hasbro over its draconian policies. The volume of players on the D&D Beyond website overloaded its servers, causing the Subscription Management page to temporarily crash.
The D&D Beyond page has since been restored, but fans wishing to make their voices heard should expect further outages. Thousands of players and content creators have already pulled their support of Dungeons and Dragons via D&D Beyond. Regardless of whether Wizards of the Coast can revoke the old OGL, it is clear that repairing the goodwill it has squandered will take a lot of work.
A woman who released audio of her rapist’s confession said she wanted to show how “manipulative” abusers can be.
Ellie Wilson, 25, secretly captured Daniel McFarlane admitting to his crimes by setting her phone to record in her handbag.
McFarlane was found guilty of two rape charges and sentenced to five years in prison in July last year.
Ms Wilson said that despite audio and written confessions being used in court, the verdict was not unanimous.
The attacks took place between December 2017 and February 2018 when McFarlane was a medical student at the University of Glasgow.
Since the conviction Ms Wilson, who waived her anonymity, has campaigned on behalf of victims.
Earlier this week Ms Wilson, who was a politics student and champion athlete at the university at the time, released audio on Twitter of a conversation with McFarlane covertly captured the year after the attacks.
In the recording she asks him: “Do you not get how awful it makes me feel when you say ‘I haven’t raped you’ when you have?”
McFarlane replies: “Ellie, we have already established that I have. The people that I need to believe me, believe me. I will tell them the truth one day, but not today.”
When asked how he feels about what he has done, he says: “I feel good knowing I am not in prison.”
Image caption: Ellie was a university athletics champion
The tweet has been viewed by more than 200,000 people.
Ms Wilson told BBC Scotland’s The Nine she had released the clip because many people wondered what evidence she had to secure a rape conviction.
She said the reaction had been “overwhelmingly positive” although a small minority had been very unkind.
And even with the recording of the confession posted online, some people were still saying ‘he didn’t do it’, Ms Wilson said.
In addition to the audio confession, Ms Wilson had text messages that pointed to McFarlane’s guilt yet she said she was still worried that it would not be enough to secure a conviction.
“The verdict was not unanimous,” she said.
“You can literally have a written confession, an audio confession and not everyone on the jury is going to believe you. I think that says a lot about society.”
Ms Wilson has previously said the experience she had in court was appalling.
You may not realize it in your day-to-day life, but we are all enveloped by a giant “superbubble” that was blown into space by the explosive deaths of a dozen-odd stars. Known as the Local Bubble, this structure extends for about 1,000 light years around the solar system, and is one of countless similar bubbles in our galaxy that are produced by the fallout of supernovas. Cosmic superbubbles have remained fairly mysterious for decades, but recent astronomical advances have finally exposed key details about their evolution and structure. Just within the past few years, researchers have mapped the geometry of the Local Bubble in three dimensions and demonstrated that its surface is an active site of star birth, because it captures gas and dust as it expands into space.
Now, a team of scientists has added another layer to our evolving picture of the Local Bubble by charting the magnetic field of the structure, which is thought to play a major role in star formation. Astronomers led by Theo O’Neill, who conducted the new research during a summer research program at the Center for Astrophysics at Harvard & Smithsonian (CfA), presented “the first-ever 3D map of a magnetic field over a superbubble” on Wednesday at the American Astronomical Society’s 241st annual meeting in Seattle, Washington. The team also unveiled detailed visualizations of their new map, bringing the Local Bubble into sharper focus.
“We think that the entire interstellar medium is really full of all these bubbles that are driven by various forms of feedback from, especially, really massive stars, where they’re outputting energy in some form or another into the space between the stars,” said O’Neill, who just received an undergraduate degree in astronomy-physics and statistics from the University of Virginia, in a joint call with their mentor Alyssa Goodman, an astronomer at CfA who co-authored the new research. […] “Now that we have this map, there’s a lot of cool science that can be done both by us, but hopefully by other people as well,” O’Neill said. “Since stars are clustered, it’s not as if the Sun is super special, and is in the Local Bubble because we’re just lucky. We know that the interstellar medium is full of bubbles like this, and there’s actually a lot of them nearby our own Local Bubble.” “One cool next step will be looking at places where the Local Bubble is nearby other feedback bubbles,” they concluded. “What happens when these bubbles interact, and how does that drive star formation in general, and the overall long-term evolution of galactic structures?”
Last year BMW took ample heat for its plans to turn heated seats into a costly $18 per month subscription in numerous countries. As we noted at the time, BMW is already including the hardware in new cars and adjusting the sale price accordingly. So it’s effectively charging users a new, recurring fee to enable technology that already exists in the car and consumers already paid for.
The move portends a rather idiotic and expensive future for consumers that’s arriving faster than you’d think. Other companies have also embraced the idea, and BMW continues to find new options to turn into subscription services. The latest: remote engine starting, which will soon cost car owners an additional $105 every year. On the plus side, there’s at least some flexibility with the pricing:
Most of these features are available through either a 1-month, 1-year, or 3-year subscription, or can be purchased outright for a one-time fee. Motorauthority reached out to BMW USA and found that the Remote Engine Start costs $10 for 1 month, $105 for 1 year, $250 for 3 years, or can be purchased for $330 for the life of the vehicle.
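The quoted tiers make for simple break-even arithmetic. A quick sketch (the figures come from the quote above; the comparison itself is my own illustration):

```python
# Break-even comparison of BMW's quoted Remote Engine Start tiers.
# Prices come from the article; the arithmetic is illustrative only.
MONTHLY, YEARLY, THREE_YEAR, LIFETIME = 10, 105, 250, 330

def cost(years):
    """Total cost of each tier over `years` of ownership."""
    return {
        "monthly": MONTHLY * 12 * years,
        "yearly": YEARLY * years,
        "3-year": THREE_YEAR * years / 3,
        "lifetime": LIFETIME,
    }

# The one-time $330 fee overtakes the $105/year plan after about 3.1 years,
# well within a typical ownership period.
for years in (1, 3, 5):
    print(years, cost(years))
```

At five years, the yearly plan has cost $525 against the $330 one-time fee, which is presumably why the buyout option exists at all.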
Again, this technology — and every other technology BMW is going to do this with — is already included in the higher-end price tag of BMW vehicles. It’s effectively double dipping (to please Wall Street’s insatiable desire for improved quarterly returns at any cost) dressed up as innovation. It’s not a whole lot better than your broadband ISP charging you $10-$25 every month for years for a modem worth $70.
Once companies get a taste of fatter revenues from charging customers for things they’ve already technically paid for, it won’t really stop without either regulatory intervention, or competitive pressure from automakers that avoid the model. BMW’s also turning a lot of other features into subscription services, like parking assist, video driver recording, and other features:
As for the Driver Recorder, it is available for $39 for 1 year, $99 for 3 years, and $149 for a one-time payment. Driving Assistant Plus with Stop&Go can be added for $20 for 1 month, $210 for 1 year, $580 for 3 years, and $950 with a one-time payment. As for Parking Assistant Professional, it is available for $5 for 1 month, $50 for 1 year, $130 for 3 years, or a one-time fee of $220.
Hackers are already fiddling with ways to enable the technology without paying a subscription fee, which will launch an entirely new cat and mouse game that, if automakers get too creative with their crackdowns (like claiming you’re voiding your warranty by enabling something you already own), could also run afoul of the FTC’s tougher stance on right to repair issues.
If it were for a service they offer, one for which BMW needs to expend energy and effort, e.g. updating maps or posting locations of speed cameras, this would be fine. But you are paying again for hardware you already own and already paid for when you bought the car.
The Journal spoke with Moderna CEO Stephane Bancel at the JP Morgan Healthcare Conference in San Francisco Monday, who said of the 400 percent price hike: “I would think this type of pricing is consistent with the value.”
Until now, the mRNA-based COVID-19 vaccines from Moderna and Pfizer-BioNTech have been purchased by the government and offered to Americans for free. In the latest federal contract from July, Moderna’s updated booster shot cost the government $26 per dose, up from $15–$16 per dose in earlier supply contracts, the Journal notes. Similarly, the government paid a little over $30 per dose for Pfizer-BioNTech’s vaccine this past summer, up from $19.50 per dose in contracts from 2020.
But now that the federal government is backing away from distributing the vaccines, their makers are moving to the commercial market—with price adjustments. Financial analysts had previously anticipated Pfizer would set the commercial price for its vaccine at just $50 per dose but were taken aback in October when Pfizer announced plans to price it between $110 and $130. Analysts then anticipated that Pfizer’s price would push Moderna and other vaccine makers to follow suit, which appears to be happening now.
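The “400 percent” figure squares with the numbers above: moving from the $26 federal contract price to the top of the $110–$130 commercial range is a fourfold increase. A quick check (prices from the article):

```python
# Sanity-check the reported price hike using figures from the article.
government_price = 26.0                  # latest federal contract, per dose
commercial_low, commercial_high = 110.0, 130.0

increase_low = (commercial_low - government_price) / government_price * 100
increase_high = (commercial_high - government_price) / government_price * 100
print(f"{increase_low:.0f}% to {increase_high:.0f}% increase")  # 323% to 400%
```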
Lawmakers have already lambasted Pfizer for the steep increase. In a letter sent last month to Pfizer CEO Albert Bourla, Senators Elizabeth Warren (D-Mass.) and Peter Welch (D-Vt.) called the price hike “pure and deadly greed” and accused the company of “unseemly profiteering.”
“We urge you to back off from your proposed price increases and ensure COVID-19 vaccines are reasonably priced and accessible to people across the United States,” they wrote.
The revelation that Moderna may match Pfizer’s price increase comes just a day after Moderna announced that its COVID-19 vaccine sales in 2022 totaled approximately $18.4 billion.
People living near airports that service piston-engine aircraft are disproportionately exposed to lead, a dangerous neurotoxin.
A study published this week in PNAS Nexus found that children living near the Reid-Hillview Airport in Santa Clara County, California, had elevated blood lead levels. The researchers pinpointed piston-engine aircraft at airports like the one in California as a source of lead exposure for children.
Overall blood lead levels in U.S. children have gone down significantly in the last half century. Since the 1970s, policymakers have removed lead from everyday items like pipes, food cans, and vehicle gasoline. But despite those efforts, airports that house and service piston-engine aircraft, which mainly use leaded aviation fuel, continue to pollute the air. These are small, single- or two-propeller airplanes, such as training Cessna airplanes, small commercial aircraft, and the planes commonly seen trailing advertisement banners.
“Lead-formulated aviation gasoline (avgas) is the primary source of lead emissions in the United States today, consumed by over 170,000 piston-engine aircraft,” according to the new paper.
The researchers analyzed 14,000 blood samples, taken from 2011 to 2020 from children under 6 years old living near the California airport, to gauge exposure to lead. They found that blood lead levels increased the closer the children lived to the airport. Children who lived east, or downwind, of the airport were also 2.18 times more likely to exceed the health department threshold of 4.5 micrograms per deciliter, according to the study.
Officials are still trying to figure out exactly what led to the Federal Aviation Administration system outage on Wednesday but have traced it to a corrupt file, as CNN first reported.
In a statement late Wednesday, the FAA said it was continuing to investigate the outage and “take all needed steps to prevent this kind of disruption from happening again.”
“Our preliminary work has traced the outage to a damaged database file. At this time, there is no evidence of a cyberattack,” the FAA said.
The FAA is still trying to determine whether any one person or “routine entry” into the database is responsible for the corrupted file, a government official familiar with the investigation into the NOTAM system outage told CNN.
Another source familiar with the Federal Aviation Administration operation described exclusively to CNN on Wednesday how the outage played out.
When air traffic control officials realized they had a computer issue late Tuesday, they came up with a plan, the source said, to reboot the system when it would least disrupt air travel, early on Wednesday morning.
But ultimately that plan and the outage led to massive flight delays and an unprecedented order to stop all aircraft departures nationwide.
The computer system that failed was the central database for all NOTAMs (Notice to Air Missions) nationwide. Those notices advise pilots of issues along their route and at their destination. It has a backup, which officials switched to when problems with the main system emerged, according to the source.
FAA officials told reporters early Wednesday that the issues developed in the 3 p.m. ET hour on Tuesday.
Officials ultimately found a corrupt file in the main NOTAM system, the source told CNN. A corrupt file was also found in the backup system.
In the overnight hours of Tuesday into Wednesday, FAA officials decided to shut down and reboot the main NOTAM system — a significant decision, because the reboot can take about 90 minutes, according to the source.
They decided to perform the reboot early Wednesday, before air traffic began flying on the East Coast, to minimize disruption to flights.
“They thought they’d be ahead of the rush,” the source said.
During this early morning process, the FAA told reporters that the system was “beginning to come back online,” but said it would take time to resolve.
The system, according to the source, “did come back up, but it wasn’t completely pushing out the pertinent information that it needed for safe flight, and it appeared that it was taking longer to do that.”
That’s when the FAA issued a nationwide ground stop at around 7:30 a.m. ET, halting all domestic departures.
Aircraft in line for takeoff were held before entering runways. Flights already in the air were advised verbally of the safety notices by air traffic controllers, who keep a static electronic or paper record at their desks of the active notices.
Transportation Secretary Pete Buttigieg ordered an after-action review and also said there was “no direct evidence or indication” that the issue was a cyberattack.
The source said the NOTAM system is an example of aging infrastructure due for an overhaul.
Natives in Tech, a US-based non-profit organization, has called upon the Apache Software Foundation (ASF) to change its name, out of respect for indigenous American peoples and to live up to its own code of conduct.
In a blog post, Natives in Tech members Adam Recvlohe, Holly Grimm, and Desiree Kane have accused the ASF of appropriating Indigenous culture for branding purposes.
Citing ASF founding member Brian Behlendorf’s description in the documentary “Trillions and Trillions Served” of how he wanted something more romantic than a tech term like “spider” and came up with “Apache” after seeing a documentary about Geronimo, the group said:
This frankly outdated spaghetti-Western ‘romantic’ presentation of a living and vibrant community as dead and gone in order to build a technology company ‘for the greater good’ is as ignorant as it is offensive.
The aggrieved trio challenged the ASF to make good on its code of conduct commitment to “be careful in the words that [they] choose” by choosing a new name. The group took issue with what they said was the suggestion that the Apache tribe exists only in a past historical context, citing eight federally recognized Native American tribes that bear the name.
Most of the World Wide Web runs on servers running Apache software. I’d say the tech Apache outnumbers the Native American Apaches significantly, and these guys are pulling on the tails of the Apache foundation to gather attention for their cause by putting themselves in the news.
CNET, a massively popular tech news outlet, has been quietly employing the help of “automation technology” — a stylistic euphemism for AI — on a new wave of financial explainer articles, seemingly starting around November of last year.
In the absence of any formal announcement or coverage, it appears that this was first spotted by online marketer Gael Breton in a tweet on Wednesday.
The articles are published under the unassuming appellation of “CNET Money Staff,” and encompass topics like “Should You Break an Early CD for a Better Rate?” or “What is Zelle and How Does It Work?”
That byline obviously does not paint the full picture, and so your average reader visiting the site likely would have no idea that what they’re reading is AI-generated. It’s only when you click on “CNET Money Staff” that the actual “authorship” is revealed.
“This article was generated using automation technology,” reads a dropdown description, “and thoroughly edited and fact-checked by an editor on our editorial staff.”
Since the program began, CNET has put out around 73 AI-generated articles. That’s not a whole lot for a site that big, and absent an official announcement of the program, it appears leadership is trying to keep the experiment as low-key as possible. CNET did not respond to questions about the AI-generated articles.
[…]
Based on Breton’s observations, though, some of the articles appear to be pulling in large amounts of traffic.
[…]
But AI usage is not limited to those kinds of bottom-of-the-barrel outlets. Even the prestigious news agency The Associated Press has been using AI since 2015 to automatically write thousands and thousands of earnings reports. The AP has even proudly proclaimed itself as “one of the first news organizations to leverage artificial intelligence.”
It’s worth noting, however, that the AP‘s auto-generated material appears to be essentially filling in blanks in predetermined formats, whereas the more sophisticated verbiage of CNET‘s publications suggests that it’s using something more akin to OpenAI’s GPT-3.
The source article is the usual fearmongering against AI, insisting you must check and care whether something was written by a human, but to me this seems like a good way of partnering current AI with humans to create good content.
On Thursday, Microsoft researchers announced a new text-to-speech AI model called VALL-E that can closely simulate a person’s voice when given a three-second audio sample. Once it learns a specific voice, VALL-E can synthesize audio of that person saying anything—and do it in a way that attempts to preserve the speaker’s emotional tone.
Its creators speculate that VALL-E could be used for high-quality text-to-speech applications, speech editing where a recording of a person could be edited and changed from a text transcript (making them say something they originally didn’t), and audio content creation when combined with other generative AI models like GPT-3.
Microsoft calls VALL-E a “neural codec language model,” and it builds off of a technology called EnCodec, which Meta announced in October 2022. Unlike other text-to-speech methods that typically synthesize speech by manipulating waveforms, VALL-E generates discrete audio codec codes from text and acoustic prompts. It basically analyzes how a person sounds, breaks that information into discrete components (called “tokens”) thanks to EnCodec, and uses training data to match what it “knows” about how that voice would sound if it spoke other phrases outside of the three-second sample. Or, as Microsoft puts it in the VALL-E paper:
To synthesize personalized speech (e.g., zero-shot TTS), VALL-E generates the corresponding acoustic tokens conditioned on the acoustic tokens of the 3-second enrolled recording and the phoneme prompt, which constrain the speaker and content information respectively. Finally, the generated acoustic tokens are used to synthesize the final waveform with the corresponding neural codec decoder.
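The paper’s pipeline can be pictured as a simple data flow: phonemes carry the content, codec tokens from the three-second sample carry the speaker, and a decoder turns generated tokens back into audio. The sketch below is purely structural; every function is a hypothetical stand-in of mine, not Microsoft’s or Meta’s actual API:

```python
# Structural sketch of the VALL-E data flow described above. All functions
# here are dummy stand-ins that only mirror the shape of the pipeline.

def phonemize(text):
    # Real systems use a grapheme-to-phoneme model; we fake it per character.
    return list(text.lower())

def encode_audio(samples):
    # Stand-in for the EnCodec encoder: waveform -> discrete codec tokens.
    return [int(abs(s) * 100) % 256 for s in samples]

def codec_language_model(phonemes, speaker_tokens):
    # Stand-in for the neural codec language model: generates acoustic tokens
    # conditioned on the phoneme prompt (content) and the enrolled recording's
    # tokens (speaker identity). This dummy ignores speaker_tokens entirely.
    return [ord(p) % 256 for p in phonemes]

def decode_audio(tokens):
    # Stand-in for the EnCodec decoder: codec tokens -> waveform samples.
    return [t / 256 for t in tokens]

enrolled_sample = [0.1, -0.2, 0.3]              # the 3-second enrolled recording
speaker_tokens = encode_audio(enrolled_sample)  # speaker information
phonemes = phonemize("Hello world")             # content information
acoustic_tokens = codec_language_model(phonemes, speaker_tokens)
waveform = decode_audio(acoustic_tokens)        # final synthesized audio
```

The point of the structure is that speaker identity and content arrive as two separate conditioning signals, which is what lets a three-second sample steer the voice of arbitrary new text.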
Microsoft trained VALL-E’s speech-synthesis capabilities on an audio library, assembled by Meta, called LibriLight. It contains 60,000 hours of English language speech from more than 7,000 speakers, mostly pulled from LibriVox public domain audiobooks. For VALL-E to generate a good result, the voice in the three-second sample must closely match a voice in the training data.
On the VALL-E example website, Microsoft provides dozens of audio examples of the AI model in action. Among the samples, the “Speaker Prompt” is the three-second audio provided to VALL-E that it must imitate.
In the quest to find the outer limits of our galaxy, astronomers have discovered over 200 stars that form the Milky Way’s edge, the most distant of which is over one million light-years away—nearly halfway to the Andromeda galaxy.
The 208 stars the researchers identified are known as RR Lyrae stars, whose brightness varies regularly as seen from Earth. These stars are typically old, and their regular brightening and dimming allows scientists to calculate how far away they are. By calculating the distance to these RR Lyrae stars, the team found that the farthest of the bunch was located about halfway between the Milky Way and the Andromeda galaxy, one of our cosmic next-door neighbors.
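The distance step works because RR Lyrae stars are “standard candles”: their intrinsic brightness is nearly constant (absolute magnitude around +0.6), so comparing it with the observed brightness gives the distance via the distance modulus. A minimal sketch, where the apparent magnitude is an illustrative value of mine, not a figure from the study:

```python
# Distance from the RR Lyrae "standard candle" method:
#   m - M = 5*log10(d_pc) - 5
# where m is apparent magnitude, M the (nearly constant) absolute
# magnitude, and d_pc the distance in parsecs.
ABS_MAG = 0.6           # typical RR Lyrae absolute magnitude (assumption)
LY_PER_PC = 3.2616      # light-years per parsec

def distance_ly(apparent_mag, absolute_mag=ABS_MAG):
    d_pc = 10 ** ((apparent_mag - absolute_mag + 5) / 5)
    return d_pc * LY_PER_PC

# An RR Lyrae star observed at magnitude ~23 comes out near a million
# light-years, the scale of the most distant star in the study.
print(f"{distance_ly(23.0):,.0f} light-years")
```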
“This study is redefining what constitutes the outer limits of our galaxy,” said Raja GuhaThakurta in a press release. GuhaThakurta is professor and chair of astronomy and astrophysics at the University of California Santa Cruz. “Our galaxy and Andromeda are both so big, there’s hardly any space between the two galaxies.”
Illustration: NASA, ESA, AND A. FEILD (STSCI)
The Milky Way galaxy consists of a few different parts, the primary of which is a thin, spiral disk about 100,000 light-years across. Our home solar system sits on one of the arms of this disk. An inner and outer halo surround the disk, and these halos contain some of the oldest stars in our galaxy.
Previous studies have placed the edge of the outer halo at 1 million light-years from the Milky Way’s center, but based on the new work, the edge of this halo should be about 1.04 million light-years from the galactic center. Yuting Feng, a doctoral student at the university working with GuhaThakurta, led the study and is presenting the findings this week at the American Astronomical Society meeting in Seattle.
While using the Atacama Large Millimeter/submillimeter Array (ALMA) to study the masers around the oddball star MWC 349A, scientists discovered something unexpected: a previously unseen jet of material launching from the star’s gas disk at impossibly high speeds. What’s more, they believe the jet is caused by strong magnetic forces surrounding the star.
The discovery could help researchers to understand the nature and evolution of massive stars and how hydrogen masers are formed in space. The new observations were presented today (January 9) in a press conference at the 241st meeting of the American Astronomical Society (AAS) in Seattle, Washington.
Located roughly 3,900 light-years away from Earth in the constellation Cygnus, MWC 349A’s unique features make it a hot spot for scientific research in optical, infrared, and radio wavelengths. The massive star—roughly 30 times the mass of the sun—is one of the brightest radio sources in the sky, and one of only a handful of objects known to have hydrogen masers. These masers amplify microwave radio emissions, making it easier to study processes that are typically too small to see. It is this unique feature that allowed scientists to map MWC 349A’s disk in detail for the first time.
“A maser is like a naturally occurring laser,” said Sirina Prasad, an undergraduate research assistant at the Center for Astrophysics | Harvard & Smithsonian (CfA), and the primary author of the paper. “It’s an area in outer space that emits a really bright kind of light. We can see this light and trace it back to where it came from, bringing us one step closer to figuring out what’s really going on.”
The massive star MWC 349A is one of the brightest radio sources in the sky. But, at 3,900 light-years away from Earth, scientists needed help to see what’s really going on, and in this case, to discover a jet of material blasting out from the star’s gas disk at 500 km/s. Previously hidden amongst the winds flowing out from the star, the jet was discovered using the combined resolving power of ALMA’s Band 6 (right) and Band 7 (left), and hydrogen masers, naturally occurring lasers that amplify microwave radio emissions, shown here in this ALMA science image. The revelation may help scientists to better understand the nature and evolution of massive stars. Credit: ALMA (ESO/NAOJ/NRAO), S. Prasad/CfA
Leveraging the resolving power of ALMA’s Band 6, developed by the U.S. National Science Foundation’s National Radio Astronomy Observatory (NRAO), the team was able to use the masers to uncover the previously unseen structures in the star’s immediate environment. Qizhou Zhang, a senior astrophysicist at CfA, and the project’s principal investigator added, “We used masers generated by hydrogen to probe the physical and dynamic structures in the gas surrounding MWC 349A and revealed a flattened gas disk with a diameter of 50 au, approximately the size of the Solar System, confirming the near-horizontal disk structure of the star. We also found a fast-moving jet component hidden within the winds flowing away from the star.”
The observed jet is ejecting material away from the star at a blistering 500 km per second. That’s akin to traveling the distance between San Diego, California, and Phoenix, Arizona, in the literal blink of an eye. According to researchers, it is probable that a jet moving this fast is being launched by a magnetic force. In the case of MWC 349A, that force could be a magnetohydrodynamic wind—a type of wind whose movement is dictated by the interplay between the star’s magnetic field and gases present in its surrounding disk.
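As a rough sanity check on that comparison (assuming an air distance of about 480 km between San Diego and Phoenix, a figure the article does not state):

```python
# Back-of-the-envelope check: how long does material moving at
# 500 km/s take to cover the San Diego-Phoenix distance?
# The ~480 km air distance is an assumption, not from the article.
jet_speed_km_s = 500
distance_km = 480

travel_time_s = distance_km / jet_speed_km_s
print(f"{travel_time_s:.2f} s")  # ~0.96 s, on the order of a single blink
```

At just under a second, the trip is indeed comparable to the duration of a blink.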
“Our previous understanding of MWC 349A was that the star was surrounded by a rotating disk and photo-evaporating wind. Strong evidence for an additional collimated jet had not yet been seen in this system. Although we don’t yet know for certain where it comes from or how it is made, it could be that a magnetohydrodynamic wind is producing the jet, in which case the magnetic field is responsible for launching rotating material from the system,” said Prasad. “This could help us to better understand the disk-wind dynamics of MWC 349A, and the interplay between circumstellar disks, winds, and jets in other star systems.”
More information: These results will be presented during a press conference at the 241st meeting of the American Astronomical Society on Monday, January 9th at 2:15pm Pacific Standard Time (PST).
Citizen, the provocative crime-reporting app formerly known as Vigilante, is in the news again for all the wrong reasons. On Thursday evening, it doxxed singer Billie Eilish, publishing her address to thousands of people after an alleged burglary at her home.
Shortly after the incident, the app notified users of a break-in in Los Angeles’ Highland Park neighborhood — including the home’s address. As reported by Vice, Citizen’s message was updated at 9:41 PM to state that the house belonged to Eilish. According to Citizen’s metrics, the alert was sent to 178,000 people and viewed by nearly 78,000. On Friday morning, Citizen updated the app’s description of the incident, replacing the precise address with a nearby cross-street.
Although celebrity home addresses are often publicly available (usually on seedy websites specializing in such invasive nonsense), a popular app pushing the home address of one of pop music’s biggest stars to thousands of users is… new. Unfortunately, it’s also just the latest potentially destructive move from Citizen.
When Citizen launched as Vigilante in 2016, Apple quickly pulled the title from the App Store over concerns that it encouraged users to thrust themselves into dangerous situations. So it rebranded as Citizen with a new focus on safety, and Apple re-opened its gates. The app began advising users to avoid incidents in progress while providing tools to help those caught in a dangerous situation. Although that sounds reasonable, at least one episode reveals an overzealous company prioritizing attention and profit over social responsibility.
In May 2021, CEO Andrew Frame ordered the launch of a live stream, encouraging the app’s users to hunt down a suspected wildfire arsonist (based on a tip from an LAPD sergeant and emails from residents questioned by police). He offered a $10,000 bounty for finding the suspect, which grew to $30,000 later in the evening. As the hunt continued, the CEO reportedly grew more frantic, with one of his internal Slack conversations encouraging the team to “get this guy before midnight” in an ecstatic, all-caps message.
Google has agreed to pay $9.5 million to settle a lawsuit brought by Washington DC Attorney General Karl Racine, who accused the company earlier this year of “deceiving users and invading their privacy.” Google has also agreed to change some of its practices, primarily concerning how it informs users about collecting, storing and using their location data.
“Google leads consumers to believe that consumers are in control of whether Google collects and retains information about their location and how that information is used,” the complaint, which Racine filed in January, read. “In reality, consumers who use Google products cannot prevent Google from collecting, storing and profiting from their location.”
Racine’s office also accused Google of employing “dark patterns,” which are design choices intended to deceive users into carrying out actions that don’t benefit them. Specifically, the AG’s office claimed that Google repeatedly prompted users to switch on location tracking in certain apps and informed them that certain features wouldn’t work properly if location tracking wasn’t on. Racine and his team found that location data wasn’t even needed for the apps in question. They asserted that Google made it “impossible for users to opt out of having their location tracked.”
The $9.5 million payment is a paltry one for Google. Last quarter, it took parent company Alphabet under 20 minutes to make that much in revenue. The changes that the company will make to its practices as part of the settlement may have a bigger impact.
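That “under 20 minutes” figure is straightforward to check against Alphabet’s reported quarterly revenue (the roughly $69 billion figure for Q3 2022 used below is an assumption; the article does not cite a number):

```python
# Back-of-the-envelope check on the "under 20 minutes" claim.
# Assumes Alphabet quarterly revenue of roughly $69 billion
# (figure not stated in the article).
quarterly_revenue_usd = 69e9
minutes_in_quarter = 92 * 24 * 60  # ~92 days in a quarter
settlement_usd = 9.5e6

revenue_per_minute = quarterly_revenue_usd / minutes_in_quarter
minutes_to_earn_settlement = settlement_usd / revenue_per_minute
print(f"{minutes_to_earn_settlement:.1f} minutes")  # ~18 minutes
```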
Folks who currently have certain location settings on will receive notifications telling them how they can disable each setting, delete the associated data and limit how long Google can keep that information. Users who set up a new Google account will be informed which location-related account settings are on by default and offered the chance to opt out.
Google will need to maintain a webpage that details its location data practices and policies. This will include ways for users to access their location settings and details about how each setting impacts Google’s collection, retention or use of location data.
Moreover, Google will be prevented from sharing a person’s precise location data with a third-party advertiser without the user’s explicit consent. The company will need to delete location data “that came from a device or from an IP address in web and app activity within 30 days” of obtaining the information.
NHS England has extended its contract with US spy-tech biz Palantir for the system built at the height of the pandemic to give it time to resolve the twice-delayed procurement of a data platform to support health service reorganization and tackle the massive care backlog.
The contract has already been subject to the threat of a judicial review, after which NHS England – a non-departmental government body – agreed to three concessions, including the promise of public consultation before extending the contract.
Campaigners and legal groups are set to mount legal challenges around separate, but related, NHS dealings with Palantir.
In a notice published yesterday, NHS England said the contract would be extended until September 2023 in a deal worth £11.5 million ($13.8 million).
NHS England has been conducting a £360 million ($435 million) procurement of a separate, but linked, Federated Data Platform (FDP), a deal said to be a “must-win” for Palantir, a US data management company which cut its teeth working for the CIA and controversial US immigration agency ICE.
The contract notice for FDP, which kicks off the official competition, was originally expected in June 2022 but was delayed until September 2022, when NHS England told The Register it would be published. The notice has yet to appear.