ChatGPT: Study shows AI can produce academic papers good enough for journals – just as some ban it

Some of the world’s biggest academic journal publishers have banned or restricted their authors’ use of the advanced chatbot ChatGPT. Because the bot uses information from the internet to produce highly readable answers to questions, the publishers are worried that inaccurate or plagiarized work could enter the pages of academic literature.

Several researchers have already listed the chatbot as a co-author on academic studies, and some publishers have moved to ban this practice. But the editor-in-chief of Science, one of the top scientific journals in the world, has gone a step further and forbidden any use of text from the program in submitted papers.

[…]

We first asked ChatGPT to generate the standard four parts of a research study: research idea, literature review (an evaluation of previous academic research on the same topic), dataset, and suggestions for testing and examination. We specified only the broad subject and that the output should be capable of being published in “a good finance journal.”

This was version one of how we chose to use ChatGPT. For version two, we pasted into the ChatGPT window just under 200 abstracts (summaries) of relevant, existing research studies.

We then asked that the program take these into account when creating the four research stages. Finally, for version three, we added “domain expertise”—input from academic researchers. We read the answers produced by the computer program and made suggestions for improvements. In doing so, we integrated our expertise with that of ChatGPT.
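For readers who want to reproduce this kind of three-stage workflow programmatically rather than through the ChatGPT web interface (which is what the study used), a rough sketch with OpenAI’s Python client might look like the following. The model name, prompts, topic and file name are illustrative assumptions, not the study’s actual inputs:

```python
# Illustrative sketch only: the study used the ChatGPT web interface, and the
# model name, prompts, topic and file below are assumptions for demonstration.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(messages):
    """Send the running chat history to the model and return the reply text."""
    response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return response.choices[0].message.content

# Version 1: specify only the broad subject and the target outlet.
history = [{"role": "user", "content":
            "Generate a research idea, literature review, dataset description and "
            "suggested tests on a broad finance topic (e.g. cryptocurrency markets), "
            "suitable for publication in a good finance journal."}]
draft_v1 = ask(history)

# Version 2: also feed in abstracts of relevant existing studies.
abstracts = open("relevant_abstracts.txt").read()  # hypothetical file of ~200 abstracts
history += [{"role": "assistant", "content": draft_v1},
            {"role": "user", "content":
             "Revise all four sections, taking these abstracts into account:\n" + abstracts}]
draft_v2 = ask(history)

# Version 3: iterate with domain-expert feedback from the human researchers.
history += [{"role": "assistant", "content": draft_v2},
            {"role": "user", "content":
             "Reviewer feedback: sharpen the hypotheses and justify the dataset choice."}]
print(ask(history))
```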

We then asked a panel of 32 reviewers to each review one version of how ChatGPT can be used to generate an academic study. Reviewers were asked to rate whether the output was sufficiently comprehensive and correct, and whether it made a contribution sufficiently novel for it to be published in a “good” academic finance journal.

The big take-home lesson was that all these studies were generally considered acceptable by the expert reviewers. This is rather astounding: a chatbot was deemed capable of generating quality academic research ideas. This raises fundamental questions around the meaning of creativity and ownership of creative ideas—questions to which nobody yet has solid answers.

Strengths and weaknesses

The results also highlight some potential strengths and weaknesses of ChatGPT. We found that different research sections were rated differently. The research idea and the dataset tended to be rated highly. There was a lower, but still acceptable, rating for the literature reviews and testing suggestions.

[…]

A relative weakness of the platform became apparent when the task was more complex—when there were too many stages in the conceptual process. Literature reviews and testing tend to fall into this category. ChatGPT tended to be good at some of these steps but not all of them. This seems to have been picked up by the reviewers.

We were, however, able to overcome these limitations in our most advanced version (version three), where we worked with ChatGPT to come up with acceptable outcomes. All sections of the advanced research study were then rated highly by reviewers, which suggests the role of the academic researcher is not dead yet.

[…]

This has some clear ethical implications. Research integrity is already a pressing problem in academia, and websites such as RetractionWatch convey a steady stream of fake, plagiarized, and just plain wrong research studies. Might ChatGPT make this problem even worse?

It might, is the short answer. But there’s no putting the genie back in the bottle. The technology will also only get better (and quickly). How exactly we might acknowledge and police the role of ChatGPT in research is a bigger question for another day. But our findings are also useful in this regard—by finding that the ChatGPT study version with researcher expertise is superior, we show that the input of human researchers is still vital to producing acceptable research.

For now, we think that researchers should see ChatGPT as an aide, not a threat.

[…]

 

Source: ChatGPT: Study shows AI can produce academic papers good enough for journals—just as some ban it

MusicLM generates music from text descriptions – pretty awesome

We introduce MusicLM, a model generating high-fidelity music from text descriptions such as “a calming violin melody backed by a distorted guitar riff”. MusicLM casts the process of conditional music generation as a hierarchical sequence-to-sequence modeling task, and it generates music at 24 kHz that remains consistent over several minutes. Our experiments show that MusicLM outperforms previous systems both in audio quality and adherence to the text description. Moreover, we demonstrate that MusicLM can be conditioned on both text and a melody in that it can transform whistled and hummed melodies according to the style described in a text caption. To support future research, we publicly release MusicCaps, a dataset composed of 5.5k music-text pairs, with rich text descriptions provided by human experts.
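The “hierarchical sequence-to-sequence” framing roughly means generation happens in stages: a text (and optionally melody) representation conditions a model that produces coarse “semantic” tokens capturing long-range musical structure, which in turn condition a model that produces fine-grained acoustic tokens that a neural codec decodes to a 24 kHz waveform. The sketch below is only a structural illustration of that pipeline; the functions, stage names and token rates are hypothetical stand-ins, since MusicLM’s models were not released with the paper:

```python
# Structural toy sketch of the staged, hierarchical generation described in the
# abstract. Everything here is a fake stand-in; the real models and weights were
# not released, and the token rates are made-up illustrative numbers.
import numpy as np

def fake_stage(name: str, conditioning: np.ndarray, length: int) -> np.ndarray:
    """Stand-in for a trained seq2seq stage that would autoregressively sample tokens."""
    rng = np.random.default_rng(abs(hash((name, conditioning.tobytes()))) % (2**32))
    return rng.integers(0, 1024, size=length)

def generate_music(prompt: str, seconds: int = 10, sample_rate: int = 24_000) -> np.ndarray:
    # 1. Embed the text prompt into a joint music/text space (MuLan-like step).
    text_embedding = np.frombuffer(prompt.encode(), dtype=np.uint8).astype(float) / 255.0

    # 2. Coarse "semantic" tokens: long-range structure such as melody and rhythm.
    semantic = fake_stage("semantic", text_embedding, length=25 * seconds)

    # 3. Fine "acoustic" tokens, conditioned on the semantic ones: timbre and detail.
    acoustic = fake_stage("acoustic", semantic, length=600 * seconds)

    # 4. A neural audio codec decoder would turn acoustic tokens into a 24 kHz
    #    waveform; here we just synthesize a dummy signal of the right length.
    return np.sin(np.linspace(0, 2 * np.pi * 440 * seconds, sample_rate * seconds))

audio = generate_music("a calming violin melody backed by a distorted guitar riff")
print(audio.shape)  # 10 seconds of audio at 24 kHz
```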

Source: MusicLM

An ALS patient set a record communicating through a brain implant: 62 words per minute

Eight years ago, a patient lost her power of speech because of ALS, or Lou Gehrig’s disease, which causes progressive paralysis. She can still make sounds, but her words have become unintelligible, leaving her reliant on a writing board or iPad to communicate.

Now, after volunteering to receive a brain implant, the woman has been able to rapidly communicate phrases like “I don’t own my home” and “It’s just tough” at a rate approaching normal speech.

That is the claim in a paper published over the weekend on the website bioRxiv by a team at Stanford University. The study has not been formally reviewed by other researchers. The scientists say their volunteer, identified only as “subject T12,” smashed previous records by using the brain-reading implant to communicate at a rate of 62 words a minute, three times the previous best.

[…]

The brain-computer interfaces that Shenoy’s team works with involve a small pad of sharp electrodes embedded in a person’s motor cortex, the brain region most involved in movement. This allows researchers to record activity from a few dozen neurons at once and find patterns that reflect what motions someone is thinking of, even if the person is paralyzed.

In previous work, paralyzed volunteers have been asked to imagine making hand movements. By “decoding” their neural signals in real time, implants have let them steer a cursor around a screen, pick out letters on a virtual keyboard, play video games, or even control a robotic arm.

In the new research, the Stanford team wanted to know if neurons in the motor cortex contained useful information about speech movements, too. That is, could they detect how “subject T12” was trying to move her mouth, tongue, and vocal cords as she attempted to talk?

These are small, subtle movements, and according to Sabes, one big discovery is that just a few neurons contained enough information to let a computer program predict, with good accuracy, what words the patient was trying to say. That information was conveyed by Shenoy’s team to a computer screen, where the patient’s words appeared as they were spoken by the computer.
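To make the “decoding” idea concrete: the implant yields, for each short time bin, a vector of spike counts across the electrodes, and a model is trained to map those vectors to whatever the person is attempting (a cursor direction, a letter, or here a speech sound or word). The toy sketch below runs on synthetic data and is nothing like the Stanford team’s far more sophisticated pipeline; it only illustrates the general recipe:

```python
# Toy illustration of neural decoding: map binned spike counts from a few dozen
# electrodes to an intended "word" label. Synthetic data; not the Stanford pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_electrodes, n_trials = 96, 600
words = ["hello", "tough", "home", "water"]

# Simulate firing-rate patterns: each word gets its own mean activity profile.
profiles = rng.normal(5.0, 1.5, size=(len(words), n_electrodes))
labels = rng.integers(0, len(words), size=n_trials)
X = rng.poisson(np.clip(profiles[labels], 0.1, None))  # spike counts per (made-up) time bin
y = np.array(words)[labels]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
decoder = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"toy decoding accuracy: {decoder.score(X_test, y_test):.2f}")
```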

[…]

Shenoy’s group is part of a consortium called BrainGate that has placed electrodes into the brains of more than a dozen volunteers. They use an implant called the Utah Array, a rigid metal square with about 100 needle-like electrodes.

Some companies, including Elon Musk’s brain interface company, Neuralink, and a startup called Paradromics, say they have developed more modern interfaces that can record from thousands—even tens of thousands—of neurons at once.

While some skeptics have asked whether measuring from more neurons at one time will make any difference, the new report suggests it will, especially if the job is to brain-read complex movements such as speech.

The Stanford scientists found that the more neurons they read from at once, the fewer errors they made in understanding what “T12” was trying to say.

“This is a big deal, because it suggests efforts by companies like Neuralink to put 1,000 electrodes into the brain will make a difference, if the task is sufficiently rich,” says Sabes, who previously worked as a senior scientist at Neuralink.

Source: An ALS patient set a record communicating through a brain implant: 62 words per minute | MIT Technology Review

This teacher has adopted ChatGPT into the syllabus

[…]

Ever since the chatbot ChatGPT launched in November, educators have raised concerns it could facilitate cheating.

Some school districts have banned access to the bot, and not without reason. The artificial intelligence tool from the company OpenAI can compose poetry. It can write computer code. It can maybe even pass an MBA exam.

One Wharton professor recently fed the chatbot the final exam questions for a core MBA course and found that, despite some surprising math errors, he would have given it a B or a B-minus in the class.

And yet, not all educators are shying away from the bot.

This year, Ethan Mollick, a professor at the University of Pennsylvania’s Wharton School, is not only allowing his students to use ChatGPT; they are required to. And he has formally adopted an A.I. policy into his syllabus for the first time.

He teaches classes in entrepreneurship and innovation, and said the early indications were that the move was going great.

“The truth is, I probably couldn’t have stopped them even if I didn’t require it,” Mollick said.

This week he ran a session where students were asked to come up with ideas for their class project. Almost everyone had ChatGPT running and was asking it to generate project ideas, which they then interrogated with further prompts.

“And the ideas so far are great, partially as a result of that set of interactions,” Mollick said.

[…]

He readily admits he alternates between enthusiasm and anxiety about how artificial intelligence can change assessments in the classroom, but he believes educators need to move with the times.

“We taught people how to do math in a world with calculators,” he said. Now the challenge is for educators to teach students how the world has changed again, and how they can adapt to that.

Mollick’s new policy states that using A.I. is an “emerging skill”; that it can be wrong and students should check its results against other sources; and that they will be responsible for any errors or omissions provided by the tool.

And, perhaps most importantly, students need to acknowledge when and how they have used it.

“Failure to do so is in violation of academic honesty policies,” the policy reads.

[…]

Source: ‘Everybody is cheating’: Why this teacher has adopted an open ChatGPT policy : NPR

ChatGPT Is Now Finding, Fixing Bugs in Code

AI bot ChatGPT has been put to the test on a number of tasks in recent weeks, and its latest challenge comes courtesy of computer science researchers from Johannes Gutenberg University and University College London, who find that ChatGPT can weed out errors in sample code and fix them better than existing programs designed to do the same.

Researchers gave 40 pieces of buggy code to four different code-fixing systems: ChatGPT, Codex, CoCoNut, and standard APR methods. Essentially, they asked ChatGPT: “What’s wrong with this code?” and then copied and pasted the code into the chat function.

On the first pass, ChatGPT performed about as well as the other systems. ChatGPT solved 19 problems, Codex solved 21, CoCoNut solved 19, and standard APR methods figured out seven. The researchers found its answers to be most similar to Codex, which was “not surprising, as ChatGPT and Codex are from the same family of language models.”

However, the ability to, well, chat with ChatGPT after receiving the initial answer made the difference, ultimately leading to ChatGPT solving 31 questions, and easily outperforming the others, which provided more static answers.

[…]

They found that ChatGPT was able to solve some problems quickly, while others took more back and forth. “ChatGPT seems to have a relatively high variance when fixing bugs,” the study says. “For an end-user, however, this means that it can be helpful to execute requests multiple times.”

For example, when the researchers asked the question pictured below, they expected ChatGPT to recommend replacing n^=n-1 with n&=n-1, but the first thing ChatGPT said was, “I’m unable to tell if the program has a bug without more information on the expected behavior.” On ChatGPT’s third response, after more prompting from researchers, it found the problem.

[Image: the buggy code snippet given to ChatGPT in the study (Credit: Dominik Sobania, Martin Briesch, Carol Hanna, Justyna Petke)]
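The snippet in the screenshot appears to be the classic bit-counting task from the QuixBugs benchmark used in the study. Assuming that is the case, the buggy program and the expected one-character fix look roughly like this (only the n ^= n - 1 versus n &= n - 1 detail is confirmed by the article):

```python
# Presumed reconstruction of the benchmark program (the QuixBugs "bitcount" task);
# the article only confirms the expected fix of n ^= n - 1 to n &= n - 1.

def bitcount_buggy(n):
    """Intended to count set bits, but n ^= n - 1 never clears them: for many
    inputs n gets stuck at a nonzero value and the loop never terminates."""
    count = 0
    while n:
        n ^= n - 1
        count += 1
    return count

def bitcount_fixed(n):
    """Kernighan's trick: n &= n - 1 clears the lowest set bit each iteration,
    so the loop runs once per set bit and then exits."""
    count = 0
    while n:
        n &= n - 1
        count += 1
    return count

assert bitcount_fixed(0b101101) == 4
```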

However, when PCMag entered the same question into ChatGPT, it answered differently. Rather than needing to be told what the expected behavior was, it guessed what it was.

[…]

 

Source: Watch Out, Software Engineers: ChatGPT Is Now Finding, Fixing Bugs in Code

How The Friedman Doctrine Leads To The Enshittification Of All Things

We recently wrote about Cory Doctorow’s great article on how the “enshittification” of social media (mainly Facebook and Twitter) was helping to lower the “switching costs” for people to try something new. In something of a follow-up piece on his Pluralistic site, Doctorow explores the process through which basically all large companies eventually hit the “enshittification” stage, and it’s (1) super insightful, (2) really useful to think about, and (3) a fit with a bunch of other ideas I’ve been thinking about of late. The opening paragraph is one for the ages:

Here is how platforms die: first, they are good to their users; then they abuse their users to make things better for their business customers; finally, they abuse those business customers to claw back all the value for themselves. Then, they die.

He provides a lot more details about this process. In the beginning, companies need users and become successful by catering to their needs:

When a platform starts, it needs users, so it makes itself valuable to users. Think of Amazon: for many years, it operated at a loss, using its access to the capital markets to subsidize everything you bought. It sold goods below cost and shipped them below cost. It operated a clean and useful search. If you searched for a product, Amazon tried its damndest to put it at the top of the search results.

And, especially in the venture-backed world, this is often easier to do, because there isn’t much of a demand for profits (sometimes even for revenue), as the focus is on user growth. So, companies take all that VC cash and use it to subsidize things, and… that’s often really great for consumers.

But, eventually, these companies have to pay back the VCs in the form of selling out to a bigger company or, preferably, through a big IPO, taking the company public, giving it access to the public equity markets, and… then being at the whims of Wall Street. This is the part that Cory doesn’t mention in his piece, but which I’ve been thinking quite a lot about lately, and I do think is an important piece to the puzzle.

Once you go public, you have that quarterly drumbeat from Wall Street, where pretty much all that matters is revenue and profit growth. Indeed, it’s long forgotten now, but Jeff Bezos and Amazon were a rare company that kind of bucked that trend: for a while at least, they told Wall Street not to expect such things, as the company was going to invest more and more deeply in serving its customers, and Wall Street punished Bezos for it. Wall Street absolutely hated Amazon Prime, which locked in customer loyalty, but which it thought was a huge waste of money. The same was true of Amazon Web Services, which has since become a huge revenue driver for the company.

But Wall Street is not visionary. Wall Street does not believe in long term strategy. It believes in hitting your short term ever increasing numbers every three months. Or it will punish you.

And this, quite frequently, leads to the process that Cory lays out in his enshittification gravity well. Because once you’ve gone public, even if you have executives who still want to focus on pleasing users and customers, eventually any public company is also going to have other executives, often with Wall Street experience, who talk about the importance of keeping Wall Street happy. They’ll often quote Milton Friedman’s dumbest idea: that the only fiduciary duty company executives have is to increase their profits for shareholders.

But one of the major problems with this that I’ve discussed for years is that even if you believe (ridiculously) that your only goal is to increase profits for shareholders, that leaves out one very important variable: over what time frame?

This goes back to something I wrote more than 15 years ago, talking about Craigslist. At the time, Craigslist was almost certainly the most successful company in the world in terms of profits per employee. It was making boatloads of cash with like a dozen employees. But the company’s CEO (who was not Craig, by the way) had mentioned that the company wasn’t focused on “maximizing revenue.” After all, most of Craigslist is actually free. There are only a few categories that charge, and they tend to be the most commercial ones (job postings). And this resulted in some arguing that the company lacked a capitalist instinct, and somehow this was horrible.

But, as I wrote at the time, this left out the variable of time. Because maximizing revenue in the short term (i.e., in the 3 month window that Wall Street requires) often means sacrificing long term sustainability and long term profits. That’s because if you’re only looking at the next quarter (or, perhaps, the next two to four quarters if we’re being generous) then you’re going to be tempted to squeeze more of the value out of your customers, to “maximize revenue” or “maximize profits for shareholders.”

In Cory’s formulation, then, this takes us to stage two of the enshittification process: abusing your users to make things better for your business customers. That’s because “Wall Street” and the whole “fiduciary duty to your shareholders” mindset argue that if you’re not squeezing your customers for more value — or more “average revenue per user” (ARPU) — then you’re somehow not living up to your fiduciary duty. But that ignores that doing so often sucks for your customers, and it opens a window for them to look elsewhere and go there. If that’s a realistic option, of course.

Of course, many companies hang on through this stage, partly through inertia, but also frequently through the lack of a sufficiently comprehensive competitive ecosystem. And, eventually, they reach a kind of limit on how much they can abuse their users to please their business customers, which, in turn, allows them to please Wall Street and its short-term focus.

So that brings us to Cory’s stage three of enshittification, in which companies start seeking to capture all of the value.

For years, Tim O’Reilly has (correctly) argued that good companies should “create more value than they capture.” The idea here is pretty straightforward: if you have a surplus, and you share more of it with others (users and partners) that’s actually better for your long term viability, as there’s more and more of a reason for those users, partners, customers, etc. to keep doing business with you. Indeed, in that link above (from a decade ago), O’Reilly provides an example that could have come straight out of Cory’s enshittification essay:

“Consider Microsoft,” O’Reilly told MIT researcher Andrew McAfee during an interview at SXSWi, “whose vision of a computer on every desk and in every home changed the world of computing forever and created a rich ecosystem for developers. As Microsoft’s growth stalled, they gradually consumed more and more of the opportunity for themselves, and innovators moved elsewhere, to the Internet.”

And this is what happens. At some point, after abusing your users to please your business customers, you hit some fairly natural limits.

But Wall Street and the Friedman doctrine never stop screaming for more. You must “maximize” your profits for shareholders in that short-term window, even if it means destroying shareholder value in the long term. And thus, you see any excess value as “money left on the table,” or money that you need to take.

The legacy copyright industry is the classic example of this. We’ve provided plenty of examples over the years, but back when the record labels were struggling to figure out how to adapt to the internet, every few years some new solution came along, like music-based video games (e.g., Guitar Hero). They’d be crazy successful and make everyone lots of money… and then the old record label execs would come in and scream about how they should be getting all that money, eventually killing the golden goose that was suddenly giving them all this free money for doing nothing.

And, thus, that last leg of the enshittification curve tends to come when these legacy industries refuse to play nice with the wider ecosystem (often the one enabling their overall business to grow) and seek to capture all the value for themselves, without realizing that this is how companies die.

Of course, one recent example of this is Elon killing off third party Twitter apps. While no one has officially admitted to it, basically everyone is saying it’s because those apps didn’t show ads to users, and Elon is so desperate for ad revenue, he figured he should kill off those apps to “force” users onto his enshittified apps instead.

But, of course, all it’s really doing is driving not just many of the Twitter power users away, but also shutting down the developers who were actually doing more to make Twitter even more useful. In trying to grab more of the pie, Elon is closing off the ability to grow the pie much bigger.

This is one of the reasons that both Cory and I keep talking about the importance of interoperability. It not only allows users to break out of silos where this is happening, but it helps combat the enshittification process. It forces companies to remain focused on providing value and surplus to their users, rather than chasing Wall Street’s latest demands.

The temptation to enshittify is magnified by the blocks on interoperability: when Twitter bans interoperable clients, nerfs its APIs, and periodically terrorizes its users by suspending them for including their Mastodon handles in their bios, it makes it harder to leave Twitter, and thus increases the amount of enshittification users can be force-fed without risking their departure.

But, as he notes, this strategy only works for so long:

An enshittification strategy only succeeds if it is pursued in measured amounts. Even the most locked-in user eventually reaches a breaking-point and walks away. The villagers of Anatevka in Fiddler on the Roof tolerated the cossacks’ violent raids and pogroms for years, until they didn’t, and fled to Krakow, New York and Chicago.

There are ways around this, but it’s not easy. Cory and I push for interoperability (including adversarial interoperability) because we know in the long run it actually makes things better for users, and creates incentives for companies and services not to treat their users as an endless piggybank that can be abused at will. Cory frames it as a “freedom to exit.”

And policymakers should focus on freedom of exit – the right to leave a sinking platform while continuing to stay connected to the communities that you left behind, enjoying the media and apps you bought, and preserving the data you created

But there’s more that can be done as well, and it should start with pushing back on the Friedman Doctrine of maximizing shareholder profits as the only fiduciary duty. We’ve seen some movement against that view with things like B corps, which allow companies to explicitly state that they have more stakeholders than shareholders and will act accordingly. Or experiments like the Long Term Stock Exchange, which (at the very least) tries to offer an alternative way for a company to be public without being tied to quarterly reporting results.

All of these things matter, but I do think keeping the idea of time horizons in there matters as well. It’s one thing to say “maximize profits,” but any time you hear that you should ask “over what time frame?” Because a company can squeeze a ton of extra money out in the short term in a way that is guaranteed to lessen the future prospects of the company. That’s what happens in the enshittification process, and it really doesn’t need to be an inevitable law for all companies.

Source: How The Friedman Doctrine Leads To The Enshittification Of All Things | Techdirt

Dutch hacker obtained, sold virtually all Austrians’ (and Dutch and Colombian?) personal data

A Dutch hacker arrested in November obtained and offered for sale the full name, address and date of birth of virtually everyone in Austria, the Alpine nation’s police said on Wednesday.

A user believed to be the hacker offered the data for sale in an online forum in May 2020, presenting it as “the full name, gender, complete address and date of birth of presumably every citizen” in Austria, police said in a statement, adding that investigators had confirmed its authenticity.

The trove comprised close to nine million sets of data, police said. Austria’s population is roughly 9.1 million. The hacker had also put “similar data sets” from Italy, the Netherlands and Colombia up for sale, Austrian police said, adding that they did not have further details.

[…]

The police did not elaborate on the consequences for Austrians’ data security.

Source: Dutch hacker obtained virtually all Austrians’ personal data, police say | Reuters

An AI robot lawyer was set to argue in court. Scared lawyers shut it down with jail threats

A British man who planned to have a “robot lawyer” help a defendant fight a traffic ticket has dropped the effort after receiving threats of possible prosecution and jail time.

Joshua Browder, the CEO of the New York-based startup DoNotPay, created a way for people contesting traffic tickets to use arguments in court generated by artificial intelligence.

Here’s how it was supposed to work: The person challenging a speeding ticket would wear smart glasses that both record court proceedings and dictate responses into the defendant’s ear from a small speaker. The system relied on a few leading AI text generators, including ChatGPT and DaVinci.

The first-ever AI-powered legal defense was set to take place in California on Feb. 22, but not anymore.

As word got out, an uneasy buzz began to swirl among various state bar officials, according to Browder. He says angry letters began to pour in.

“Multiple state bars have threatened us,” Browder said. “One even said a referral to the district attorney’s office and prosecution and prison time would be possible.”

In particular, Browder said one state bar official noted that the unauthorized practice of law is a misdemeanor in some states, punishable by up to six months in county jail.

“Even if it wouldn’t happen, the threat of criminal charges was enough to give it up,” he said. “The letters have become so frequent that we thought it was just a distraction and that we should move on.”

State bar organizations license and regulate attorneys, as a way to ensure people hire lawyers who understand the law.

Browder declined to say which state bar in particular sent letters, or which official made the threat of possible prosecution, saying his startup, DoNotPay, is under investigation by multiple state bars, including California’s.

[…]

“The truth is, most people can’t afford lawyers,” he said. “This could’ve shifted the balance and allowed people to use tools like ChatGPT in the courtroom that maybe could’ve helped them win cases.”

The future of robot lawyers faces uncertainty for another reason that is far simpler than the bar officials’ existential questions: courtroom rules.

Recording audio during a live legal proceeding is not permitted in federal court and is often prohibited in state courts. The AI tools developed by DoNotPay, which remain completely untested in actual courtrooms, require recording audio of arguments in order for the machine-learning algorithm to generate responses.

“I think calling the tool a ‘robot lawyer’ really riled a lot of lawyers up,” Browder said. “But I think they’re missing the forest for the trees. Technology is advancing and courtroom rules are very outdated.”

 

Source: An AI robot lawyer was set to argue in court. Real lawyers shut it down. : NPR

Lawyers protecting their own at the cost of the population? Who’d have thunk it?

Wearable Ultrasound Patch the Size of a Stamp Images the Heart in Real-Time

A wearable ultrasound imager for the heart that is roughly the size of a postage stamp, can be worn for up to 24 hours, and works even during exercise may one day help doctors spot cardiac problems that current medical technology might miss, a new study finds.

Heart disease is the leading cause of death among the elderly, and is increasingly becoming a problem among those who are younger as well because of unhealthy diets and other factors. The signs of heart disease are often brief and unpredictable, so long-term cardiac imaging may help spot heart anomalies that might otherwise escape detection.

For instance, patients with heart failure may at times seem fine at rest, “as the heart sacrifices its efficiency to maintain the same cardiac output,” says study colead author Hongjie Hu, a nanoengineer at the University of California, San Diego. “Pushing the heart towards its limits during exercise can make the lack of efficiency become apparent.”

In addition, the heart can quickly recover from problems it may experience during exercise. This means doctors may fail to detect these issues, since cardiac imaging conventionally happens after exercise, not during it, Hu says.

[…]

Now scientists have developed a wearable ultrasound device that can enable safe, continuous, real-time, long-term, and highly detailed imaging of the heart. They detailed their findings online on 25 January in the journal Nature.

[…]

The new device is a patch 1.9 centimeters long by 2.2 cm wide and only 0.9 millimeters thick. It uses an array of piezoelectric transducers to send and receive ultrasound waves in order to generate a constant stream of images of the structure and function of the heart. The researchers were able to get such images even during exercise on a stationary bike. No skin irritation or allergy was seen after 24 hours of continuous wear.

[…]

The new patch is about as flexible as human skin. It can also stretch up to 110 percent of its size, which means it can survive far more strain than typically experienced on human skin. These features help it stick onto the body, something not possible with the rigid equipment often used for cardiac imaging.

[…]

Traditional cardiac ultrasound imaging constantly rotates an ultrasound probe to analyze the heart in multiple dimensions. To eliminate the need for this rotation, the array of ultrasound sensors and emitters in the new device is shaped like a cross so that ultrasonic waves can travel at right angles to each other.

The scientists developed a custom deep-learning AI model that can analyze the data from the patch and automatically and continuously estimate vital details, such as the percentage of blood pumped out of the left ventricle with each beat, and the volume of blood the heart pumps out with each beat and every minute. The root of most heart problems is the heart not pumping enough blood, an issue that often manifests only when the body is moving, the researchers note.
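The quantities the model tracks follow directly from the left-ventricular volume curve the patch images: stroke volume is end-diastolic minus end-systolic volume, ejection fraction is stroke volume as a fraction of end-diastolic volume, and cardiac output is stroke volume times heart rate. A quick worked example with made-up but physiologically typical numbers:

```python
# Standard cardiac formulas applied to illustrative (made-up) values; the patch's
# AI model estimates the underlying volumes continuously from the ultrasound images.
edv_ml = 120.0   # end-diastolic volume: left ventricle at its fullest
esv_ml = 50.0    # end-systolic volume: left ventricle after contraction
heart_rate_bpm = 70.0

stroke_volume_ml = edv_ml - esv_ml                     # blood pumped per beat
ejection_fraction = stroke_volume_ml / edv_ml          # fraction pumped out per beat
cardiac_output_l_per_min = stroke_volume_ml * heart_rate_bpm / 1000.0  # per minute

print(f"stroke volume:     {stroke_volume_ml:.0f} mL/beat")
print(f"ejection fraction: {ejection_fraction:.0%}")
print(f"cardiac output:    {cardiac_output_l_per_min:.1f} L/min")
```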

[…]

 

Source: Wearable Ultrasound Patch Images the Heart in Real-Time

Pet food retailer Zooplus hits out at Royal Canin’s ‘excessive’ price increases – and offers customers 10% off its competitors

[…]

Customers have been reporting steep price increases across a number of items from Royal Canin – with one saying her food had increased by £15 for a 10kg bag in less than a year.

Zooplus, an online pet food seller that stocks Royal Canin – among other brands – said it did not want to pass these price increases on to its customers, branding them “excessive”, and saying “value for money is important to us”.

The German retailer explained that people may find it difficult to buy Royal Canin products from its site, as it has limited the number of items each household can purchase.

[…]

 

Source: Pet food retailer Zooplus hits out at Royal Canin’s ‘excessive’ price increases – and offers customers 10% off its competitors | UK News | Sky News

Meta’s WhatsApp fined 5.5 mln euro by lead EU privacy regulator

Meta’s (META.O) WhatsApp subsidiary was fined 5.5 million euros ($5.95 million) on Thursday by Ireland’s Data Privacy Commissioner (DPC), its lead EU privacy regulator, for an additional breach of the bloc’s privacy laws.

The DPC also told WhatsApp to reassess how it uses personal data for service improvements following a similar order it issued this month to Meta’s other main platforms, Facebook and Instagram, which stated Meta must reassess the legal basis upon which it targets advertising through the use of personal data.

[…]

Source: Meta’s WhatsApp fined 5.5 mln euro by lead EU privacy regulator | Reuters

Microsoft rummages through your PC to look at Office installs

Microsoft wants to know how many out-of-support copies of Office are installed on Windows PCs, and it intends to find out by pushing a patch through Microsoft Update that it swears is safe, not that you asked.

Quietly mentioned in a support post this week, update KB5021751 is targeting versions of Office “including” 2007 and 2010, both of which have been out of service for several years. Office 2013 is also being asked after as it’s due to lose support this coming April.

“This update will run one time silently without installing anything on the user’s device,” Microsoft said, followed by instructions on how to download and install the update, which Microsoft said has been scanned to ensure it’s not infected by malware.

[…]

Microsoft’s description of its out-of-support Office census update leaves much to the imagination, including whether the paragraph describing installation of the update (which directly contradicts the paragraph above) is simply misplaced boilerplate language that doesn’t apply to KB5021751.

Also missing is any explanation of how the update will gather info on Office installations, whether it is collecting any other system information or what exactly will be transmitted and stored by Microsoft.

Because the nature of the update is unclear, it’s also unknown what may be left behind after it runs. Microsoft said that it is a single-run, silent process, but made no mention of any traces the update might leave behind.

[…]

Source: Microsoft pushing update to count unsupported Office install • The Register

Stay out of MY PC!

Up to 925,000 Norton LifeLock Accounts Targeted in credential stuffing attack

Thousands of people who use Norton password manager began receiving emailed notices this month alerting them that an unauthorized party may have gained access to their personal information along with the passwords they have stored in their vaults.

Gen Digital, Norton’s parent company, said the security incident was the result of a credential-stuffing attack rather than an actual breach of the company’s internal systems. Gen’s portfolio of cybersecurity services has a combined user base of 500 million users — of which about 925,000 active and inactive users, including approximately 8,000 password manager users, may have been targeted in the attack, a Gen spokesperson told CNET via email.

[…]

Norton’s intrusion detection systems detected an unusual number of failed login attempts on Dec. 12, the company said in its notice. On further investigation, around Dec. 22, Norton was able to determine that the attack began around Dec. 1.

“Norton promptly notified both regulators and customers as soon as the team was able to confirm that data was accessed in the attack,” Gen’s spokesperson said.

Personal data that may have been compromised includes Norton users’ full names, phone numbers and mailing addresses. Norton also said it “cannot rule out” that password manager vault data including users’ usernames and passwords were compromised in the attack.

“Systems have not been compromised, and they are safe and operational, but as is all too commonplace in today’s world, bad actors may take credentials found elsewhere, like the Dark Web, and create automated attacks to gain access to other unrelated accounts,”

[…]

Source: Norton LifeLock Accounts Targeted: What to Know and How to Protect Your Passwords – CNET

Google Accused of Creating Digital Ad Monopoly in New Justice Dept. Suit

The Department of Justice filed a lawsuit against Google Tuesday, accusing the tech giant of using its market power to create a monopoly in the digital advertising business over the course of 15 years.

Google “corrupted legitimate competition in the ad tech industry by engaging in a systematic campaign to seize control of the wide swath of high-tech tools used by publishers, advertisers and brokers, to facilitate digital advertising,” the Justice Department alleges. Eight state attorneys general joined in the suit, filed in Virginia federal court. Google has faced five antitrust suits since 2020.

[…]

Source: Google Accused of Digital Ad Monopoly in New Justice Dept. Suit

Perfectly Good MacBooks From 2020 Are Being Sold for Scrap Because of Activation Lock

Secondhand MacBooks that retailed for as much as $3,000 are being turned into parts because recyclers have no way to log in and factory reset the machines, which are often just a couple of years old.

“How many of you out there would like a 2-year-old M1 MacBook? Well, too bad, because your local recycler just took out all the Activation Locked logic boards and ground them into carcinogenic dust,” John Bumstead, a MacBook refurbisher and owner of the RDKL INC repair store, said in a recent tweet.

The problem is Apple’s T2 security chip. First introduced in 2018, the chip makes it impossible for anyone who isn’t the original owner to log into the machine. It’s a boon for security and privacy and a plague on the secondhand market. “Like it has been for years with recyclers and millions of iPhones and iPads, it’s pretty much game over with MacBooks now—there’s just nothing to do about it if a device is locked,” Bumstead told Motherboard. “Even the jailbreakers/bypassers don’t have a solution, and they probably won’t because Apple proprietary chips are so relatively formidable.” When Apple released its own silicon with the M1, it integrated the features of the T2 into those computers.

[…]

Bumstead told Motherboard that every year Apple makes life a little harder for the second hand market. “The progression has been, first you had certifications with unrealistic data destruction requirements, and that caused recyclers to pull drives from machines and sell without drives, but then as of 2016 the drives were embedded in the boards, so they started pulling boards instead,” he said. “And now the boards are locked, so they are essentially worthless. You can’t even boot locked 2018+ MacBooks to an external device because by default the MacBook security app disables external booting.”

Motherboard first reported on this problem in 2020, but Bumstead said it’s gotten worse recently. “Now we’re seeing quantity come through because companies with internal 3-year product cycles are starting to dump their 2018/2019s, and inevitably a lot of those are locked,” he said.

[…]

Bumstead offered some solutions to the problem. “When we come upon a locked machine that was legally acquired, we should be able to log into our Apple account, enter the serial and any given information, then click a button and submit the machine to Apple for unlocking,” he said. “Then Apple could explore its records, query the original owner if it wants, but then at the end of the day if there are no red flags and the original owner does not protest within 30 days, the device should be auto-unlocked.”

[…]

Source: Perfectly Good MacBooks From 2020 Are Being Sold for Scrap Because of Activation Lock

Indian Android Users Can Finally Use Alternate Search and Payment Methods and forked Google apps

Android users in India will soon have more control over their devices, thanks to a court ruling. Beginning next month, Indian Android users can choose a different billing system when paying for apps and in-app smartphone purchases rather than defaulting to the Play Store. Google will also allow Indian users to select a different search engine as their default right as they set up a new device, which might have implications for upcoming EU regulations.

The move comes after a ruling last week by India’s Supreme Court. The trial started late last year when the Competition Commission of India (CCI) fined Google $161 million for imposing restrictions on its manufacturing partners. Google attempted to challenge the order by maintaining this kind of practice would stall the Android ecosystem and that “no other jurisdiction has ever asked for such far-reaching changes.”

[…]

Google also won’t be able to require the installation of its branded apps to grant the license for running Android OS anymore. From now on, device manufacturers in India will be able to license “individual Google apps” as they like for pre-installation rather than needing to bundle the whole kit and caboodle. Google is also updating the Android compatibility requirements for its OEM partners to “build non-compatible or forked variants.”

[…]

Of particular note is seeing how users will react to being able to choose whether to buy apps and other in-app purchases through the Play Store, where Google takes a 30% cut from each transaction, or through an alternative billing service like JIO Money or Paytm—or even Amazon Pay, available in India.

[…]

The Department of Justice in the United States is also suing Google’s parent company, Alphabet, for a second time this week for practices within its digital advertising business, alleging that the company “corrupted legitimate competition in the ad tech industry” to build out its monopoly.

Source: Indian Android Users Can Finally Use Alternate Search and Payment Methods

Airline owned through open Jenkins and hardcoded AWS – TSA NoFly List found and exposed

how to completely own an airline in 3 easy steps

and grab the TSA nofly list along the way

note: this is a slightly more technical* and comedic write up of the story covered by my friends over at dailydot, which you can read here

*i say slightly since there isnt a whole lot of complicated technical stuff going on here in the first place

step 1: boredom

like so many other of my hacks this story starts with me being bored and browsing shodan (or well, technically zoomeye, chinese shodan), looking for exposed jenkins servers that may contain some interesting goods. at this point i’ve probably clicked through about 20 boring exposed servers with very little of any interest, when i suddenly start seeing some familiar words. “ACARS“, lots of mentions of “crew” and so on. lots of words i’ve heard before, most likely while binge watching Mentour Pilot YouTube videos. jackpot. an exposed jenkins server belonging to CommuteAir.

zoomeye search for x-jenkins

step 2: how much access do we have really?

ok but let’s not get too excited too quickly. just because we have found a funky jenkins server doesn’t mean we’ll have access to much more than build logs. it quickly turns out that while we don’t have anonymous admin access (yes that’s quite frequently the case [god i love jenkins]), we do have access to build workspaces. this means we get to see the repositories that were built for each one of the ~70 build jobs.

step 3: let’s dig in

most of the projects here seem to be fairly small spring boot projects. the standardized project layout and extensive use of the resources directory for configuration files will be very useful in this whole endeavour.

the very first project i decide to look at in more detail is something about “ACARS incoming”, since ive heard the term acars before, and it sounds spicy. a quick look at the resource directory reveals a file called application-prod.properties (same also for -dev and -uat). it couldn’t just be that easy now, could it?

well, it sure is! two minutes after finding said file im staring at filezilla connected to a navtech sftp server filled with incoming and outgoing ACARS messages. this aviation shit really do get serious.

a photo of a screen showing filezilla navigated to a folder called ForNavtech/ACARS_IN full of acars messages, the image is captioned like a meme with "this aviation shit get serious"
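for the curious: once a spring boot application-prod.properties file hands you plaintext sftp credentials, the “two minutes later im staring at filezilla” step really is this trivial. a rough python equivalent, with the hostname, username and password obviously made up (not the real navtech ones):

```python
# rough illustration only: connect to an sftp server with credentials lifted from
# a spring boot application-prod.properties file. host/user/pass are made up.
import paramiko

HOST, USER, PASSWORD = "sftp.example-navtech.invalid", "acars_user", "hunter2"

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # fine for a one-off poke
client.connect(HOST, username=USER, password=PASSWORD)

sftp = client.open_sftp()
for entry in sftp.listdir("ForNavtech/ACARS_IN"):  # folder name as seen in the screenshot
    print(entry)

sftp.close()
client.close()
```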

here is a sample of a departure ACARS message:

screenshot of a terminal showing what an ACARS RCV file looks like

from here on i started trying to find journalists interested in a probably pretty broad breach of US aviation. which unfortunately got people’s hopes up in thinking i was behind the TSA problems and groundings a day earlier, but unfortunately im not quite that cool. so while i was waiting for someone to respond to my call for journalists i just kept digging, and oh the things i found.

as i kept looking at more and more config files in more and more of the projects, it dawned on me just how heavily i had already owned them within just half an hour or so. hardcoded credentials there would allow me access to navblue apis for refueling, cancelling and updating flights, swapping out crew members and so on (assuming i was willing to ever interact with a SOAP api in my life which i sure as hell am not).

i however kept looking back at the two projects named noflycomparison and noflycomparisonv2, which seemingly take the TSA nofly list and check if any of commuteair’s crew members have ended up there. there are hardcoded credentials and s3 bucket names, however i just cant find the actual list itself anywhere. probably partially because it seemingly always gets deleted immediately after processing it, most likely specifically because of nosy kittens like me.

heavily redacted example of a config file from one of the repositories

fast forward a few hours and im now talking to Mikael Thalen, a staff writer at dailydot. i give him a quick rundown of what i have found so far and how in the meantime, just half an hour before we started talking, i have ended up finding AWS credentials. i now seemingly have access to pretty much their entire aws infrastructure via aws-cli. numerous s3 buckets, dozens of dynamodb tables, as well as various servers and much more. commute really loves aws.

two terminal screenshots composed together showing some examples of aws buckets and dynamodb tables
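the aws side is just as unexciting tooling-wise: with a leaked access key pair, plain boto3 (or the aws cli) will happily enumerate whatever the account can see. an illustrative sketch with made-up keys:

```python
# illustrative sketch: enumerate s3 buckets and dynamodb tables with boto3 once
# you have a leaked access key pair. the keys below are fake placeholders.
import boto3

session = boto3.Session(
    aws_access_key_id="AKIAXXXXXXXXXXXXXXXX",                    # made up
    aws_secret_access_key="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",  # made up
    region_name="us-east-1",
)

s3 = session.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    print("bucket:", bucket["Name"])

dynamodb = session.client("dynamodb")
for table in dynamodb.list_tables()["TableNames"]:
    print("table:", table)
```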

i also share with him how close we seemingly are to actually finding the TSA nofly list, which would obviously immediately make this an even bigger story than if it were “only” a super trivially ownable airline. i had even peeked at the nofly s3 bucket at this point which was seemingly empty. so we took one last look at the noflycomparison repositories to see if there is anything in there, and for the first time actually take a peek at the test data in the repository. and there it is. three csv files, employee_information.csv, NOFLY.CSV and SELECTEE.CSV. all committed to the repository in july 2022. the nofly csv is almost 80mb in size and contains over 1.56 million rows of data. this HAS to be the real deal (we later get confirmation that it is indeed a copy of the nofly list from 2019).

holy shit, we actually have the nofly list. holy fucking bingle. what?! :3

me holding a sprigatito pokemon plushie in front of a laptop screen showing a very blurry long csv list in vscode

with the jackpot found and being looked into by my journalism friends i decided to dig a little further into aws. grabbing sample documents from various s3 buckets, going through flight plans and dumping some dynamodb tables. at this point i had found pretty much all PII imaginable for each of their crew members. full names, addresses, phone numbers, passport numbers, pilot’s license numbers, when their next linecheck is due and much more. i had trip sheets for every flight, the potential to access every flight plan ever, a whole bunch of image attachments to bookings for reimbursement flights containing yet again more PII, airplane maintenance data, you name it.

i had owned them completely in less than a day, with pretty much no skill required besides the patience to sift through hundreds of shodan/zoomeye results.

so what happens next with the nofly data

while the nature of this information is sensitive, i believe it is in the public interest for this list to be made available to journalists and human rights organizations. if you are a journalist, researcher, or other party with legitimate interest, please reach out at nofly@crimew.gay. i will only give this data to parties that i believe will do the right thing with it.

note: if you email me there and i do not reply within a regular timeframe it is very likely my reply ended up in your spam folder or got lost. using email not hosted by google or msft is hell. feel free to dm me on twitter in that case.

support me

if you liked this or any of my other security research feel free to support me on my ko-fi. i am unemployed and in a rather precarious financial situation and do this research for free and for the fun of it, so anything goes a long way.

Source: how to completely own an airline in 3 easy steps

US law enforcement has warrantless access to many money transfers

Your international money transfers might not be as discreet as you think. Senator Ron Wyden and The Wall Street Journal have learned that US law enforcement can access details of money transfers without a warrant through an obscure surveillance program the Arizona attorney general’s office created in 2014. A database stored at a nonprofit, the Transaction Record Analysis Center (TRAC), provides full names and amounts for larger transfers (above $500) sent between the US, Mexico and 22 other regions through services like Western Union, MoneyGram and Viamericas. The program covers data for numerous Caribbean and Latin American countries in addition to Canada, China, France, Malaysia, Spain, Thailand, Ukraine and the US Virgin Islands. Some domestic transfers also enter the data set.

[…]

The concern, of course, is that officials can obtain sensitive transaction details without court oversight or customers’ knowledge. An unscrupulous officer could secretly track large transfers. Wyden adds that the people in the database are more likely to be immigrants, minorities and low-income residents who don’t have bank accounts and already have fewer privacy protections. The American Civil Liberties Union also asserts that the subpoenas used to obtain this data violate federal law. Arizona issued at least 140 of these subpoenas between 2014 and 2021.

[…]

Source: US law enforcement has warrantless access to many money transfers | Engadget

Locust antenna / AI driven cyborg can identify scents

[…]In a study published Monday in the journal Biosensors and Bioelectronics, a group of researchers from Tel Aviv University (via Neuroscience News) said they recently created a robot that can identify a handful of smells with 10,000 times more sensitivity than some specialized electronics. They describe their robot as a bio-hybrid platform (read: cyborg). It features a set of antennae taken from a desert locust that is connected to an electronic system that measures the amount of electrical signal produced by the antennae when they detect a smell. They paired the robot with an algorithm that learned to characterize the smells by their signal output. In this way, the team created a system that could reliably differentiate between eight “pure” odors, including geranium, lemon and marzipan, and two mixtures of different smells. The scientists say their robot could one day be used to detect drugs and explosives.

A YouTube video from Tel Aviv University claims the robot is a “scientific first,” but last June researchers from Michigan State University published research detailing a system that used surgically-altered locusts to detect cancer cells. Back in 2016, scientists also tried turning locusts into bomb-sniffing cyborgs. What can I say, after millennia of causing crop failures, the pests could finally be useful for something.
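The “algorithm that learned to characterize the smells” is, at its core, a classifier over features extracted from the antenna’s electrical response. A toy sketch of that idea on synthetic feature vectors (the Tel Aviv team’s actual feature extraction and model are not described here):

```python
# Toy sketch: classify odors from antenna response features. Synthetic data only;
# the Tel Aviv system's real feature extraction and model are not described here.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
odors = ["geranium", "lemon", "marzipan"]
signatures = rng.normal(0, 1, size=(len(odors), 16))        # made-up per-odor response profile

labels = rng.integers(0, len(odors), size=300)
X = signatures[labels] + rng.normal(0, 0.3, size=(300, 16))  # noisy measurements
y = np.array(odors)[labels]

clf = KNeighborsClassifier(n_neighbors=5).fit(X[:250], y[:250])
print("toy odor accuracy:", clf.score(X[250:], y[250:]))
```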

Source: Virgin Orbit clarifies the cause behind its ‘Start Me Up’ mission’s failure to reach orbit

Coating to protect spacecraft from heat and harvest solar energy developed

[…]The research team working with Airbus at the University of Surrey’s Advanced Technology Institute claims its nano-coating, referred to as a Multifunctional Nanobarrier Structure (MFNS), can be applied to the surfaces of equipment, including antennas, and has been shown to reduce the operating temperature of such surfaces from 120°C to 60°C (248°F to 140°F).

In its study published online, the team explains that thermal control is essential for most spaceborne equipment as heating from sunlight can cause large temperature differences across satellites that would result in mechanical stresses and possible misalignment of scientific instruments such as optical components. Paradoxically, space systems also require heat pipes to ensure minimal heating so that payloads can withstand the coldest space conditions.

[…]

The solution the team developed is a multilayer protection nanobarrier, which it says comprises a buffer layer made of poly(p-xylylene) and a diamond-like carbon superlattice layer that gives it a mechanically and environmentally ultra-stable platform.

The MFNS is deposited onto surfaces using a custom plasma-enhanced chemical vapor deposition (PECVD) system, which operates at room temperature and so can be applied to heat-sensitive substrates.

The combined layer is a dielectric and therefore electromagnetically transparent across a wide range of radio frequencies, the study states, allowing it to be used to coat antenna structures without adding “significant interference” to the signal.

[…]

According to the team, the MFNS can be modulated to provide adjustable solar absorptivity in the ultraviolet to visible part of the spectrum, while at the same time exhibiting high and stable infrared emissivity. This is achieved by controlling the optical gap of individual layers.
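Why the absorptivity-to-emissivity ratio matters can be seen from a simple radiative balance: for a sunlit surface in vacuum that sheds heat only by radiating, absorbed solar power α·S must equal emitted power ε·σ·T⁴, so the equilibrium temperature scales with (α/ε)^(1/4). The numbers below are an idealized back-of-the-envelope illustration (flat plate, full sun, no conduction), not figures from the Surrey study:

```python
# Idealized radiative-balance illustration of why tuning solar absorptivity (alpha)
# against infrared emissivity (epsilon) controls a surface's equilibrium temperature.
# Flat sunlit plate radiating from one face; not numbers from the Surrey paper.
SIGMA = 5.670e-8        # Stefan-Boltzmann constant, W m^-2 K^-4
SOLAR_FLUX = 1361.0     # solar constant near Earth, W m^-2

def equilibrium_temp_c(alpha: float, epsilon: float) -> float:
    """Solve alpha * S = epsilon * sigma * T^4 for T, returned in Celsius."""
    t_kelvin = (alpha * SOLAR_FLUX / (epsilon * SIGMA)) ** 0.25
    return t_kelvin - 273.15

for alpha, epsilon in [(0.9, 0.8), (0.5, 0.8), (0.3, 0.9)]:
    print(f"alpha={alpha:.1f}, epsilon={epsilon:.1f} -> {equilibrium_temp_c(alpha, epsilon):6.1f} C")
```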

This extends to self-reconfiguration in orbit, if the report can be believed, by means of balancing the UV and atomic oxygen (AO) exposure of the MFNS coating. AO is created from molecular oxygen in the upper atmosphere by UV radiation, forming AO radicals commonly found in low Earth orbit, the research adds.

As to the harvesting of heat energy, this can be achieved through the creation of highly absorbing structures with a photothermal conversion efficiency as high as 96.66 percent, according to the team. This is aided by the deposition of a nitrogen-doped DLC superlattice layer in the coating which gives rise to enhanced optical absorption across a wide spectral range.

These enhanced properties, along with advanced manufacturing methods, demonstrate that the MFNS can be a candidate for many thermal applications such as photodetectors, emitters, smart radiators, and energy harvesting used in satellite systems and beyond, the study states.

[…]

Source: Can ‘space skin’ help future satellites harvest energy? • The Register

Researchers Breed Naturally Flame-Resistant Cotton

Chemical flame retardants can make us safer by preventing or slowing fires, but they’re linked to a range of unsettling health effects. To get around that concern, researchers with the U.S. Department of Agriculture have bred a new population of cotton that can self-extinguish after encountering a flame.

The team of scientists from the USDA’s Agricultural Research Service, led by Gregory N. Thyssen, bred 10 strains of cotton using alleles from 10 different parent cultivars. After creating fabrics with each of these strains, the researchers put them through burn tests and found that four of them were able to completely self-extinguish. Their work is published today in PLOS One.

[…]

These flame retardant cultivars could be a game-changer in the textile industry. Currently, efforts to make fabric flame retardant include applying chemicals that reduce a material’s ability to ignite; flame retardant chemicals have been added to many fabrics since at least the 1970s. While some have been pulled from the market, these chemicals don’t break down easily, and they can bioaccumulate in humans and animals, potentially leading to endocrine disruption, reproductive toxicity, and cancer. These new strains of cotton could be used to manufacture fabrics and products that have flame retardancy naturally baked in.

Source: Researchers Breed Naturally Flame-Resistant Cotton

Revealed: more than 90% of rainforest carbon offsets by biggest provider Verra are worthless, may be damaging, analysis shows

The forest carbon offsets approved by the world’s leading provider and used by Disney, Shell, Gucci and other big corporations are largely worthless and could make global heating worse, according to a new investigation.

The research into Verra, the world’s leading carbon standard for the rapidly growing $2bn (£1.6bn) voluntary offsets market, has found that, based on analysis of a significant percentage of the projects, more than 90% of their rainforest offset credits – among the most commonly used by companies – are likely to be “phantom credits” and do not represent genuine carbon reductions.

The analysis raises questions over the credits bought by a number of internationally renowned companies – some of them have labelled their products “carbon neutral”, or have told their consumers they can fly, buy new clothes or eat certain foods without making the climate crisis worse.

But doubts have been raised repeatedly over whether such offsets are really effective.

The nine-month investigation has been undertaken by the Guardian, the German weekly Die Zeit and SourceMaterial, a non-profit investigative journalism organisation. It is based on new analysis of scientific studies of Verra’s rainforest schemes.

[…]

Verra argues that the conclusions reached by the studies are incorrect and questions their methodology. It also points out that its work since 2009 has allowed billions of dollars to be channelled into the vital work of preserving forests.

The investigation found that:

  • Only a handful of Verra’s rainforest projects showed evidence of deforestation reductions, according to two studies, with further analysis indicating that 94% of the credits had no benefit to the climate.
  • The threat to forests had been overstated by about 400% on average for Verra projects, according to analysis of a 2022 University of Cambridge study.
  • Gucci, Salesforce, BHP, Shell, easyJet, Leon and the band Pearl Jam were among dozens of companies and organisations that have bought rainforest offsets approved by Verra for environmental claims.
  • Human rights issues are a serious concern in at least one of the offsetting projects. The Guardian visited a flagship project in Peru, and was shown videos that residents said showed their homes being cut down with chainsaws and ropes by park guards and police. They spoke of forced evictions and tensions with park authorities.

[…]

Two different groups of scientists – one internationally based, the other from Cambridge in the UK – looked at a total of about two-thirds of 87 Verra-approved active projects. A number were left out by the researchers when they felt there was not enough information available to fairly assess them.

The two studies from the international group of researchers found that, of the 29 Verra-approved projects where further analysis was possible, just eight showed evidence of meaningful deforestation reductions.

The journalists were able to do further analysis on those projects, comparing the estimates made by the offsetting projects with the results obtained by the scientists. The analysis indicated about 94% of the credits the projects produced should not have been approved.

Credits from 21 projects had no climate benefit, seven delivered between 52% and 98% less benefit than claimed under Verra's system, and one had 80% more impact, the investigation found.

Separately, the study by the University of Cambridge team of 40 Verra projects found that while a number had stopped some deforestation, the areas were extremely small. Just four projects were responsible for three-quarters of the total forest that was protected.

The journalists again analysed these results more closely and found that, in the 32 projects where it was possible to compare Verra's claims with the study findings, baseline scenarios of forest loss appeared to be overstated by about 400% on average. Three projects in Madagascar achieved excellent results and have a significant impact on the figures; if those projects are excluded, the average inflation rises to about 950%.
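
As a rough illustration of that kind of baseline comparison (the project names and hectare figures below are invented, not the investigation's data), an inflation figure can be computed per project as predicted forest loss versus the loss actually observed in comparable reference areas, then averaged with and without the most accurate projects:

    # Hypothetical example of a baseline-inflation calculation; the projects
    # and hectare figures are made up purely for illustration.
    projects = {
        "project_a": {"predicted_loss_ha": 10_000, "observed_loss_ha": 2_000},
        "project_b": {"predicted_loss_ha": 8_000, "observed_loss_ha": 1_500},
        "project_c": {"predicted_loss_ha": 5_000, "observed_loss_ha": 4_800},  # near-accurate baseline
    }

    def inflation_pct(p):
        """Percent by which the predicted (baseline) loss exceeds observed loss."""
        return (p["predicted_loss_ha"] / p["observed_loss_ha"] - 1) * 100

    all_projects = [inflation_pct(p) for p in projects.values()]
    print(f"average inflation, all projects: {sum(all_projects) / len(all_projects):.0f}%")

    accurate_excluded = [inflation_pct(p) for name, p in projects.items() if name != "project_c"]
    print(f"average inflation, excluding the near-accurate project: "
          f"{sum(accurate_excluded) / len(accurate_excluded):.0f}%")

In this toy example, setting aside the one near-accurate project raises the average inflation from roughly 280% to roughly 420%, mirroring the effect the journalists describe when the well-performing Madagascar projects are excluded.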

[…]

Barbara Haya, the director of the Berkeley Carbon Trading Project, has been researching carbon credits for 20 years, hoping to find a way to make the system function. She said: “The implications of this analysis are huge. Companies are using credits to make claims of reducing emissions when most of these credits don’t represent emissions reductions at all.

“Rainforest protection credits are the most common type on the market at the moment. And it’s exploding, so these findings really matter. But these problems are not just limited to this credit type. These problems exist with nearly every kind of credit.

“One strategy to improve the market is to show what the problems are and really force the registries to tighten up their rules so that the market could be trusted. But I’m starting to give up on that. I started studying carbon offsets 20 years ago studying problems with protocols and programs. Here I am, 20 years later having the same conversation. We need an alternative process. The offset market is broken.”

Source: Revealed: more than 90% of rainforest carbon offsets by biggest provider are worthless, analysis shows | Carbon offsetting | The Guardian

DARPA’s New X-Plane Aims To Maneuver With Nothing But Bursts Of Air

The Defense Advanced Research Projects Agency has moved into the next phase of its Control of Revolutionary Aircraft with Novel Effectors program, or CRANE. The project is centered on an experimental uncrewed aircraft, which Aurora Flight Sciences is developing, that does not have traditional moving surfaces to control the aircraft in flight.

Aurora Flight Sciences’ CRANE design, which does not yet have an official X-plane designation or nickname, instead uses an active flow control (AFC) system to maneuver the aircraft using bursts of highly pressurized air. This technology could eventually find its way onto other military and civilian designs. It could have particularly significant implications when applied to future stealth aircraft.

A subscale wind tunnel model of Aurora Flight Sciences’ CRANE X-plane design. Aurora Flight Sciences

The Defense Advanced Research Projects Agency (DARPA) issued a press release regarding the latest developments in the CRANE program yesterday. Aurora Flight Sciences, a subsidiary of Boeing, announced it had received a Phase 2 contract to continue work on the project back on December 12, 2022.

[…]

The design that Aurora ultimately settled on was more along the lines of a conventional plane. However, it has a so-called Co-Planar Joined Wing (CJW) planform consisting of two sets of wings attached to a single center fuselage that merge together at the tips, along with a twin vertical tail arrangement. As currently designed, the drone will use “banks” of nozzles installed at various points on the wings to maneuver in the air.

A wind tunnel model of one of Aurora Flight Sciences’ initial CRANE concepts with a joined wing. Aurora Flight Sciences

A wind tunnel model showing a more recent evolution of Aurora Flight Sciences’ CRANE X-plane design. Aurora Flight Sciences

The aircraft’s main engine arrangement is not entirely clear. A chin air intake under the forward fuselage, together with a single exhaust nozzle at the rear seen in official concept art and on wind tunnel models, would seem to point to a plan to power the aircraft with a single jet engine.

[…]

Interestingly, Aurora’s design “is configured to be a modular testbed featuring replaceable outboard wings and swappable AFC effectors. The modular design allows for testing of not only Aurora’s AFC effectors but also AFC effectors of various other designs,” a company press release issued in December 2022 said. “By expanding testing capabilities beyond Aurora-designed components, the program further advances its goal to provide the confidence needed for future aircraft requirements, both military and commercial, to include AFC-enabled capabilities.”

Aurora has already done significant wind tunnel testing of subscale models with representative AFC components as part of CRANE’s Phase 1. The company, along with Lockheed Martin, was chosen to proceed to that phase of the program in 2021.

“Using a 25% scale model, Aurora conducted tests over four weeks at a wind tunnel facility in San Diego, California. In addition to 11 movable conventional control surfaces, the model featured 14 AFC banks with eight fully independent controllable AFC air supply channels,” according to a press release the company put out in May 2022.

[…]

Getting rid of traditional control surfaces inherently allows a design to be more aerodynamic, and therefore to fly more efficiently, especially at higher altitudes. An aircraft with an AFC system doesn't need the various actuators and other components that move things like ailerons and rudders, offering new ways to reduce weight and bulk.

A DARPA briefing slide showing how the designs of traditional control surfaces, at their core, have remained largely unchanged after more than a century of other aviation technology developments. DARPA

A lighter and more streamlined aircraft design using an AFC system might be capable of greater maneuverability. This could be particularly true for uncrewed types that also do not have to worry about the physical limitations of a pilot.

The elimination of so many moving parts also means fewer things that can break, improving safety and reliability. This would do away with various maintenance and logistics requirements, too. It might make a military design more resilient to battle damage and easier to fix, as well.

[…]

The CRANE program and Aurora Flight Sciences’ design are, of course, not the first experiments with AFC technology. U.K.-headquartered BAE Systems, which was another of the participants in CRANE’s Phase 0, has been very publicly experimenting with various AFC concepts since at least 2010. The most recent of these developments was MAGMA, an AFC-equipped design described by BAE as a “large model” that actually flew.

“Over the past several decades, the active flow control community has made significant advancements that enable the integration of active flow control technologies into advanced aircraft,” Richard Wlezien, the CRANE Program Manager at DARPA, said in a statement included in today’s press release. “We are confident about completing the design and flight test of a demonstration aircraft with AFC as the primary design consideration.”

[…]

Source: DARPA’s New X-Plane Aims To Maneuver With Nothing But Bursts Of Air

This man used AI to write and illustrate a children’s book in one weekend. He wasn’t prepared for the backlash.

  • Ammaar Reshi wrote and illustrated a children’s book in 72 hours using ChatGPT and Midjourney.
  • The book went viral on Twitter after it was met with intense backlash from artists.
  • Reshi said he respected the artists’ concerns but felt some of the anger was misdirected.

Ammaar Reshi was reading a bedtime story to his friend’s daughter when he decided he wanted to write his own.

Reshi, a product-design manager at a financial-tech company based in San Francisco, told Insider he had little experience in illustration or creative writing, so he turned to AI tools.

In December he used OpenAI’s new chatbot, ChatGPT, to write “Alice and Sparkle,” a story about a girl named Alice who wants to learn about the world of tech, and her robot friend, Sparkle. He then used Midjourney, an AI art generator, to illustrate it.

Just 72 hours later, Reshi self-published his book on Amazon’s digital bookstore. The following day, he had the paperback in his hands, made for free via another Amazon service called KDP.

The front cover of “Alice and Sparkle,” Reshi’s AI-generated children’s book, which was meant to be a gift for his friends’ kids. Ammaar Reshi

He said he paid nothing to create and publish the book, though he was already paying for a $30-a-month Midjourney subscription.

Impressed with the speed and results of his project, Reshi shared the experience in a Twitter thread that attracted more than 2,000 comments and 5,800 retweets.

Reshi said he initially received positive feedback from users praising his creativity. But the next day, the responses were filled with vitriol.

“There was this incredibly passionate response,” Reshi said. “At 4 a.m. I was getting woken up by my phone blowing up every two minutes with a new tweet saying things like, ‘You’re scum’ and ‘We hate you.'”

Reshi said he was shocked by the intensity of the responses for what was supposed to be a gift for the children of some friends. It was only when he started reading through them that he discovered he had landed himself in the middle of a much larger debate.

Artists accused him of theft

Reshi’s book touched a nerve with some artists who argue that AI art generators are stealing their work.

Some artists claim their art has been used to train AI image generators like Midjourney without their permission. Users can enter artists’ names as prompts to generate art in their style.

Lensa AI, a photo-editing tool, went viral on social media last year after it launched an update that used AI to transform users’ selfies into works of art, leading artists to highlight their concerns about AI programs taking inspiration from their work without permission or payment.

“I had not read up on the issues,” Reshi said. “I realized that Lensa had actually caused this whole thing with that being a very mainstream app. It had spread that debate, and I was just getting a ton of hate for it.”

“I was just shocked, and honestly I didn’t really know how to deal with it,” he said.

Among the nasty messages, Reshi said he found people with reasonable and valid concerns.

“Those are the people I wanted to engage with,” he said. “I wanted a different perspective. I think it’s very easy to be caught up in your bubble in San Francisco and Silicon Valley, where you think this is making leaps, but I wanted to hear from people who thought otherwise.”

After learning more, he added to his Twitter thread saying that artists should be involved in the creation of AI image generators and that their “talent, skill, hard work to get there needs to be respected.”

He said he thinks some of the hate was misdirected at his one-off project, when Midjourney allows users to “generate as much art as they want.”

Reshi’s book was briefly removed from Amazon — he said Amazon paused its sales from January 6 to January 14, citing “suspicious review activity,” which he attributed to the volume of both five- and one-star reviews. He had sold 841 copies before it was removed.

Midjourney’s founder, David Holz, told Insider: “Very few images made on our service are used commercially. It’s almost entirely for personal use.”

He said that data for all AI systems are “sourced from broadly spidering the internet,” and most of the data in Midjourney’s model are “just photos.”

A creative process

Reshi said the project was never about claiming authorship over the book.

“I wouldn’t even call myself the author,” he said. “The AI is essentially the ghostwriter, and the other AI is the illustrator.”

But he did think the process was a creative one. He said he spent hours tweaking the prompts in Midjourney to try and achieve consistent illustrations.

Despite successfully creating an image of his heroine, Alice, to appear throughout the book, he wasn’t able to do the same for her robot friend. He had to use a picture of a different robot each time it appeared.

“It was impossible to get Sparkle the robot to look the same,” he said. “It got to a point where I had to include a line in the book that says Sparkle can turn into all kinds of robot shapes.”

A page from “Alice and Sparkle.” Reshi’s children’s book stirred up anger on Twitter. Ammaar Reshi

Some people also attacked the quality of the book’s writing and illustrations.

“The writing is stiff and has no voice whatsoever,” one Amazon reviewer said. “And the art — wow — so bad it hurts. Tangents all over the place, strange fingers on every page, and inconsistencies to the point where it feels like these images are barely a step above random.”

Reshi said he would be hesitant to put out an illustrated book again, but he would like to try other projects with AI.

“I’d use ChatGPT for instance,” he said, saying there seem to be fewer concerns around content ownership than with AI image generators.

The goal of the project was always to gift the book to the two children of his friends, who both liked it, Reshi added.

“It worked with the people I intended, which was great,” he said.


Source: This man used AI to write and illustrate a children’s book in one weekend. He wasn’t prepared for the backlash.

High-powered lasers can be used to steer lightning strikes

[…]

European researchers have successfully tested a system that uses terawatt-level laser pulses to steer lightning toward a 26-foot rod. The system isn't limited by the rod's physical height and can cover much wider areas, in this case 590 feet, while penetrating clouds and fog.

The design ionizes nitrogen and oxygen molecules, releasing electrons and creating a plasma that conducts electricity. Because the laser fires at a rapid 1,000 pulses per second, it's considerably more likely to intercept lightning as it forms. In the test, conducted between June and September 2021, lightning followed the beam for nearly 197 feet before hitting the rod.
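
For a rough sense of scale (our own back-of-envelope, using a typical textbook figure of roughly 20 milliseconds for a stepped leader's descent rather than a measurement from this experiment), the repetition rate means a fresh pulse arrives every millisecond, so tens of pulses are available to keep the conductive channel refreshed while a discharge develops:

    # Back-of-envelope only: how many pulses fire while a lightning leader forms.
    # The ~20 ms leader duration is a typical figure, not from this experiment.
    pulse_rate_hz = 1_000        # pulses per second, as reported
    leader_duration_s = 0.020    # ~20 ms stepped-leader descent (assumed)

    print(f"pulse spacing: {1_000 / pulse_rate_hz:.1f} ms")
    print(f"pulses during a ~20 ms leader: {pulse_rate_hz * leader_duration_s:.0f}")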

[…]

The University of Glasgow’s Matteo Clerici, who didn’t work on the project, noted to The Journal that the laser used in the experiment costs about $2.17 billion. The researchers also plan to significantly extend the range, to the point where a 33-foot rod would have an effective coverage of 1,640 feet.

[…]

Source: High-powered lasers can be used to steer lightning strikes | Engadget