About Robin Edgar

Organisational Structures | Technology and Science | Military, IT and Lifestyle consultancy | Social, Broadcast & Cross Media | Flying aircraft

Spy Tech Palantir’s Covid-era UK health contract extended without public consultation or competition

NHS England has extended its contract with US spy-tech biz Palantir for the system built at the height of the pandemic to give it time to resolve the twice-delayed procurement of a data platform to support health service reorganization and tackle the massive care backlog.

The contract has already been subject to the threat of a judicial review, after which NHS England – a non-departmental government body – agreed to three concessions, including the promise of public consultation before extending the contract.

Campaigners and legal groups are set to mount legal challenges around separate, but related, NHS dealings with Palantir.

In a notice published yesterday, NHS England said the contract would be extended until September 2023 in a deal worth £11.5 million ($13.8 million).

NHS England has been conducting a £360 million ($435 million) procurement of a separate, but linked, Federated Data Platform (FDP), a deal said to be a “must-win” for Palantir, a US data management company which cut its teeth working for the CIA and controversial US immigration agency ICE.

The contract notice for FDP, which kicks off the official competition, was originally expected in June 2022 but was delayed until September 2022, when NHS England told The Register it would be published. The notice has yet to appear.

[…]

Source: Palantir’s Covid-era UK health contract extended • The Register

LG allows you to choose picture mode by comparing pictures


Setting up a new TV? Ask any videophile or home theater nerd and they’ll probably tell you to set your picture mode to the movie/cinema option (or whatever’s closest on your particular TV) and leave it there. Traditionally, this has been the most color accurate option and leans toward a pleasant, warm white balance instead of the cooler temperature that usually accompanies “standard” modes. But there are inevitably those people who prefer the standard or vivid settings — much to the chagrin of enthusiasts.

With its new 2023 TV lineup, LG is throwing these conventional choices out the window — if you’re willing to try — and has come up with a new way of personalizing your picture preferences. Instead of giving you a few labeled options to switch between, a new “Personalized Picture Wizard” will present you with a series of images. On each screen, you choose one or two that look best to you.

Of course AI deep learning is involved. It’s 2023. (Photo by Chris Welch / The Verge)

After you do this six times, the TV will formulate a preset that’s based on your selections. It considers the brightness, color, and contrast levels that you indicated a preference for. LG says a ton of AI deep learning is involved throughout this process; it sampled millions of images in creating the Picture Wizard. If you’re ready to see how your picture mode looks while watching real content, you can hit “apply.”

Obviously LG will still be offering the tried and true picture settings along with deeper calibration options; your personalized picture mode will appear right alongside those in the settings menu on 2023 LG TVs. So you can easily switch between all of them and see the differences. For now, you can only create one personalized picture mode that applies to everyone using the same TV, but LG told me that it eventually wants to let each user profile make their own.

[…]

Source: LG wants to reinvent how you think of TV picture modes – The Verge

Apple Faces French $8.5M Fine For Illegal Data Harvesting

France’s data protection authority, CNIL, fined Apple €8 million (about $8.5 million) Wednesday for illegally harvesting iPhone owners’ data for targeted ads without proper consent.

[…]

The French fine, though, is the latest addition to a growing body of evidence that Apple may not be the privacy guardian angel it makes itself out to be.

[…]

Apple failed to “obtain the consent of French iPhone users (iOS 14.6 version) before depositing and/or writing identifiers used for advertising purposes on their terminals,” the CNIL said in a statement. The CNIL’s fine calls out the search ads in Apple’s App Store, specifically. A French court fined the company over $1 million in December over its commercial practices related to the App Store.

[…]

Eight million euros is peanuts for a company that makes billions a year on advertising alone and is so inconceivably wealthy that it had enough money to lose $1 trillion in market value last year—making Apple the second company in history to do so. The fine could have been higher but for the fact that Apple’s European headquarters are in Ireland, not France, giving the CNIL a smaller target to go after.

Still, it’s a signal that Apple may face a less friendly regulatory future in Europe. Competition authorities are investigating Apple for anti-competitive business practices, and regulators are even forcing the company to abandon its proprietary charging cable in favor of USB-C ports.

Source: Apple Faces Rare $8.5M Fine For Illegal Data Harvesting

Asus brings glasses-free 3D to OLED laptops | Ars Technica

Asus announced an upcoming feature that allows users to view and work with content in 3D without wearing 3D glasses. Similar technology has been used in a small number of laptops and displays before, but Asus is incorporating the feature for the first time in OLED laptop screens. Combined with high refresh rates, unique input methods like an integrated dial, and the latest CPUs and laptop GPUs, the company is touting the laptops with the Asus Spatial Vision feature as powerful niche options for creative professionals looking for new ways to work.

Asus’ Spatial Vision 3D tech is debuting on two laptops in Q2 this year: the ProArt Studiobook 16 3D OLED (H7604) and Vivobook Pro 16 3D OLED (K6604).

Asus’ ProArt Studiobook 16 3D OLED (H7604) is one of the two PCs announced with Asus Spatial Vision.

The laptops each feature a 16-inch, 3200×2000 OLED panel with a 120 Hz refresh rate. The OLED panel is topped with a layer of optical resin, a glass panel, and a lenticular lens layer. The lenticular lens works with a pair of eye-tracking cameras to render real-time images for each eye that adjust with your physical movements.

In a press briefing, an Asus spokesperson said that because the OLED screens claim a low gray-to-gray response time of 0.2 ms, as well as the extremely high contrast that comes with OLED, there’s no crosstalk between the left and right eye’s image, ensuring more realistic-looking content. However, Asus’ product pages for the laptops acknowledge that experiences may vary, and some users may still suffer from “dizziness or crosstalk due to other reasons, and this varies according to the individual.” Asus said it’s aiming to offer demos, which would be worth trying out before committing to this unique feature.

The ProArt Studiobook 16 3D OLED weighs 5.29 lbs and is 0.94 inches thick.

On top of the lenticular lens is a 2D/3D liquid-crystal switching layer, which is topped with a glass front panel with an anti-reflective coating. According to Asus, it’ll be easy to switch from 2D mode to 3D and back again. When the laptops aren’t in 3D mode, their display will appear as a highly specced OLED screen, Asus claimed.

The laptops can apply a 3D effect to any game, movie, or content that supports 3D. However, content not designed for 3D display may appear more “stuttery,” per a demo The Verge saw. The laptops are primarily for people working with and creating 3D models and content, such as designers and architects.

The Vivobook Pro 16X 3D OLED weighs 4.41 lbs and is 0.9 inches thick.

The two laptops will ship with Spatial Vision Hub software. It includes a Model Viewer, Player for movies and videos, Photo Viewer for transforming side-by-side photos shot with a 180-degree camera into one stereoscopic 3D image, and Connector, a plug-in that Asus’ product page says is compatible with “various apps and tools, so you can easily view any project in 3D.”

Asus’ Spatial Vision laptops have glasses-free 3D that’s similar to some Acer products already released. In May, Acer announced the SpatialLabs View and SpatialLabs View Pro portable monitors that can convert 2D content into stereoscopic 3D by rendering images for the left and right eye and projecting them through an optical lens. The monitors require an Intel Core i7 CPU and RTX 3070 Ti for laptops or RTX 2080 for desktops, however. Asus’ laptops give you everything you need to try the emerging technology.

Acer has also released laptops with glasses-free 3D: the ConceptD SpatialLabs Edition workstation-esque clamshell and the Acer Predator Helios 300 Spatial Edition gaming laptop.

[…]

Source: Asus brings glasses-free 3D to OLED laptops | Ars Technica

US Moves To Bar Noncompete Agreements in Labor Contracts

In a far-reaching move that could raise wages and increase competition among businesses, the Federal Trade Commission on Thursday unveiled a rule that would block companies from limiting their employees’ ability to work for a rival. From a report: The proposed rule would ban provisions of labor contracts known as noncompete agreements, which prevent workers from leaving for a competitor or starting a competing business for months or years after their employment, often within a certain geographic area. The agreements have applied to workers as varied as sandwich makers, hair stylists, doctors and software engineers.

Studies show that noncompetes, which appear to directly affect roughly 20 percent to 45 percent of private-sector U.S. workers, hold down pay because job switching is one of the more reliable ways of securing a raise. Many economists believe they help explain why pay for middle-income workers has stagnated in recent decades. Other studies show that noncompetes protect established companies from start-ups, reducing competition within industries. The arrangements may also harm productivity by making it hard for companies to hire workers who best fit their needs.

The F.T.C. proposal is the latest in a series of aggressive and sometimes unorthodox moves to rein in the power of large companies under the agency’s chair, Lina Khan. “Noncompetes block workers from freely switching jobs, depriving them of higher wages and better working conditions, and depriving businesses of a talent pool that they need to build and expand,” Ms. Khan said in a statement announcing the proposal. “By ending this practice, the F.T.C.’s proposed rule would promote greater dynamism, innovation and healthy competition.”

Source: US Moves To Bar Noncompete Agreements in Labor Contracts – Slashdot

200 Million Twitter Users’ Data for Sale on the Dark Web for $2

[…]

The short version of the latest drama is this: data stolen from Twitter more than a year ago found its way onto a major dark web marketplace this week. The asking price? The crypto equivalent of $2. In other words, it’s basically being given away for free. The hacker who posted the data haul, a user who goes by the moniker “StayMad,” shared the data on the market “Breached,” where anyone can now purchase and peruse it. The cache is estimated to cover at least 235 million people’s information.

[…]

According to multiple reports, the breach material includes the email addresses and/or phone numbers of some 235 million people, the credentials that users used to set up their accounts. This information has been paired with details publicly scraped from users’ profiles, thus allowing the cybercriminals to create more complete data dossiers on potential victims. Bleeping Computer reports that the information for each user includes not only email addresses and phone numbers but also names, screen names/user handles, follower count, and account creation date.

[…]

The data that appeared on “Breached” this week was actually stolen during 2021. Per the Washington Post, cybercriminals exploited an API vulnerability in Twitter’s platform to call up user information connected to hundreds of millions of user accounts. This bug created a bizarre “lookup” function, allowing any person to plug in a phone number or email to Twitter’s systems, which would then verify whether the credential was connected to an active account. The bug would also reveal which specific account was tied to the credential in question.
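To see why such a lookup function is dangerous, here is a minimal sketch of the enumeration attack pattern. The dataset, handles, and `lookup` function are hypothetical illustrations of the general flaw, not Twitter’s actual API:

```python
# Toy sketch of credential enumeration via a lookup endpoint.
# The data and function below are hypothetical, not Twitter's API.

# Imagine a server-side mapping of contact credentials to account handles.
ACCOUNTS = {
    "alice@example.com": "@alice",
    "+15551234567": "@bob_handle",
}

def lookup(credential: str):
    """Vulnerable endpoint: confirms whether an email/phone has an
    account, and reveals which account it is tied to."""
    return ACCOUNTS.get(credential)  # None means "no account"

# An attacker with a list of leaked emails can link each one to a handle:
leaked = ["alice@example.com", "carol@example.com"]
matches = {c: lookup(c) for c in leaked if lookup(c)}
print(matches)  # {'alice@example.com': '@alice'}
```

Run at scale against millions of leaked credentials, this is exactly how scraped profile data gets paired with private emails and phone numbers.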

The vulnerability was originally reported through Twitter’s bug bounty program in January of 2022 and was first publicly acknowledged last August.

[…]

Source: 200 Million Twitter Users’ Data for Sale on the Dark Web for $2

Californian law forces salary disclosure for companies > 15 people – fair and inclusive

The law affects every company with more than 15 employees looking to fill a job that could be performed from the state of California. It covers hourly and temporary work, all the way up to openings for highly paid technology executives.

That means it’s now possible to know the salaries top tech companies pay their workers. For example:

  • A program manager in Apple’s augmented reality group will receive base pay between $121,000 and $230,000 per year, according to an Apple posting Wednesday.
  • A midcareer software engineer at Google Health can expect to make between $126,000 and $190,000 per year.
  • A director of software engineering at Meta
Notably, these salary listings do not include any bonuses or equity grants, which many tech companies use to attract and retain employees.

[…]

In the U.S., there are now 13 cities and states that require employers to share salary information, covering about 1 in 4 workers, according to Payscale, a software firm focusing on salary comparison.

California’s pay transparency law is intended to reduce gender and race pay gaps and help minorities and women better compete in the labor market. For example, people can compare their current pay with job listings with the same job title and see if they’re being underpaid.

Women earn about 83 cents for every dollar a man earns, according to the U.S. Census.

[…]

There are two primary components to California Senate Bill No. 1162, which was passed in September and went into effect Jan. 1.

First is the pay transparency component on job listings, which applies to any company with more than 15 employees if the job could be done in California.

The second part requires companies with more than 100 employees to submit a pay data report to the state of California with detailed salary information broken down by race, sex and job category. Companies have to provide a similar report on the federal level, but California now requires more details.

Employers are required to maintain detailed records of each job title and its wage history, and California’s labor commissioner can inspect those records. California can enforce the law through fines and can investigate violations. The reports won’t be published publicly under the new law.

[…]

The new law doesn’t require employers to post total compensation, meaning that companies can leave out information about stock grants and bonuses, offering an incomplete picture for some highly paid jobs.

For high-paying jobs in the technology industry, equity compensation in the form of restricted stock units can make up a large percentage of an employee’s take-home pay. In industries such as finance, bonuses make up a big portion of annual pay.

[…]

The new law also allows companies to provide wide pay ranges, sometimes spanning $100,000 or more between the lowest and highest salary for a position. That seemingly violates the spirit of the law, but companies say the ranges are realistic because base pay can vary widely depending on skills, qualifications, experience and location.

[…]

Some California companies are not listing salaries for jobs clearly intended to be performed in other states, but advocates hope California’s new law could spark more salary disclosures around the country. After all, a job listing with an explicit starting salary or range is likely to attract more candidates than one with unclear pay.

[…]

Source: Here’s how much top tech jobs in California pay, according to job ads

Connected car security is very poor – fortunately they do actually take it seriously, fix bugs quickly

Multiple bugs affecting millions of vehicles from almost all major car brands could allow miscreants to perform any manner of mischief — in some cases including full takeovers — by exploiting vulnerabilities in the vehicles’ telematic systems, automotive APIs and supporting infrastructure, according to security researchers.

Specifically, the vulnerabilities affect Mercedes-Benz, BMW, Rolls-Royce, Ferrari, Ford, Porsche, Toyota, Jaguar and Land Rover, plus fleet management company Spireon and digital license plate company Reviver.

The research builds on Yuga Labs’ Sam Curry’s earlier car hacking expeditions that uncovered flaws affecting Hyundai and Genesis vehicles, as well as Hondas, Nissans, Infinitis and Acuras via an authorization flaw in Sirius XM’s Connected Vehicle Services.

All of the bugs have since been fixed.

“The affected companies all fixed the issues within one or two days of reporting,” Curry told The Register. “We worked with all of them to validate them and make sure there weren’t any bypasses.”

[…]

Curry and the team discovered multiple vulnerabilities, including SQL injection and authorization bypass flaws, that let them achieve remote code execution across all of Spireon and fully take over any fleet vehicle.

“This would’ve allowed us to track and shut off starters for police, ambulances, and law enforcement vehicles for a number of different large cities and dispatch commands to those vehicles,” the researchers wrote.

The bugs also gave them full administrator access to Spireon and a company-wide administration panel from which an attacker could send arbitrary commands to all 15 million vehicles, thus remotely unlocking doors, honking horns, starting engines […]
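The article doesn’t describe Spireon’s exact flaws, but SQL injection as a class is easy to demonstrate. A minimal sketch using Python’s sqlite3, with a hypothetical table and fleet names rather than Spireon’s real schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE vehicles (id INTEGER, owner TEXT)")
conn.execute("INSERT INTO vehicles VALUES (1, 'fleet_a'), (2, 'fleet_b')")

def vulnerable_query(owner: str):
    # String interpolation lets attacker input rewrite the SQL itself.
    return conn.execute(
        f"SELECT id FROM vehicles WHERE owner = '{owner}'"
    ).fetchall()

def safe_query(owner: str):
    # Parameterized queries keep input as data, never as SQL.
    return conn.execute(
        "SELECT id FROM vehicles WHERE owner = ?", (owner,)
    ).fetchall()

payload = "' OR '1'='1"              # classic injection payload
print(vulnerable_query(payload))     # every row leaks: [(1,), (2,)]
print(safe_query(payload))           # nothing matches: []
```

Combined with an authorization bypass, a leak like the first query is how a single web bug escalates into control over an entire fleet backend.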

[…]

With Ferrari, the researchers found overly permissive access controls that allowed them to access JavaScript code for several internal applications. The code contained API keys and credentials that could have allowed attackers to access customer records and take over (or delete) customer accounts.

[…]

A misconfigured single sign-on (SSO) portal for all employees and contractors of BMW, which owns Rolls-Royce, would have allowed access to any application behind the portal.

[…]

A misconfigured SSO system for Mercedes-Benz allowed the researchers to create a user account on a website intended for vehicle repair shops to request specific tools. They then used this account to sign in to the Mercedes-Benz GitHub, which held internal documentation and source code for various Mercedes-Benz projects, including its Me Connect app used by customers to remotely connect to their vehicles.

The researchers reported this vulnerability to the automaker, noting that Mercedes-Benz “seemed to misunderstand the impact” and wanted further details about why this was a problem.

So the team used their newly created account credentials to log in to several applications containing sensitive data. Then they “achieved remote code execution via exposed actuators, spring boot consoles, and dozens of sensitive internal applications used by Mercedes-Benz employees.”

One of these was the carmaker’s version of Slack. “We had permission to join any channel, including security channels, and could pose as a Mercedes-Benz employee who could ask whatever questions necessary for an actual attacker to elevate their privileges across the Benz infrastructure,” the researchers explained.

A Mercedes-Benz spokesperson confirmed that Curry contacted the company about the vulnerability and that it had been fixed.

[…]

The researchers also found vulnerabilities affecting Porsche’s telematics service that allowed them to remotely retrieve vehicle location and send vehicle commands.

Plus, they found an access-control vulnerability in the Toyota Financial app that disclosed the name, phone number, email address, and loan status of any customer. Toyota Motor Credit told The Register that it fixed the issue.

[…]

Source: Here’s how to remotely takeover a Ferrari…account, that is • The Register

We Found Subscription Menus in Our BMW Test Car. And other models have different subscriptions. WTF BMW?

[…]

We were recently playing in the menus of a 2023 BMW X1 when we came across a group of screens offering exactly that sort of subscription. BMW TeleService and Remote Software Upgrade showed a message that read Activated, while BMW Drive Recorder had options to subscribe for one month, one year, three years, or “Unlimited.” Reactions from the Car and Driver staff were swift and emotional. One staff member responded to the menus with a vomiting emoji, while another likened the concept to a video-game battle pass.

We reached out to BMW to ask about the menus we found and to learn more about its plan for future subscriptions. The company replied that it doesn’t post a comprehensive list of prices online because of variability in what each car can receive. “Upgrade availability depends on factors such as model year, equipment level, and software version, so this keeps things more digestible for consumers,” explained one BMW representative.

Our X1, for example, has an optional $25-per-year charge for traffic camera alerts, but that option isn’t available to cars without BMW Live Cockpit. Instead of listing all the available options online, owners can see which subscriptions are available for their car either in the menus of the vehicle itself or from a companion app.

[…]

BMW USA may not want to confuse its customers by listing all its options in one place, but BMW Australia has no such reservations. In the land down under, heated front seats and a heated steering wheel are available in a month-to-month format, as is BMW’s parking assistant technology. In contrast, BMW USA released a statement in July saying that if a U.S.-market vehicle is ordered with heated seats from the factory, that option will remain functional throughout the life of the vehicle.

[…]

In 2019, BMW announced it would charge customers $80 per year for wireless Apple CarPlay. After considerable public backlash, BMW walked back the decision and instead offered the technology for free. BMW is wading into mostly uncharted waters here. The court of public opinion forced BMW to reverse a subscription in the past. If people decide these newer subscriptions are as egregious as the old ones, will they force BMW back again? Or will they instead stick to automakers who sell features outright?

Source: We Found Subscription Menus in Our BMW Test Car. Is That Bad?

If the hardware is there, then you bought it and should be allowed to have it. If it’s externally processed data (e.g. an updated database of streets and traffic cameras), then a subscription is fine.

John Deere signs right to repair agreement

As farming has become more technology-driven, Deere has increasingly injected software into its products with all of its tractors and harvesters now including an autopilot feature as standard.

There is also the John Deere Operations Center, which “instantly captures vital operational data to boost transparency and increase productivity for your business.”

Within a matter of years, the company envisages having 1.5 million machines and half a billion acres of land connected to the cloud service, which will “collect and store crop data, including millions of images of weeds that can be targeted by herbicide.”

Deere also estimates that software fees will make up 10 percent of the company’s revenues by the end of the decade, with Bernstein analysts pegging the average gross margin for farming software at 85 percent, compared to 25 percent for equipment sales.

Just like other commercial software vendors, however, Deere exercises close control and restricts what can be done with its products. This led farm labor advocacy groups to file a complaint to the US Federal Trade Commission last year, claiming that Deere unlawfully refused to provide the software and technical data necessary to repair its machinery.

“Deere is the dominant force in the $68 billion US agricultural equipment market, controlling over 50 per cent of the market for large tractors and combines,” said Fairmark Partners, the groups’ attorneys, in a preface to the complaint [PDF].

“For many farmers and ranchers, they effectively have no choice but to purchase their equipment from Deere. Not satisfied with dominating just the market for equipment, Deere has sought to leverage its power in that market to monopolize the market for repairs of that equipment, to the detriment of farmers, ranchers, and independent repair providers.”

[…]

The memorandum of understanding (MoU), which can be read here [PDF], was signed yesterday at the 2023 AFBF Convention in San Juan, Puerto Rico, and seems to be a commitment by Deere to improve farmers’ access and choice when it comes to repairs.

[…]

Duvall said on a podcast about the matter that the MoU is the result of several years’ work. “As you use equipment, we all know at some point in time, there’s going to be problems with it. And we did have problems with having the opportunity to repair our equipment where we wanted to, or even repair it on the farm,” he added.

“It ensures that our farmers can repair their equipment and have access to the diagnostic tools and product guides so that they can find the problems and find solutions for them. And this is the beginning of a process that we think is going to be real healthy for our farmers and for the company because what it does is it sets up an opportunity for our farmers to really work with John Deere on a personal basis.”

[…]

Source: John Deere signs right to repair agreement • The Register

But… still gives John Deere access to their data for free?

This may also have something to do with the security of John Deere machines being so incredibly piss poor, mainly due to really bad update hygiene.

DoNotPay Offers $1M for Its AI to Argue Before Supreme Court

[…]

“DoNotPay will pay any lawyer or person $1,000,000 with an upcoming case in front of the United States Supreme Court to wear AirPods and let our robot lawyer argue the case by repeating exactly what it says,” Browder wrote on Twitter on Sunday night. “[W]e are making this serious offer, contingent on us coming to a formal agreement and all rules being followed.”

[…]

Although DoNotPay’s robot lawyer is set to make its debut in a U.S. courtroom next month to help someone contest a parking ticket, Browder wants the robot to go before the Supreme Court to address hypothetical skepticism about its abilities.

“We have upcoming cases in municipal (traffic) court next month. But the haters will say ‘traffic court is too simple for GPT,’” Browder tweeted.

[…]

DoNotPay started out as a simple chatbot back in 2015 to help people resolve basic but infuriating scenarios, such as canceling subscriptions or appealing parking tickets. In recent years, the company used AI to ramp up its robot lawyer’s capabilities, equipping it to dispute medical bills and successfully negotiate with Comcast.

[…]

Source: DoNotPay Offers $1M for Its AI to Argue Before Supreme Court

Gizmodo is incredibly disparaging of this idea, but they often are when faced with the future. And the legal profession is one of those in the most direct firing line of AI.

Meet GPTZero: The AI-Powered AI-Plagiarism Detection Program

[…]

Edward Tian, a college student studying computer science and journalism at Princeton University, recently created an app called GPTZero to help detect whether a text was written by AI or a human. The motivation behind the app was to help combat increasing AI plagiarism.

[…]

To analyze text, GPTZero uses metrics such as perplexity and burstiness. Perplexity measures how predictable the text is to a language model, while burstiness measures how much that predictability varies from sentence to sentence — human prose tends to mix long, complex sentences with short ones, while AI output is more uniform. Together, these let GPTZero estimate whether an essay was written by a human or by ChatGPT.
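GPTZero’s model isn’t public, but the intuition behind both metrics can be sketched with a toy unigram model. The corpus, smoothing, and scoring below are illustrative assumptions, not GPTZero’s implementation:

```python
import math
from collections import Counter
from statistics import pstdev

def unigram_perplexity(text: str, corpus: str) -> float:
    """Toy perplexity: average surprise of each word under a unigram
    model trained on `corpus`, with add-one smoothing."""
    counts = Counter(corpus.lower().split())
    total = sum(counts.values())
    vocab = len(counts) + 1  # extra slot for unseen words
    words = text.lower().split()
    log_prob = sum(math.log((counts[w] + 1) / (total + vocab)) for w in words)
    return math.exp(-log_prob / len(words))

def burstiness(sentences: list[str], corpus: str) -> float:
    """Toy burstiness: spread of per-sentence perplexity. Human prose
    tends to vary more sentence-to-sentence than model output."""
    return pstdev(unigram_perplexity(s, corpus) for s in sentences)

corpus = "the cat sat on the mat the dog sat on the rug"
# Familiar phrasing scores low perplexity; out-of-vocabulary text scores high.
print(unigram_perplexity("the cat sat", corpus))            # ~6.4
print(unigram_perplexity("quantum flux capacitor", corpus)) # 20.0
```

A real detector would use a large language model’s token probabilities rather than unigram counts, but the decision signal — low, uniform perplexity suggests machine text — is the same.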

[…]

Source: Meet GPTZero: The AI-Powered Anti-Plagiarism Program | by Liquid Ocelot | InkWater Atlas | Jan, 2023 | Medium

Of course, universities are working with AI developments instead of trying to stop them: University students are using AI to write essays. Teachers are learning how to embrace that

Edit 16/7/23 – Of course you have GPT minus 1, which takes your GPT output and scrambles it so that these GPT checkers can’t recognise it any more.

LastPass is being sued following major cyberattack

[…]

According to the class action complaint filed in a Massachusetts court, names, usernames, billing addresses, email addresses, telephone numbers, and even the IP addresses used to access the service were all made available to wrongdoers.

The final straw could have been the leak of customers’ unencrypted vault data, which includes all manner of information ranging from website usernames and passwords to other secure notes and form data.

According to the lawsuit, “LastPass understood and appreciated the value of this Information yet chose to ignore it by failing to invest in adequate data security measures”.

The case’s plaintiff claims to have invested $53,000 in Bitcoin since July 2022, which was “stolen” several months later, leading to police and FBI reports.

[…]

Source: LastPass is being sued following major cyberattack

There are more articles about LastPass on this blog. It seems they did not take their security quite as seriously as they led us to believe.

Startup Claims It’s Sending Sulfur Into the Atmosphere to Fight Climate Change

A startup says it has begun releasing sulfur particles into Earth’s atmosphere, in a controversial attempt to combat climate change by deflecting sunlight. Make Sunsets, a company that sells carbon offset “cooling credits” for $10 each, is banking on solar geoengineering to cool down the planet and fill its coffers. The startup claims it has already released two test balloons, each filled with about 10 grams of sulfur particles and intended for the stratosphere, according to the company’s website and first reported on by MIT Technology Review.

The concept of solar geoengineering is simple: Add reflective particles to the upper atmosphere to reduce the amount of sunlight that penetrates from space, thereby cooling Earth. It’s an idea inspired by the atmospheric side effects of major volcanic eruptions, which have led to drastic, temporary climate shifts multiple times throughout history, including the notorious “year without a summer” of 1816.

Yet effective and safe implementation of the idea is much less simple. Scientists and engineers have been studying solar geoengineering as a potential climate change remedy for more than 50 years. But almost nobody has actually enacted real-world experiments because of the associated risks, like rapid changes in our planet’s precipitation patterns, damage to the ozone layer, and significant geopolitical ramifications.

[…]

If and when we get enough sulfur into the atmosphere to meaningfully cool Earth, we’d have to keep adding new particles indefinitely to avoid entering an era of climate change about four to six times worse than what we’re currently experiencing, according to one 2018 study. Sulfur aerosols don’t stick around very long. Their lifespan in the stratosphere is somewhere between a few days and a couple of years, depending on particle size and other factors.

[…]

Rogue agents independently deciding to impose geoengineering on the rest of us has been a concern for as long as the thought of intentionally manipulating the atmosphere has been around. The Pentagon even has dedicated research teams working on methods to detect and combat such clandestine attempts. But effectively defending against solar geoengineering is much more difficult than just doing it.

In Iseman’s rudimentary first trials, he says he released two weather balloons full of helium and sulfur aerosols somewhere in Baja California, Mexico. The founder told MIT Technology Review that the balloons rose toward the sky but, beyond that, he doesn’t know what happened to them, as the balloons lacked tracking equipment. Maybe they made it to the stratosphere and released their payload, maybe they didn’t.

[…]

Iseman and Make Sunsets claim that a single gram of sulfur aerosols counteracts the warming effects of one ton of CO2. But there is no clear scientific basis for such an assertion, geoengineering researcher Shuchi Talati told the outlet. And so the $10 “cooling credits” the company is hawking are likely bunk (along with most carbon credit/offset schemes).

Even if the balloons made it to the stratosphere, the small amount of sulfur released wouldn’t be enough to trigger significant environmental effects, David Keith told MIT Technology Review.

[…]

The solution to climate change is almost certainly not a single maverick “disrupting” the composition of Earth’s stratosphere. But that hasn’t stopped Make Sunsets from reportedly raising nearly $750,000 in funds from venture capital firms. And for just ~$29,250,000 more per year, the company claims it can completely offset current warming. It’s not a bet we recommend taking.
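
The company’s own advertised numbers are easy to unpack. A minimal sketch, taking the figures from the article at face value; note that the one-gram-per-ton equivalence is Make Sunsets’ claim, which the researchers quoted above dispute:

```python
# Unpacking Make Sunsets' advertised numbers (the company's claims, not
# verified science; researchers quoted in the article dispute them).
PRICE_PER_CREDIT_USD = 10          # one "cooling credit"
GRAMS_SULFUR_PER_CREDIT = 1        # claimed to offset 1 ton of CO2 warming
ANNUAL_COST_USD = 29_250_000       # claimed yearly cost to offset current warming

credits_per_year = ANNUAL_COST_USD // PRICE_PER_CREDIT_USD
sulfur_tonnes_per_year = credits_per_year * GRAMS_SULFUR_PER_CREDIT / 1_000_000
co2_tons_offset_claimed = credits_per_year  # 1 credit <-> 1 ton CO2, per the claim

print(f"{credits_per_year:,} credits/year")            # 2,925,000
print(f"{sulfur_tonnes_per_year} tonnes sulfur/year")  # under 3 tonnes lofted annually
print(f"{co2_tons_offset_claimed:,} tons CO2 'offset' (claimed)")
```

For scale, annual global CO2 emissions are on the order of 37 billion tons, so even taking the claim at face value the scheme would address well under 0.01 percent of a single year’s emissions.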

Source: Startup Claims It’s Sending Sulfur Into the Atmosphere to Fight Climate Change

University students are using AI to write essays. Teachers are learning how to embrace that

As word of students using AI to automatically complete essays continues to spread, some lecturers are beginning to rethink how they should teach their pupils to write.

Writing is a difficult task to do well. The best novelists and poets write furiously, dedicating their lives to mastering their craft. The creative process of stringing together words to communicate thoughts is often viewed as something complex, mysterious, and unmistakably human. No wonder people are fascinated by machines that can write too.

[…]

Although AI can generate text with perfect spelling, great grammar and syntax, the content often isn’t that good beyond a few paragraphs. The writing becomes less coherent over time with no logical train of thought to follow. Language models fail to get their facts right – meaning quotes, dates, and ideas are likely false. Students will have to inspect the writing closely and correct mistakes for their work to be convincing.

Prof: AI-assisted essays ‘not good’

Scott Graham, associate professor at the Department of Rhetoric & Writing at the University of Texas at Austin, tasked his pupils with writing a 2,200-word essay about a campus-wide issue using AI. Students were free to lightly edit and format their work with the only rule being that most of the essay had to be automatically generated by software.

In an opinion article on Inside Higher Ed, Graham said the AI-assisted essays were “not good,” noting that the best of the bunch would have earned a C or C-minus grade. To score higher, students would have had to rewrite more of the essay using their own words to improve it, or craft increasingly narrow and specific prompts to get back more useful content.

“You’re not going to be able to push a button or submit a short prompt and generate a ready-to-go essay,” he told The Register.

[…]

“I think if students can do well with AI writing, it’s not actually all that different from them doing well with their own writing. The main skills I teach and assess mostly happen after the initial drafting,” he said.

“I think that’s where people become really talented writers; it’s in the revision and the editing process. So I’m optimistic about [AI] because I think that it will provide a framework for us to be able to teach that revision and editing better.

“Some students have a lot of trouble sometimes generating that first draft. If all the effort goes into getting them to generate that first draft, and then they hit the deadline, that’s what they will submit. They don’t get a chance to revise, they don’t get a chance to edit. If we can use those systems to speed write the first draft, it might really be helpful,” he opined.

[…]

Listicles, informal blog posts, or news articles will be easier to imitate than niche academic papers or literary masterpieces. Teachers will need to be thoughtful about the essay questions they set and make sure students’ knowledge is really being tested, if they don’t want them to cut corners.

[…]

“The onus now is on writing teachers to figure out how to get to the same kinds of goals that we’ve always had about using writing to learn. That includes students engaging with ideas, teaching them how to formulate thoughts, how to communicate clearly or creatively. I think all of those things can be done with AI systems, but they’ll be done differently.”

The line between using AI as a collaborative tool or a way to cheat, however, is blurry. None of the academics teaching writing who spoke to The Register thought students should be banned from using AI software. “Writing is fundamentally shaped by technology,” Vee said.

“Students use spell check and grammar check. If I got a paper where a student didn’t use these, it stands out. But it used to be, 50 years ago, writing teachers would complain that students didn’t know how to spell so they would teach spelling. Now they don’t.”

Most teachers, however, told us they would support regulating the use of AI-writing software in education.

[…]

Mills was particularly concerned about AI reducing the need for people to think for themselves, considering language models carry forward biases in their training data. “Companies have decided what to feed it and we don’t know. Now, they are being used to generate all sorts of things from novels to academic papers, and they could influence our thoughts or even modify them. That is an immense power, and it’s very dangerous.”

Lauren Goodlad, professor of English and Comparative Literature at Rutgers University, agreed. If they parrot what AI comes up with, students may end up more likely to associate Muslims with terrorism or mention conspiracy theories, for example.

[…]

“As teachers, we are experimenting, not panicking,” Monroe told The Register.

“We want to empower our students as writers and thinkers. AI will play a role… This is a time of exciting and frenzied development, but educators move more slowly and deliberately… AI will be able to assist writers at every stage, but students and teachers will need tools that are thoughtfully calibrated.”

[…]


Source: University students are using AI to write essays. Now what? • The Register

FSF Warns: Stay Away From iPhones, Amazon, Netflix, and Music Streaming Services

For the last thirteen years the Free Software Foundation has published its Ethical Tech Giving Guide. What’s interesting is that this year’s guide also tags companies and products with negative recommendations to “stay away from.”

Stay away from: iPhones
It’s not just Siri that’s creepy: all Apple devices contain software that’s hostile to users. Although they claim to be concerned about user privacy, they don’t hesitate to put their users under surveillance.

Apple prevents you from installing third-party free software on your own phone, and they use this control to censor apps that compete with or subvert Apple’s profits.

Apple has a history of exploiting their absolute control over their users to silence political activists and help governments spy on millions of users.

Stay away from: M1 MacBook and MacBook Pro
macOS is proprietary software that restricts its users’ freedoms.

In November 2020, macOS was caught alerting Apple each time a user opens an app. Even though Apple is making changes to the service, it just goes to show how badly they behave until there is an outcry.

Comes crawling with spyware that rats you out to advertisers.

Stay away from: Amazon
Amazon is one of the most notorious DRM offenders. They use this Orwellian control over their devices and services to spy on users and keep them trapped in their walled garden.

Be aware that Amazon isn’t the only peddler of ebook DRM. Disturbingly, it’s enthusiastically supported by most of the big publishing houses.

Read more about the dangers of DRM through our Defective by Design campaign.

Stay away from: Spotify, Apple Music, and all other major streaming services
In addition to streaming music encumbered by DRM, people who want to use Spotify are required to install additional proprietary software. Even Spotify’s client for GNU/Linux relies on proprietary software.

Apple Music is no better, and places heavy restrictions on the music streamed through the platform.

Stay away from: Netflix
Netflix is continuing its disturbing trend of making onerous DRM the norm for streaming media. That’s why they were a target for last year’s International Day Against DRM (IDAD).

They’re also leveraging their place in the Motion Picture Association of America (MPAA) to advocate for tighter restrictions on users, and drove the effort to embed DRM into the fabric of the Web.

“In your gift giving this year, put freedom first,” their guide begins.

And for a freedom-respecting last-minute gift idea, they suggest giving the gift of an FSF membership (which comes with a code and a printable page “so that you can present your gift as a physical object, if you like”). The membership is valid for one year and includes the many benefits of FSF associate membership: a USB member card, email forwarding, access to the FSF’s Jitsi Meet videoconferencing server and member forum, discounts in the FSF shop and on ThinkPenguin hardware, and more.

If you are in the United States, your gift is also fully tax-deductible.

Source: FSF Warns: Stay Away From iPhones, Amazon, Netflix, and Music Streaming Services – Slashdot

Mickey’s Copyright Adventure: Early Disney Creation Will Soon Be Public Property – finally. What lawsuits lie in wait?

The version of the iconic character from “Steamboat Willie” will enter the public domain in 2024. But those trying to take advantage could end up in a legal mousetrap. From a report: There is nothing soft and cuddly about the way Disney protects the characters it brings to life. This is a company that once forced a Florida day care center to remove an unauthorized Minnie Mouse mural. In 2006, Disney told a stonemason that carving Winnie the Pooh into a child’s gravestone would violate its copyright. The company pushed so hard for an extension of copyright protections in 1998 that the result was derisively nicknamed the Mickey Mouse Protection Act. For the first time, however, one of Disney’s marquee characters — Mickey himself — is set to enter the public domain.

“Steamboat Willie,” the 1928 short film that introduced Mickey to the world, will lose copyright protection in the United States and a few other countries at the end of next year, prompting fans, copyright experts and potential Mickey grabbers to wonder: How is the notoriously litigious Disney going to respond? “I’m seeing in Reddit forums and on Twitter where people — creative types — are getting excited about the possibilities, that somehow it’s going to be open season on Mickey,” said Aaron J. Moss, a partner at Greenberg Glusker in Los Angeles who specializes in copyright and trademark law. “But that is a misunderstanding of what is happening with the copyright.” The matter is more complicated than it appears, and those who try to capitalize on the expiring “Steamboat Willie” copyright could easily end up in a legal mousetrap. “The question is where Disney tries to draw the line on enforcement,” Mr. Moss said, “and if courts get involved to draw that line judicially.”

Only one copyright is expiring. It covers the original version of Mickey Mouse as seen in “Steamboat Willie,” an eight-minute short with little plot. This nonspeaking Mickey has a rat-like nose, rudimentary eyes (no pupils) and a long tail. He can be naughty. In one “Steamboat Willie” scene, he torments a cat. In another, he uses a terrified goose as a trombone. Later versions of the character remain protected by copyrights, including the sweeter, rounder Mickey with red shorts and white gloves most familiar to audiences today. They will enter the public domain at different points over the coming decades. “Disney has regularly modernized the character, not necessarily as a program of copyright management, at least initially, but to keep up with the times,” said Jane C. Ginsburg, an authority on intellectual property law who teaches at Columbia University.

Source: Mickey’s Copyright Adventure: Early Disney Creation Will Soon Be Public Property – Slashdot

How it’s remotely possible that a company is capitalising on a thought someone had around 100 years ago is beyond me.

The LastPass disclosure of leaked password vaults is being torn apart by security experts

Last week, just before Christmas, LastPass dropped a bombshell announcement: as the result of a breach in August, which led to another breach in November, hackers had gotten their hands on users’ password vaults. While the company insists that your login information is still secure, some cybersecurity experts are heavily criticizing its post, saying that it could make people feel more secure than they actually are and pointing out that this is just the latest in a series of incidents that make it hard to trust the password manager.

LastPass’ December 22nd statement was “full of omissions, half-truths and outright lies,” reads a blog post from Wladimir Palant, a security researcher known, among other things, for originally developing Adblock Plus. Some of his criticisms deal with how the company has framed the incident and how transparent it’s being; he accuses the company of trying to portray the August incident, where LastPass says “some source code and technical information were stolen,” as a separate breach when he says that in reality the company “failed to contain” the breach.

He also highlights LastPass’ admission that the leaked data included “the IP addresses from which customers were accessing the LastPass service,” saying that could let the threat actor “create a complete movement profile” of customers if LastPass was logging every IP address you used with its service.

Another security researcher, Jeremi Gosney, wrote a long post on Mastodon explaining his recommendation to move to another password manager. “LastPass’s claim of ‘zero knowledge’ is a bald-faced lie,” he says, alleging that the company has “about as much knowledge as a password manager can possibly get away with.”

LastPass claims its “zero knowledge” architecture keeps users safe because the company never has access to your master password, which is the thing that hackers would need to unlock the stolen vaults. While Gosney doesn’t dispute that particular point, he does say that the phrase is misleading. “I think most people envision their vault as a sort of encrypted database where the entire file is protected, but no — with LastPass, your vault is a plaintext file and only a few select fields are encrypted.”
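
To make that criticism concrete, here is a schematic sketch of what a partially encrypted vault looks like. This is an illustration of the general pattern, not LastPass’s actual file format, and the “encryption” function is a dependency-free placeholder, not real cryptography:

```python
import json

# Illustration only: NOT LastPass's real format. fake_encrypt is a
# stand-in placeholder so the example runs without dependencies; a real
# vault would use authenticated encryption such as AES-GCM.
def fake_encrypt(plaintext: str) -> str:
    return plaintext[::-1]  # trivially reversible stand-in, not crypto

vault = [
    {
        "url": "https://bank.example.com",    # stored in plaintext
        "last_used": "2022-12-01",            # stored in plaintext
        "username": fake_encrypt("alice"),    # one of the encrypted fields
        "password": fake_encrypt("hunter2"),  # one of the encrypted fields
    },
]

serialized = json.dumps(vault)
# Anyone holding the stolen file can read which sites you use and when,
# even without ever cracking the master password:
assert "bank.example.com" in serialized
assert "hunter2" not in serialized
```

The point of the sketch is the metadata leak: the unencrypted fields alone tell an attacker where you bank and which accounts are worth targeting.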

Palant also notes that the encryption only does you any good if the hackers can’t crack your master password, which is LastPass’ main defense in its post: if you use its defaults for password length and strengthening and haven’t reused it on another site, “it would take millions of years to guess your master password using generally-available password-cracking technology” wrote Karim Toubba, the company’s CEO.

“This prepares the ground for blaming the customers,” writes Palant, saying that “LastPass should be aware that passwords will be decrypted for at least some of their customers. And they have a convenient explanation already: these customers clearly didn’t follow their best practices.” However, he also points out that LastPass hasn’t necessarily enforced those standards. Despite the fact that it made 12-character passwords the default in 2018, Palant says, “I can log in with my eight-character password without any warnings or prompts to change it.”

LastPass’ post has even elicited a response from a competitor, 1Password — on Wednesday, the company’s principal security architect Jeffrey Goldberg wrote a post for its site titled “Not in a million years: It can take far less to crack a LastPass password.” In it, Goldberg calls LastPass’ claim of it taking a million years to crack a master password “highly misleading,” saying that the statistic appears to assume a 12-character, randomly generated password. “Passwords created by humans come nowhere near meeting that requirement,” he writes, saying that threat actors would be able to prioritize certain guesses based on how people construct passwords they can actually remember.

Of course, a competitor’s word should probably be taken with a grain of salt, though Palant echoes a similar idea in his post — he claims the viral XKCD method of creating passwords would take around 3 years to guess with a single GPU, while some 11-character passwords (that many people may consider to be good) would only take around 25 minutes to crack with the same hardware. It goes without saying that a motivated actor trying to crack into a specific target’s vault could probably throw more than one GPU at the problem, potentially cutting that time down by orders of magnitude.
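
The back-of-envelope arithmetic behind estimates like these is simply keyspace divided by guess rate. A minimal sketch, assuming a round illustrative rate of 200,000 guesses per second for one GPU against a slow key-derivation function; that rate is an assumption for the example, not a benchmark:

```python
# Crack-time estimate: keyspace / guesses-per-second.
GUESSES_PER_SEC = 200_000  # assumed round figure, not a measured benchmark

# "XKCD method": four words picked at random from a ~2048-word list.
xkcd_keyspace = 2048 ** 4

seconds = xkcd_keyspace / GUESSES_PER_SEC
years = seconds / (365 * 24 * 3600)
print(f"~{years:.1f} years on one GPU")  # prints ~2.8 years with these assumptions
```

With those assumed numbers the result lands near Palant’s roughly-3-years figure, and the formula makes the scaling obvious: ten GPUs divide the time by ten.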

Both Gosney and Palant take issue with LastPass’ actual cryptography too, though for different reasons. Gosney accuses the company of basically committing “every ‘crypto 101’ sin” with how its encryption is implemented and how it manages data once it’s been loaded into your device’s memory.

Meanwhile, Palant criticizes the company’s post for painting its password-strengthening algorithm, known as PBKDF2, as “stronger-than-typical.” The idea behind the standard is that it makes it harder to brute-force guess your passwords, as you’d have to perform a certain number of calculations on each guess. “I seriously wonder what LastPass considers typical,” writes Palant, “given that 100,000 PBKDF2 iterations are the lowest number I’ve seen in any current password manager.”
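
PBKDF2 itself is a standard primitive, available in Python’s standard library, and the iteration count is exactly the work factor being argued about: every guess, legitimate or brute-force, must repeat the underlying hash that many times. A minimal sketch:

```python
import hashlib
import os

password = b"correct horse battery staple"
salt = os.urandom(16)  # per-user random salt

# Derive a 32-byte key with 100,000 iterations. Raising the iteration
# count slows each attacker guess by the same linear factor.
key = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000, dklen=32)

# The same inputs always derive the same key (that is how a vault is
# unlocked)...
assert key == hashlib.pbkdf2_hmac("sha256", password, salt, 100_000, dklen=32)
# ...while a different salt yields an unrelated key, defeating
# precomputed (rainbow-table) attacks.
assert key != hashlib.pbkdf2_hmac("sha256", password, os.urandom(16), 100_000, dklen=32)
```

This is why the iteration count matters in the dispute above: a vault stored with a low count can be guessed at proportionally higher speed with the same hardware.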

[…]

Source: The LastPass disclosure of leaked password vaults is being torn apart by security experts – The Verge

EarSpy: Spying on Phone Calls via Ear Speaker Vibrations Captured by Accelerometer

As smartphone manufacturers improve the ear speakers in their devices, it becomes easier for malicious actors to leverage a particular side channel for eavesdropping on a targeted user’s conversations, according to a team of researchers from several universities in the United States.

The attack method, named EarSpy, is described in a paper published just before Christmas by researchers from Texas A&M University, Temple University, New Jersey Institute of Technology, Rutgers University, and the University of Dayton.

EarSpy relies on the phone’s ear speaker — the speaker at the top of the device that is used when the phone is held to the ear — and the device’s built-in accelerometer for capturing the tiny vibrations generated by the speaker.

[…]

Android security has improved significantly and it has become increasingly difficult for malware to obtain the required permissions.

On the other hand, accessing raw data from the motion sensors in a smartphone does not require any special permissions. Android developers have started placing some restrictions on sensor data collection, but the EarSpy attack is still possible, the researchers said.

A piece of malware planted on a device could use the EarSpy attack to capture potentially sensitive information and send it back to the attacker.

[…]

The researchers discovered that attacks such as EarSpy are becoming increasingly feasible due to the improvements manufacturers have made to ear speakers. They ran tests on two Android handsets, the OnePlus 7T and the OnePlus 9, and found that the accelerometer captures significantly more data from the ear speaker in these newer models, which have stereo speakers, than in older OnePlus phones, which did not.

The experiments conducted by the academic researchers analyzed the reverberation effect of ear speakers on the accelerometer by extracting time-frequency domain features and spectrograms. The analysis focused on gender recognition, speaker recognition, and speech recognition.

In the gender recognition test, whose goal is to determine whether the target is male or female, the EarSpy attack achieved 98% accuracy. The accuracy was nearly as high, at 92%, for detecting the speaker’s identity.

When it comes to actual speech, the accuracy was up to 56% for capturing digits spoken in a phone call.

Source: EarSpy: Spying on Phone Calls via Ear Speaker Vibrations Captured by Accelerometer

ETSI’s Activities in Artificial Intelligence: White Paper

[…]

This White Paper, entitled ETSI Activities in the field of Artificial Intelligence, supports all stakeholders and summarizes ongoing efforts in ETSI as well as planned future activities. It also includes an analysis of how ETSI deliverables may support current policy initiatives in the field of artificial intelligence. One section of the document outlines ETSI activities relevant to addressing Societal Challenges in AI, while another addresses the involvement of the European Research Community.

AI activities in ETSI also rely on a unique testing experts’ community to ensure independently verifiable and repeatable testing of essential requirements in the field of AI. ETSI engages with its highly recognised Human Factors community to develop solutions on Human Oversight of AI systems.

AI requires many distinct kinds of expertise, and often AI is not the end goal but a means to achieve it. For this reason, ETSI has chosen a distributed approach to AI: specialized communities meet in technically focused groups. Examples include the technical committee Cyber, with a specific focus on cybersecurity aspects; ISG SAI, working towards securing AI systems; and ISG ENI, dealing with the question of how to integrate AI into a network architecture. These are three of the thirteen groups currently working on AI-related technologies within ETSI. The first initiative dates back to 2016, with the publication of a White Paper describing GANA (the Generic Autonomic Networking Architecture).

[…]

Source: ETSI – ETSI’s Activities in Artificial Intelligence: Read our New White Paper

Two people charged with hacking Ring security cameras to livestream swattings

In a reminder of smart home security’s dark side, two people hacked Ring security cameras to livestream swattings, according to a Los Angeles grand jury indictment reported by Bloomberg. The pair called in hoax emergencies to authorities and livestreamed the police response on social media in late 2020.

James Thomas Andrew McCarty, 20, of Charlotte, North Carolina, and Kya Christian Nelson, 21, of Racine, Wisconsin, hacked into Yahoo email accounts to gain access to 12 Ring cameras across nine states in November 2020 (disclaimer: Yahoo is Engadget’s parent company). In one of the incidents, Nelson claimed to be a minor reporting their parents for firing guns while drinking alcohol. When police arrived, the pair used the Ring cameras to taunt the victims and officers while livestreaming — a pattern appearing in several incidents, according to prosecutors.

[…]

Although the smart devices can deter things like robberies and “porch pirates,” Amazon admits to providing footage to police without user consent or a court order when it believes someone is in danger. Inexplicably, the tech giant made a zany reality series using Ring footage, which didn’t exactly quell concerns about the tech’s Orwellian side.

Source: Two people charged with hacking Ring security cameras to livestream swattings | Engadget

Amazing that people don’t realise that Amazon is creating a total and constant surveillance system with hardware that you paid for.

Meta agrees to $725 million settlement in Cambridge Analytica class-action lawsuit

It’s been four years since Facebook became embroiled in its biggest scandal to date: Cambridge Analytica. In addition to paying $5 billion to the Federal Trade Commission in a settlement, the social network has just agreed to pay $725 million to settle a long-running class-action lawsuit, making it the biggest settlement ever in a privacy case.

To recap, a whistleblower revealed in 2018 that now-defunct British political consulting firm Cambridge Analytica harvested the personal data of almost 90 million users without their consent for targeted political ads during the 2016 US presidential campaign and the UK’s Brexit referendum.

The controversy led to Mark Zuckerberg testifying before Congress, a $5 billion fine levied on the company by the FTC in July 2019, and a $100 million settlement with the US Securities and Exchange Commission. There was also a class-action lawsuit filed in 2018 on behalf of Facebook users who alleged the company violated consumer privacy laws by sharing private data with other firms.

Facebook parent Meta settled the class action in August, thereby ensuring CEO Mark Zuckerberg, chief operating officer Javier Olivan, and former COO Sheryl Sandberg avoided hours of questioning from lawyers while under oath.

[…]

This doesn’t mark the end of Meta’s dealings with the Cambridge Analytica fallout. Zuckerberg is facing a lawsuit from Washington DC’s attorney general Karl A. Racine over allegations that the Meta boss was personally involved in failures that led to the incident and his “policies enabled a multi-year effort to mislead users about the extent of Facebook’s wrongful conduct.”

Source: Meta agrees to $725 million settlement in Cambridge Analytica class-action lawsuit | TechSpot

OpenAI releases Point-E, an AI that generates 3D point clouds / meshes

[…] This week, OpenAI open sourced Point-E, a machine learning system that creates a 3D object given a text prompt. According to a paper published alongside the code base, Point-E can produce 3D models in one to two minutes on a single Nvidia V100 GPU.

[…]

Outside of the mesh-generating model, which stands alone, Point-E consists of two models: a text-to-image model and an image-to-3D model. The text-to-image model, similar to generative art systems like OpenAI’s own DALL-E 2 and Stable Diffusion, was trained on labeled images to understand the associations between words and visual concepts. The image-to-3D model, on the other hand, was fed a set of images paired with 3D objects so that it learned to effectively translate between the two.

When given a text prompt — for example, “a 3D printable gear, a single gear 3 inches in diameter and half inch thick” — Point-E’s text-to-image model generates a synthetic rendered object that’s fed to the image-to-3D model, which then generates a point cloud.

After training the models on a dataset of “several million” 3D objects and associated metadata, Point-E could produce colored point clouds that frequently matched text prompts, the OpenAI researchers say. It’s not perfect — Point-E’s image-to-3D model sometimes fails to understand the image from the text-to-image model, resulting in a shape that doesn’t match the text prompt.

[…]

Earlier this year, Google released DreamFusion, an expanded version of Dream Fields, a generative 3D system that the company unveiled back in 2021. Unlike Dream Fields, DreamFusion requires no prior training, meaning that it can generate 3D representations of objects without 3D data.

[…]

Source: OpenAI releases Point-E, an AI that generates 3D models | TechCrunch

The Copyright Industry Is About To Discover That There Are Hundreds Of Thousands Of Songs Generated By AI Already Available, Already Popular

You may have noticed the world getting excited about the capabilities of ChatGPT, a text-based AI chatbot. Similarly, some are getting quite worked up over generative AI systems that can turn text prompts into images, including those mimicking the style of particular artists. But less remarked upon is the use of AI in the world of music. Music Business Worldwide has written two detailed news stories on the topic. The first comes from China:

Tencent Music Entertainment (TME) says that it has created and released over 1,000 tracks containing vocals created by AI tech that mimics the human voice.

And get this: one of these tracks has already surpassed 100 million streams.

Some of these songs use synthetic voices based on human singers, both dead and alive:

TME also confirmed today (November 15) that – in addition to “paying tribute” to the vocals of dead artists via the Lingyin Engine – it has also created “an AI singer lineup with the voices of trending [i.e currently active] stars such as Yang Chaoyue, among others”.

The copyright industry will doubtless have something to say about that. It is also unlikely to be delighted by the second Music Business Worldwide story about AI-generated music, this time in the Middle East and North Africa (MENA) market:

MENA-focused Spotify rival, Anghami, is now taking the concept to a whole other level – claiming that it will soon become the first platform to host over 200,000 songs generated by AI.

Anghami has partnered with a generative music platform called Mubert, which says it allows users to create “unique soundtracks” for various uses such as social media, presentations or films using one million samples from over 4,000 musicians.

According to Mohammed Ogaily, VP Product at Anghami, the service has already “generated over 170,000 songs, based on three sets of lyrics, three talents, and 2,000 tracks generated by AI”.

It’s striking that the undoubtedly interesting but theoretical possibilities of ChatGPT and generative AI art are dominating the headlines, while we hear relatively little about these AI-based music services that are already up and running, and hugely popular with listeners. It’s probably a result of the generally parochial nature of mainstream Western media, which often ignores the important developments happening elsewhere.

Source: The Copyright Industry Is About To Discover That There Are Hundreds Of Thousands Of Songs Generated By AI Already Available, Already Popular | Techdirt

AI-Created Comic Has Copyright Protection Revoked by US

The United States Copyright Office (USCO) reversed an earlier decision to grant a copyright to a comic book created using “A.I. art,” announcing that the comic’s copyright protection will be revoked because copyrighted works must be created by humans to gain official protection.

In September, Kris Kashtanova announced that they had received a U.S. copyright on Zarya of the Dawn, a comic book inspired by their late grandmother that they created with the text-to-image engine Midjourney. Kashtanova referred to themselves as a “prompt engineer” and explained at the time that they sought the copyright so that they could “make a case that we do own copyright when we make something using AI.”

[…]

Source: AI-Created Comic Has Been Deemed Ineligible for Copyright Protection

I guess there is no big corporate interest in lobbying for AI-created content – yet – and so the copyright masters have no idea what to do without their cash-carrying corporate paymasters telling them what to do.