New York Times Sues OpenAI and Microsoft Over Reading Publicly Available Information

The New York Times sued OpenAI and Microsoft for copyright infringement on Wednesday, opening a new front in the increasingly intense legal battle over the unauthorized use of published work to train artificial intelligence technologies.

The Times is the first major American media organization to sue the companies, the creators of ChatGPT and other popular A.I. platforms, over copyright issues associated with its written works. The lawsuit, filed in Federal District Court in Manhattan, contends that millions of articles published by The Times were used to train automated chatbots that now compete with the news outlet as a source of reliable information.

The suit does not include an exact monetary demand. But it says the defendants should be held responsible for “billions of dollars in statutory and actual damages” related to the “unlawful copying and use of The Times’s uniquely valuable works.” It also calls for the companies to destroy any chatbot models and training data that use copyrighted material from The Times.

In its complaint, The Times said it approached Microsoft and OpenAI in April to raise concerns about the use of its intellectual property and explore “an amicable resolution,” possibly involving a commercial agreement and “technological guardrails” around generative A.I. products. But it said the talks had not produced a resolution.

An OpenAI spokeswoman, Lindsey Held, said in a statement that the company had been “moving forward constructively” in conversations with The Times and that it was “surprised and disappointed” by the lawsuit.

“We respect the rights of content creators and owners and are committed to working with them to ensure they benefit from A.I. technology and new revenue models,” Ms. Held said. “We’re hopeful that we will find a mutually beneficial way to work together, as we are doing with many other publishers.”

[…]

Source: New York Times Sues OpenAI and Microsoft Over Use of Copyrighted Work – The New York Times

Well, if they didn’t want anyone to read it – which is really what an AI is doing, just as much as you or I do – then they should have put the content behind a paywall.

All Apples wide open for 4 years: Kaspersky security company and many others in Moscow had everything opened wide – photos, location, mic, etc – just by being sent an iMessage. Shows how dangerous closed source is.

[…]

after about 12 months of intensive investigation. Besides how the attackers learned of the hardware feature, the researchers still don’t know what, precisely, its purpose is. Also unknown is if the feature is a native part of the iPhone or enabled by a third-party hardware component such as ARM’s CoreSight.


The mass backdooring campaign, which according to Russian government officials also infected the iPhones of thousands of people working inside diplomatic missions and embassies in Russia, came to light in June. Over a span of at least four years, Kaspersky said, the infections were delivered in iMessage texts that installed malware through a complex exploit chain without requiring the receiver to take any action.

With that, the devices were infected with full-featured spyware that, among other things, transmitted microphone recordings, photos, geolocation, and other sensitive data to attacker-controlled servers. Although infections didn’t survive a reboot, the unknown attackers kept their campaign alive simply by sending devices a new malicious iMessage text shortly after devices were restarted.

A fresh infusion of details disclosed Wednesday said that “Triangulation”—the name Kaspersky gave to both the malware and the campaign that installed it—exploited four critical zero-day vulnerabilities, meaning serious programming flaws that were known to the attackers before they were known to Apple. The company has since patched all four of the vulnerabilities, which are tracked as CVE-2023-32434, CVE-2023-32435, CVE-2023-38606, and CVE-2023-41990.

Besides affecting iPhones, these critical zero-days and the secret hardware function resided in Macs, iPods, iPads, Apple TVs, and Apple Watches. What’s more, the exploits Kaspersky recovered were intentionally developed to work on those devices as well. Apple has patched those platforms, too. Apple declined to comment for this article.

[…]

“This is no ordinary vulnerability,” Larin said in a press release that coincided with a presentation he made at the 37th Chaos Communication Congress in Hamburg, Germany. “Due to the closed nature of the iOS ecosystem, the discovery process was both challenging and time-consuming, requiring a comprehensive understanding of both hardware and software architectures. What this discovery teaches us once again is that even advanced hardware-based protections can be rendered ineffective in the face of a sophisticated attacker, particularly when there are hardware features allowing to bypass these protections.”

In a research paper also published Wednesday, Larin added:

If we try to describe this feature and how attackers use it, it all comes down to this: attackers are able to write the desired data to the desired physical address with [the] bypass of [a] hardware-based memory protection by writing the data, destination address and hash of data to unknown, not used by the firmware, hardware registers of the chip.

Our guess is that this unknown hardware feature was most likely intended to be used for debugging or testing purposes by Apple engineers or the factory, or was included by mistake. Since this feature is not used by the firmware, we have no idea how attackers would know how to use it.

On the same day last June that Kaspersky first disclosed Operation Triangulation had infected the iPhones of its employees, officials with the Russian National Coordination Center for Computer Incidents said the attacks were part of a broader campaign by the US National Security Agency that infected several thousand iPhones belonging to people inside diplomatic missions and embassies in Russia, specifically from those representing NATO countries, post-Soviet nations, Israel, and China. A separate alert from the FSB, Russia’s Federal Security Service, alleged Apple cooperated with the NSA in the campaign. An Apple representative has denied the claim. Kaspersky researchers, meanwhile, have said they have no evidence corroborating the claim of involvement by either the NSA or Apple.

[…]

Kaspersky’s summary of the exploit chain is:

  • Attackers send a malicious iMessage attachment, which is processed by the application without showing any signs to the user
  • This attachment exploits vulnerability CVE-2023-41990 in the undocumented, Apple-only TrueType font instruction ADJUST for remote code execution. This instruction had existed since the early ’90s, and the patch removed it.
  • It uses return/jump oriented programming, multiple stages written in NSExpression/NSPredicate query language, patching JavaScriptCore library environment to execute a privilege escalation exploit written in JavaScript.
  • This JavaScript exploit is obfuscated to make it completely unreadable and to minimize its size. Still, it has around 11,000 lines of code, mainly dedicated to JavaScriptCore and kernel memory parsing and manipulation.
  • It exploits JavaScriptCore’s debugging feature DollarVM ($vm) to gain the ability to manipulate JavaScriptCore’s memory from the script and execute native API functions.
  • It was designed to support old and new iPhones and included a Pointer Authentication Code (PAC) bypass for exploitation of newer models.
  • It used an integer overflow vulnerability CVE-2023-32434 in the XNU’s memory mapping syscalls (mach_make_memory_entry and vm_map) to get read/write access to [the] whole physical memory of the device from the user level.
  • It uses hardware memory-mapped I/O (MMIO) registers to bypass Page Protection Layer (PPL). This was mitigated as CVE-2023-38606.
  • After exploiting all the vulnerabilities, the JavaScript exploit can do whatever it wants to the device and run spyware, but attackers chose to: a) launch the imagent process and inject a payload that cleans the exploitation artifacts from the device; b) run the Safari process in invisible mode and forward it to the web page with the next stage.
  • The web page has a script that verifies the victim and, if the checks pass, receives the next stage—the Safari exploit.
  • The Safari exploit uses vulnerability CVE-2023-32435 to execute shellcode.
  • The shellcode executes another kernel exploit in the form of a Mach object file. It uses the same vulnerabilities, CVE-2023-32434 and CVE-2023-38606, and is also massive in size and functionality, but it is completely different from the kernel exploit written in JavaScript. Only some parts related to exploitation of the above-mentioned vulnerabilities are the same. Still, most of its code is dedicated to parsing and manipulating kernel memory. It has various post-exploitation utilities, which are mostly unused.
  • The exploit gets root privileges and proceeds to execute other stages responsible for loading the spyware. We already covered these stages in our previous posts.

Wednesday’s presentation, titled What You Get When You Attack iPhones of Researchers, is a further reminder that even in the face of innovative defenses like the one protecting the iPhone kernel, ever more sophisticated attacks continue to find ways to defeat them.

[…]

Source: 4-year campaign backdoored iPhones using possibly the most advanced exploit ever | Ars Technica

It also shows that closed source software is an immense security threat – even with the threat exposed, it’s almost impossible to find out what happened and how to fix it, especially without the help of the manufacturer.

Linux is the only OS to support diagonal PC monitor mode — dev champions the case for 22-degree-rotation computing

Here’s a fun tidbit — Linux is the only OS to support a diagonal monitor mode, which you can customize to any tilt of your liking. Latching onto this possibility, a Linux developer who grew dissatisfied with the extreme choices offered by the cultural norms of landscape or portrait monitor usage is championing diagonal mode computing. Melbourne-based xssfox asserts that the “perfect rotation” for software development is 22° (h/t Daniel Feldman).

[…]

Xssfox devised a consistent method to appraise various screen rotations, working through the staid old landscape and portrait modes, before deploying xrandr to test rotations like the slightly skewed 1° and an indecisive 45°. These produced mixed results of questionable benefits, so the search for the Goldilocks solution continued.

It turns out that a 22° tilt to the left (expand tweet above to see) was the sweet spot for xssfox. This rotation delivered the best working screen space on what looks like a 32:9 aspect ratio monitor from Dell. “So this here, I think, is the best monitor orientation for software development,” the developer commented. “It provides the longest line lengths and no longer need to worry about that pesky 80-column limit.”
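Out of curiosity, here is a minimal sketch of how such an arbitrary tilt can be applied on X11: xrandr's --transform flag accepts a 3×3 transformation matrix, so a 22° rotation is just a matter of feeding it the right sine and cosine terms. The output name DP-1 is a placeholder (check xrandr --query for yours), the tilt direction depends on the sign convention, and you may also need to enlarge the framebuffer with --fb to avoid clipping; this is a sketch of the general approach, not the exact command xssfox used.

```python
# Minimal sketch: tilt an X11 output by an arbitrary angle via xrandr --transform,
# which takes a row-major 3x3 homogeneous matrix as nine comma-separated values.
import math
import subprocess

def rotate_output(output: str, degrees: float) -> None:
    a = math.radians(degrees)
    cos_a, sin_a = math.cos(a), math.sin(a)
    matrix = [cos_a, -sin_a, 0.0,   # rotation in the x/y plane
              sin_a,  cos_a, 0.0,
              0.0,    0.0,   1.0]
    transform = ",".join(f"{v:.6f}" for v in matrix)
    subprocess.run(["xrandr", "--output", output, "--transform", transform],
                   check=True)

if __name__ == "__main__":
    rotate_output("DP-1", 22)  # xssfox's preferred tilt; "DP-1" is a placeholder
```

Running xrandr --output DP-1 --transform none afterwards resets the output to normal.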

[…]

We note that Windows users with AMD and Nvidia drivers are currently shackled to applying screen rotations using 90° steps. MacOS users apparently face the same restrictions.

Source: Linux is the only OS to support diagonal PC monitor mode — dev champions the case for 22-degree-rotation computing | Tom’s Hardware

Verizon Once Again Busted Handing Out Sensitive Wireless Subscriber Information To Any Nitwit Who Asks For It – because no US enforcement of any kind

Half a decade ago we documented how the U.S. wireless industry was caught over-collecting sensitive user location and vast troves of behavioral data, then selling access to that data to pretty much anybody with a couple of nickels to rub together. It resulted in no end of abuse by everybody from stalkers to law enforcement — and even people pretending to be law enforcement.

While the FCC purportedly moved to fine wireless companies for this behavior, the agency still hasn’t followed through, despite the obvious ramifications of this kind of behavior during a post-Roe, authoritarian era.

Nearly a decade later, and it’s still a very obvious problem. The folks over at 404 Media have documented the case of a stalker who managed to game Verizon in order to obtain sensitive data about his target, including her address, location data, and call logs.

Her stalker posed as a police officer (badly) and, as usual, Verizon did virtually nothing to verify his identity:

“Glauner’s alleged scheme was not sophisticated in the slightest: he used a ProtonMail account, not a government email, to make the request, and used the name of a police officer that didn’t actually work for the police department he impersonated, according to court records. Despite those red flags, Verizon still provided the sensitive data to Glauner.”

In this case, the stalker found it relatively trivial to take advantage of Verizon Security Assistance and Court Order Compliance Team (or VSAT CCT), which verifies law enforcement requests for data. You’d think that after a decade of very ugly scandals on this front Verizon would have more meaningful safeguards in place, but you’d apparently be wrong.

Keep in mind: the FCC tried to impose some fairly basic privacy rules for broadband and wireless in 2016, but the telecom industry, in perfect lockstep with Republicans, killed those efforts before they could take effect, claiming they’d be too harmful for the super competitive and innovative (read: not competitive or innovative at all) U.S. broadband industry.

[…]

Source: Verizon Once Again Busted Handing Out Sensitive Wireless Subscriber Information To Any Nitwit Who Asks For It | Techdirt

UK Police to be able to run AI face recognition searches on all driving licence holders

The police will be able to run facial recognition searches on a database containing images of Britain’s 50 million driving licence holders under a law change being quietly introduced by the government.

Should the police wish to put a name to an image collected on CCTV, or shared on social media, the legislation would provide them with the powers to search driving licence records for a match.

The move, contained in a single clause in a new criminal justice bill, could put every driver in the country in a permanent police lineup, according to privacy campaigners.

[…]

The intention to allow the police or the National Crime Agency (NCA) to exploit the UK’s driving licence records is not explicitly referenced in the bill or in its explanatory notes, raising criticism from leading academics that the government is “sneaking it under the radar”.

Once the criminal justice bill is enacted, the home secretary, James Cleverly, must establish “driver information regulations” to enable the searches, but he will need only to consult police bodies, according to the bill.

Critics claim facial recognition technology poses a threat to the rights of individuals to privacy, freedom of expression, non-discrimination and freedom of assembly and association.

Police are increasingly using live facial recognition, which compares a live camera feed of faces against a database of known identities, at major public events such as protests.

Prof Peter Fussey, a former independent reviewer of the Met’s use of facial recognition, said there was insufficient oversight of the use of facial recognition systems, with ministers worryingly silent over studies that showed the technology was prone to falsely identifying black and Asian faces.

[…]

The EU had considered making images on its member states’ driving licence records available on the Prüm crime fighting database. The proposal was dropped earlier this year as it was said to represent a disproportionate breach of privacy.

[…]

Carole McCartney, a professor of law and criminal justice at the University of Leicester, said the lack of consultation over the change in law raised questions over the legitimacy of the new powers.

She said: “This is another slide down the ‘slippery slope’ of allowing police access to whatever data they so choose – with little or no safeguards. Where is the public debate? How is this legitimate if the public don’t accept the use of the DVLA and passport databases in this way?”

The government scrapped the role of the commissioner for the retention and use of biometric material and the office of surveillance camera commissioner this summer, leaving ministers without an independent watchdog to scrutinise such legislative changes.

[…]

In 2020, the court of appeal ruled that South Wales police’s use of facial recognition technology had breached privacy rights, data protection laws and equality laws, given the risk the technology could have a race or gender bias.

The force has continued to use the technology. Live facial recognition is to be deployed this year to match people attending Christmas markets against a watchlist.

Katy Watts, a lawyer at the civil rights advocacy group Liberty, said: “This is a shortcut to widespread surveillance by the state and we should all be worried by it.”

Source: Police to be able to run face recognition searches on 50m driving licence holders | Facial recognition | The Guardian

Tesla Systematically Lied To Customers, Blaming Them For Shoddy Parts The Company Knew Were Defective, has highest accident rate of any brand on the road

Back in July, Reuters released a bombshell report showing that not only has Tesla aggressively lied about its EV ranges for the better part of the last decade, it created teams whose entire purpose was to lie to customers about it when they called up to complain. The story lasted all of two days in the news cycle before it was supplanted by clickbait stories about a billionaire fist fight that never actually happened.

Now Reuters is back again, with another major story showcasing how for much of that same decade, Tesla routinely blamed customers for the failure of substandard parts the company knew to be defective. The outlet reviewed thousands of Tesla documents and found a pattern where customers would complain about dangerously broken and low-quality parts, only to be repeatedly gaslit by the company:

“Wheels falling off cars at speed. Suspensions collapsing on brand-new vehicles. Axles breaking under acceleration. Tens of thousands of customers told Tesla about a host of part failures on low-mileage cars. The automaker sought to blame drivers for vehicle ‘abuse,’ but Tesla documents show it had tracked the chronic ‘flaws’ and ‘failures’ for years.”

The records show a repeated pattern across tens of thousands of customers where parts would fail, then the customer would be accused of “abusing” their vehicle. They also show that Tesla meticulously tracked part failures, knew many parts were defective, and routinely not only lied to regulators about it, but charged customers to repair parts they knew had high failure rates and were systemically prone to failure:

“Yet the company has denied some of the suspension and steering problems in statements to U.S. regulators and the public – and, according to Tesla records, sought to shift some of the resulting repair costs to customers.”

This is obviously a very different narrative than the one Musk presented last month at that unhinged New York Times DealBook event:

“We make the best cars. Whether you hate me, like me or are indifferent, do you want the best car, or do you not want the best car?”

They are, as it turns out, not the best cars.

And this is before you even touch on the growing pile of corpses caused by the company’s half-cooked and repeatedly misrepresented “full self driving” technology, which last week resulted in the recall of nearly every vehicle that has it. That problem was, as reports have documented in detail, thanks in part to non-engineer Musk overruling his actual engineers and insisting on using cameras only.

This comes as a new study shows that Tesla vehicles have the highest accident rate of any brand on the road. As usual, U.S. regulators have generally been asleep or lethargic during most of this, worried that enforcing basic public safety standards would somehow be stifling “innovation.”

The deaths from “full self driving” have been going on for the better part of the last decade, yet the NHTSA only just apparently figured out where its pants were located. But a lot of the problems Reuters has revealed should be slam-dunk cases for the FTC under the “unfair and deceptive” component of the FTC Act, creating what will likely be a very busy 2024 for Elon Musk.

A lot of this stuff has been discussed by Tesla critics for years. It’s only once Musk began his descent into full racist caricature and undeniable self-immolation that press outlets with actual resources started to meaningfully dig beyond the hype. There’s cause for some significant introspection in U.S. journalism as to why that is, though that introspection probably will never happen.

Meanwhile, for a supposed innovation super-genius, most Musk companies have the kind of customer service that makes Comcast seem empathic and competent.

There’s no shortage of nightmare stories about Tesla Solar customer service. And we’ve well documented how Starlink can’t even respond to basic email inquiries by users tired of being on year-long waiting lists and seeking refunds. And once you burn past the novelty, gimmicks, and fanboy denialism, Tesla automotive clearly isn’t any better.

That said, this goes well beyond just bad customer service. The original Reuters story from July about the company lying about EV ranges clearly demonstrates not just bad customer service, but profound corporate culture rot:

“Inside the Nevada team’s office, some employees celebrated canceling service appointments by putting their phones on mute and striking a metal xylophone, triggering applause from coworkers who sometimes stood on desks. The team often closed hundreds of cases a week and staffers were tracked on their average number of diverted appointments per day.”

As with much of what Musk does, a large share of what the press initially sold the public as unbridled innovation was really just cutting corners. It’s easy to accomplish more than the next guy when you refuse to invest in customer service, don’t care about labor or environmental laws, don’t care about public safety, don’t care about the customer, and have zero compunction about lying to regulators or making things up at every conceivable opportunity.

Source: Tesla Lied To Customers, Blaming Them For Shoddy Parts The Company Knew Were Defective | Techdirt

Slovakian PM wants to kill EU anti-corruption policing

Prime Minister Robert Fico’s push to dissolve the body that now oversees high-profile corruption cases poses a risk to the EU’s financial interests and would harm the work of the European Public Prosecutor’s Office, Juraj Novocký, Slovakia’s representative to the EU body, told Euractiv Slovakia.

Fico’s government wants to pass a reform that would eliminate the Special Anti-Corruption Prosecutor’s Office, reduce penalties, including those for corruption, and curtail the rights of whistleblowers.

Novocký points out that the reform would also bring a radical shortening of limitation periods: “Through a thorough analysis, we have found that if the amendment is adopted as proposed, we will have to stop prosecution in at least twenty cases for this reason,” Novocký of the European Public Prosecutor’s Office (EPPO) told Euractiv Slovakia.

“This has a concrete effect on the EPPO’s activities and indirectly on the protection of the financial interests of the EU because, in such cases, there will be no compensation for the damage caused,” Novocký added.

On Monday, EU Chief Prosecutor Laura Kövesi addressed the government’s push for reform in a letter to the European Commission, concluding that it constitutes a serious risk of breaching the rule of law in the meaning of Article 4(2)(c) of the Conditionality Regulation.

[…]

Source: Fico’s corruption reforms may block investigations in 20 EU fraud cases – EURACTIV.com

AI cannot be patent ‘inventor’, UK Supreme Court rules in landmark case – but a company can

A U.S. computer scientist on Wednesday lost his bid to register patents over inventions created by his artificial intelligence system in a landmark case in Britain about whether AI can own patent rights.

Stephen Thaler wanted to be granted two patents in the UK for inventions he says were devised by his “creativity machine” called DABUS.

His attempt to register the patents was refused by the UK’s Intellectual Property Office (IPO) on the grounds that the inventor must be a human or a company, rather than a machine.

Thaler appealed to the UK’s Supreme Court, which on Wednesday unanimously rejected his appeal as under UK patent law “an inventor must be a natural person”.

Judge David Kitchin said in the court’s written ruling that the case was “not concerned with the broader question whether technical advances generated by machines acting autonomously and powered by AI should be patentable”.

Thaler’s lawyers said in a statement that the ruling “establishes that UK patent law is currently wholly unsuitable for protecting inventions generated autonomously by AI machines and as a consequence wholly inadequate in supporting any industry that relies on AI in the development of new technologies”.

‘LEGITIMATE QUESTIONS’

A spokesperson for the IPO welcomed the decision “and the clarification it gives as to the law as it stands in relation to the patenting of creations of artificial intelligence machines”.

They added that there are “legitimate questions as to how the patent system and indeed intellectual property more broadly should handle such creations” and the government will keep this area of law under review.

[…]

“The judgment does not preclude a person using an AI to devise an invention – in such a scenario, it would be possible to apply for a patent provided that person is identified as the inventor.”

In a separate case last month, London’s High Court ruled that artificial neural networks can attract patent protection under UK law.

Source: AI cannot be patent ‘inventor’, UK Supreme Court rules in landmark case | Reuters

Somehow it sits strangely that a company can be a ‘natural person’ but an AI cannot.

Apple Pay, Apple Card and Wallet were down for some users this morning – again

Apple’s financial services, including Apple Pay, Apple Cash, Apple Card and Wallet, experienced service disruptions for some users between 6:15 AM and 6:49 AM Eastern this morning, according to the company’s System Status page. As AppleInsider notes, it’s unclear how widespread the issues were, but the company also experienced intermittent Apple Pay issues earlier this year.

[…]

Source: Apple Pay, Apple Card and Wallet were down for some users this morning

Microsoft is killing Windows mixed reality platform

Windows Mixed Reality is heading to a farm upstate. Microsoft is shutting down the platform, according to an official list of deprecated Windows features. This includes the garden variety Windows Mixed Reality software, along with the Mixed Reality Portal app and the affiliated Steam VR app. The platform isn’t gone yet, but Microsoft says it’ll be “removed in a future release of Windows.”

Microsoft first unveiled Windows Mixed Reality back in 2017 as its attempt to compete with rivals in the VR space, like HTC and Oculus (which is now owned by Meta). We were fascinated by the tech when it first launched, as it offered the ability for in-person shared mixed reality.

[…]

Microsoft’s platform was ultimately adopted by several VR headsets, like the HP Reverb G2 and others manufactured by companies like Acer, Asus and Samsung. The Windows Mixed Reality Portal app allowed access to games, experiences and plenty of work-related productivity apps. However, it looks like the adoption rate wasn’t up to snuff, as indicated by today’s news.

Despite the imminent end to the platform, it doesn’t look to be impacting Microsoft’s other mixed-reality ecosystem, the HoloLens 2. Microsoft added a Windows 11 upgrade and other improvements for the business-focused headset earlier this year, according to The Verge.

[…]

Microsoft has made sweeping cuts throughout its VR division, leading to layoffs and the discontinuation of the AltspaceVR app. The company is, however, still developing its proprietary Mesh app that lets co-workers meet in a virtual space without a headset.

Source: Microsoft is nixing its Windows mixed reality platform

Clarified at last: The physics of popping champagne

It sounds like a simple, well-known everyday phenomenon: there is high pressure in a champagne bottle, the stopper is driven outwards by the compressed gas in the bottle and flies away with a powerful pop. But the physics behind this is complicated.

[…]

Using complex computer simulations, it was possible to recalculate the behavior of the stopper and the gas flow.

In the process, astonishing phenomena were discovered: a supersonic shock wave is formed and the gas flow can reach more than one and a half times the speed of sound. The results appear on the pre-print server arXiv.

[…]

“The champagne cork itself flies away at a comparatively low speed, reaching perhaps 20 meters per second,”

[…]

“However, the gas that flows out of the bottle is much faster,” says Wagner. “It overtakes the cork, flows past it and reaches speeds of up to 400 meters per second.”

That is faster than the speed of sound. The gas jet therefore breaks the sound barrier shortly after the bottle is opened—and this is accompanied by a shock wave.
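As a rough order-of-magnitude check (our numbers, not the paper’s): the speed of sound in CO2 at around 20°C follows from the ideal-gas relation, and 400 m/s then indeed works out to roughly Mach 1.5, matching the “more than one and a half times” figure above.

```latex
a_{\mathrm{CO_2}} = \sqrt{\frac{\gamma R T}{M}}
\approx \sqrt{\frac{1.29 \times 8.314\ \mathrm{J\,mol^{-1}K^{-1}} \times 293\ \mathrm{K}}{0.044\ \mathrm{kg\,mol^{-1}}}}
\approx 267\ \mathrm{m/s},
\qquad
\mathrm{Ma} \approx \frac{400\ \mathrm{m/s}}{267\ \mathrm{m/s}} \approx 1.5
```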

[…]

“Then there are jumps in these variables, so-called discontinuities,” says Bernhard Scheichl (TU Vienna & AC2T), Lukas Wagner’s dissertation supervisor. “Then the pressure or velocity in front of the shock wave have a completely different value than just behind it.”

This point in the gas jet, where the pressure changes abruptly, is also known as the “Mach disk.” “Very similar phenomena are also known from jet engines or rockets, where the exhaust jet exits the engines at high speed,”

[…]

The Mach disk first forms between the bottle and the cork and then moves back towards the bottle opening.

Temporarily colder than the North Pole

Not only the gas pressure, but also the temperature changes abruptly: “When gas expands, it becomes cooler, as we know from spray cans,” explains Lukas Wagner. This effect is very pronounced in the champagne bottle: the gas can cool down to -130°C at certain points. It can even happen that tiny dry ice crystals are formed from the CO2 that makes the sparkling wine bubble.
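A back-of-the-envelope estimate (our assumed round numbers: roughly 6 bar in the bottle, wine at 20°C, γ ≈ 1.3 for CO2, treated as a simple adiabatic expansion) shows why the escaping gas gets so cold:

```latex
\frac{T_2}{T_1} = \left(\frac{p_2}{p_1}\right)^{\frac{\gamma-1}{\gamma}}
\quad\Rightarrow\quad
T_2 \approx 293\ \mathrm{K} \times \left(\tfrac{1\ \mathrm{bar}}{6\ \mathrm{bar}}\right)^{0.3/1.3}
\approx 194\ \mathrm{K} \approx -79\,^{\circ}\mathrm{C}
```

That already sits around the sublimation temperature of CO2 at ambient pressure (about -78.5°C), which fits the dry-ice crystals mentioned above; the full simulation, which resolves the further expansion in the supersonic jet, finds even lower temperatures at certain points.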

“This effect depends on the original temperature of the sparkling wine,” says Lukas Wagner. “Different temperatures lead to dry ice crystals of different sizes, which then scatter light in different ways. This results in variously colored smoke. In principle, you can measure the temperature of the sparkling wine by just looking at the color of the smoke.”

[…]

The audible pop when the bottle is opened is a combination of different effects: Firstly, the cork expands abruptly as soon as it has left the bottle, creating a pressure wave, and secondly, you can hear the shock wave, generated by the supersonic gas jet—very similar to the well-known aeroacoustic phenomenon of the sonic boom.

[…]

More information: Lukas Wagner et al, Simulating the opening of a champagne bottle, arXiv (2023). DOI: 10.48550/arxiv.2312.12271

Source: Clarified at last: The physics of popping champagne

Nuclear fusion net gain experiment replicated three times.

Last year on a December morning, scientists at the National Ignition Facility at the Lawrence Livermore National Laboratory in California (LLNL) managed, in a world first, to produce a nuclear fusion reaction that released more energy than it used, in a process called “ignition.”

Now they say they have successfully replicated ignition at least three times this year, according to a December report from the LLNL. This marks another significant step in what could one day be an important solution to the global climate crisis, driven primarily by the burning of fossil fuels.

NIF's target chamber is where the magic happens – temperatures of 100 million degrees and pressures extreme enough to compress the target to densities up to 100 times the density of lead are created there.

Source: Nuclear fusion: With 200 lasers and a peppercorn-sized fuel capsule, scientists inch closer to mastering this energy | CNN

AI Act: French govt accused of being influenced by lobbyist with conflict of interests by senators in the pockets of copyright giants. Which surprises no-one watching the AI act process.

French senators criticised the government’s stance in the AI Act negotiations, particularly a lack of copyright protection and the influence of a lobbyist with alleged conflicts of interests, former digital state secretary Cédric O.

The EU AI Act is set to become the world’s first regulation of artificial intelligence. Since the emergence of AI models, such as GPT-4, used by the AI system ChatGPT, EU policymakers have been working on regulating these powerful “foundation” models.

“We know that Cédric O and Mistral influenced the French government’s position regarding the AI regulation bill of the European Commission, attempting to weaken it”, said Catherine Morin-Desailly, a centrist senator, at the Senate during the government’s question time on Wednesday (20 December).

“The press reported on the spectacular enrichment of the former digital minister, Cédric O. He entered the company Mistral, where the interests of American companies and investment funds are prominently represented. This financial operation is causing shock within the Intergovernmental Committee on AI you have established, Madam Prime Minister,” she continued.

The accusations were vehemently denied by the incumbent Digital Minister Jean-Noël Barrot: “It is the High Authority for Transparency in Public Life that ensures the absence of conflicts of interest among former government members.”

Moreover, Barrot denied the allegations that France has been the spokesperson of private interests, arguing that the government “listened to all stakeholders as it is customary and relied solely on the general interest as our guiding principle.”

[…]

Barrot was criticised in a Senate hearing earlier the same day by Pascal Rogard, director of the Society of Dramatic Authors and Composers, who said that “for the first time, France, through the medium of Jean-Noël Barrot […] has neither supported culture, the creation industry, or copyrights.”

Morin-Desailly then said that she questioned the French stance on AI, which, in her view, is aligned with the position of US big tech companies.

Drawing a parallel between the position of big tech in this AI copyright debate and the Directive on Copyright in the Digital Single Market, Rogard said that since the directive came into force he had not “observed any damage to the [big tech]’s business activities.”

[…]

“Trouble was stirred by the renowned Cédric O, who sits on the AI Intergovernmental Committee and still wields a lot of influence, notably with the President of the Republic”, stated Morin-Desailly earlier the same day at the Senate hearing with Rogard. Other sitting Senators joined Morin-Desailly in criticising the French position, and O.

Given O’s influential position in the government, the High Authority for Transparency in Public Life decided to forbid O, for a three-year period, from lobbying the government or owning shares in companies in the tech sector.

Yet, according to Capital, O bought shares in Mistral AI through his consulting agency. Capital revealed O invested €176.1, which is now valued at €23 million thanks to the company’s last investment round in December.

Moreover, since September, O has sat on the Committee on generative artificial intelligence to advise the government on its position towards AI.

[…]


Source: AI Act: French government accused of being influenced by lobbyist with conflict of interests

The UK Government Should Not Let Copyright Stifle AI Innovation

As Walled Culture has often noted, the process of framing new copyright laws is tilted against the public in multiple ways. And on the rare occasions when a government makes some mild concession to anyone outside the copyright industry, the latter invariably rolls out its highly-effective lobbying machine to fight against such measures. It’s happening again in the world of AI. A post on the Knowledge Rights 21 site points to:

a U-turn by the British Government in February 2023, abandoning its prior commitment to introduce a broad copyright exception for text and data mining that would not have made an artificial distinction between non-commercial and commercial uses. Given that applied research so often bridges these two, treating them differently risks simply chilling innovative knowledge transfer and public institutions working with the private sector.

Unfortunately, and in the face of significant lobbying from the creative industries (something we see also in Washington, Tokyo and Brussels), the UK government moved away from clarifying language to support the development of AI in the UK.

In an attempt to undo some of the damage caused by the UK government’s retrograde move, a broad range of organizations, including Knowledge Rights 21, Creative Commons, and Wikimedia UK, have issued a public statement calling on the UK government to safeguard AI innovation as it draws up its new code of practice on copyright and AI. The statement points out that copyright is a serious threat to the development of AI in the UK, and that:

Whilst questions have arisen in the past which consider copyright implications in relation to new technologies, this is the first time that such debate risks entirely halting the development of a new technology.

The statement’s key point is as follows:

AI relies on analysing large amounts of data. Large-scale machine learning, in particular, must be trained on vast amounts of data in order to function correctly, safely and without bias. Safety is critical, as highlighted in the [recently agreed] Bletchley Declaration. In order to achieve the necessary scale, AI developers need to be able to use the data they have lawful access to, such as data that is made freely available to view on the open web or to which they already have access by agreement.

Any restriction on the use of such data or disproportionate legal requirements will negatively impact on the development of AI, not only inhibiting the development of large-scale AI in the UK but exacerbating further pre-existing issues caused by unequal access to data.

The organizations behind the statement note that restrictions imposed by copyright would create barriers to entry and raise costs for new entrants. There would also be serious knock-on effects:

Text and data mining techniques are necessary to analyse large volumes of content, often using AI, to detect patterns and generate insights, without needing to manually read everything. Such analysis is regularly needed across all areas of our society and economy, from healthcare to marketing, climate research to finance.

The statement concludes by making a number of recommendations to the UK government in order to ensure that copyright does not stifle the development of AI in the UK. The key ones concern access to the data sets that are vital for training AI and carrying out text and data mining. The organizations ask that the UK’s Code of Practice:

Clarifies that access to broad and varied data sets that are publicly available online remain available for analysis, including text and data mining, without the need for licensing.

Recognises that even without an explicit commercial text and data mining exception, exceptions and limits on copyright law exist that would permit text and data mining for commercial purposes.

Those are pretty minimal demands, but we can be sure that the copyright industry will fight them tooth and nail. For the companies involved, keeping everything involving copyright under their tight control is far more important than nurturing an exciting new technology with potentially huge benefits for everyone.

Source: The UK Government Should Not Let Copyright Stifle AI Innovation | Techdirt

Volkswagen brings back physical buttons for all new cars

Future Volkswagen interiors will all draw inspiration from the ID 2all concept car and bring back physical buttons and controls.

The touchscreen-heavy approach taken for the Mk8 Golf and ID 3 has proven unpopular with customers, prompting a complete about-turn by the company in the way it approaches design.

VW interior designer Darius Watola said the ID2all concept “showed a new approach for all models” and was in response to “recent feedback from customers”.

The new interior has a row of physical (and backlit) buttons for the climate and a rotary controller on the centre tunnel to control the screen on the dashboard above, much like with BMW’s iDrive.

As well as a main central touchscreen for infotainment, there’s also a screen for driving information. Watola said such a display in the driver’s eyeline is crucial for safety.

He said that “customers had a different view in Europe” than in other global markets and wanted “more physical buttons”.

There’s also a revolution in terms of material use, as VW is looking to phase out hard plastics, glue, leather and chrome.

Almost every surface in the ID 2all is soft to the touch, mixing fabrics and Alcantara as part of a sustainability push. There’s limited use of some woods and metals, too.

Watola expressed a desire to see as many features and materials as possible carried over from the concept to the production car in 2025 (which now seems unlikely to take the ID 2 name into showrooms).

However, the goal remains a sub-€25,000 (£22,000) price, which might limit some of the more premium-feeling materials in the cabin.

The concept’s screens can be selected in different themes, including retro graphics from the original Golf, and this feature is expected to make production.

Source: Volkswagen brings back physical buttons for all new cars | Autocar

Very glad that people are starting to realise that touchscreens are not only unsafe but also cumbersome, slow and annoying.

Internet Archive: Digital Lending is Fair Use, Not Copyright Infringement – a library is a library, whether it’s paper or digital

In 2020, publishers Hachette, HarperCollins, John Wiley and Penguin Random House sued the Internet Archive (IA) for copyright infringement, equating its ‘Open Library’ to a pirate site.

IA’s library is a non-profit operation that scans physical books, which can then be lent out to patrons in an ebook format. Patrons can also borrow books that are scanned and digitized in-house, with technical restrictions that prevent copying.

Staying true to the centuries-old library concept, only one patron at a time can rent a digital copy of a physical book for a limited period.

Mass Copyright Infringement or Fair Use?

Not all rightsholders are happy with IA’s scanning and lending activities. The publishers are not against libraries per se, nor do they object to ebook lending, but ‘authorized’ libraries typically obtain an official license or negotiate specific terms. The Internet Archive has no license.

The publishers see IA’s library as a rogue operation that engages in willful mass copyright infringement, directly damaging their bottom line. As such, they want it taken down permanently.

The Internet Archive wholeheartedly disagreed with the copyright infringement allegations; it offers a vital service to the public, the Archive said, as it built its legal defense on protected fair use.

After weighing the arguments from both sides, New York District Court Judge John Koeltl sided with the publishers. In March, the court granted their motion for summary judgment, which effectively means that the library is indeed liable for copyright infringement.

The judgment and associated permanent injunction effectively barred the library from reproducing or distributing digital copies of the ‘covered books’ without permission from rightsholders. These restrictions were subject to an eventual appeal, which was announced shortly thereafter.

Internet Archive Files Appeal Brief

Late last week, IA filed its opening brief at the Second Circuit Court of Appeals, asking it to reverse the lower court’s judgment. The library argues that the court erred by rejecting its fair use defense.

Whether IA has a fair use defense depends on how the four relevant factors are weighed. According to the lower court, these favor the publishers but the library vehemently disagrees. On the contrary, it believes that its service promotes the creation and sharing of knowledge, which is a core purpose of copyright.

“This Court should reverse and hold that IA’s controlled digital lending is fair use. This practice, like traditional library lending, furthers copyright’s goal of promoting public availability of knowledge without harming authors or publishers,” the brief reads.

A fair use analysis has to weigh the interests of both sides. The lower court did so, but IA argues that it reached the wrong conclusions, failing to properly account for the “tremendous public benefits” controlled digital lending offers.

No Competition

One of the key fair use factors at stake is whether IA’s lending program affects (i.e., threatens) the traditional ebook lending market. IA uses expert witnesses to argue that there’s no financial harm and further argues that its service is substantially different from the ebook licensing market.

IA offers access to digital copies of books, which is similar to licensed libraries. However, the non-profit organization argues that its lending program is not a substitute as it offers a fundamentally different service.

“For example, libraries cannot use ebook licenses to build permanent collections. But they can use licensing to easily change the selection of ebooks they offer to adapt to changing interests,” IA writes.

The licensing models make these libraries more flexible. However, they have to rely on the books offered by commercial aggregators and can’t add these digital copies to their archives.

“Controlled digital lending, by contrast, allows libraries to lend only books from their own permanent collections. They can preserve and lend older editions, maintaining an accurate historical record of books as they were printed.

“They can also provide access that does not depend on what Publishers choose to make available. But libraries must own a copy of each book they lend, so they cannot easily swap one book for another when interest or trends change,” IA adds.

Stakes are High

The arguments highlighted here are just a fraction of the 74-page opening brief, which goes into much more detail and ultimately concludes that the district court’s judgment should be reversed.

In a recent blog post, IA founder Brewster Kahle writes that if the lower court’s verdict stands, books can’t be preserved for future generations in digital form, in the same way that paper versions have been archived for centuries.

“This lawsuit is about more than the Internet Archive; it is about the role of all libraries in our digital age. This lawsuit is an attack on a well-established practice used by hundreds of libraries to provide public access to their collections.

“The disastrous lower court decision in this case holds implications far beyond our organization, shaping the future of all libraries in the United States and unfortunately, around the world,” Kahle concludes.

A copy of the Internet Archive’s opening brief, filed at the Second Circuit Court of Appeals, is available here (pdf)

Source: Internet Archive: Digital Lending is Fair Use, Not Copyright Infringement * TorrentFreak

Google to pay $700 million and make tiny app store changes to settle with 50 states

On December 11th, a jury decided that Google has an illegal monopoly with its Google Play app store, handing Epic Games a win. But Epic wasn’t the only one fighting an antitrust case. All 50 state attorneys general settled a similar lawsuit in September, and we’ve just now learned what Google agreed to give up as a result: $700 million and a handful of minor concessions in the way that Google runs its store in the United States.

The biggest change: Google will need to let developers steer consumers away from the Google Play Store for several years, if this settlement is approved.

You can read the full 68-page settlement for yourself at the bottom of this story, but here’s the TL;DR about what it includes:

  • $700,000,000 from Google in total (roughly 21 days of Google’s operating profit from the app store alone)
  • $629,000,000 of which will go to consumers who may have overpaid for apps or in-app purchases via Google Play after taxes, lawyers’ fees, and so on
  • $70,000,000 of which will go to states to be used as the state AGs see fit
  • $1,000,000 of which is for settlement administration
  • For 7 years, Google will “continue to technically enable Android to allow the installation of third-party apps on Mobile Devices through means other than Google Play”
  • For 5 years, Google will let developers offer an alternative in-app billing system next to Google Play (aka “User Choice Billing”)
  • For 5 years, Google won’t make developers offer their best prices to customers who pick Google Play and Google Play Billing
  • For 4 years, Google won’t make developers ship titles on Google Play at the same time as other stores and with feature parity
  • For 5 years, Google won’t make companies exclusively put Google Play on a phone or its homescreen
  • For 4 years, Google won’t stop OEMs from granting installer rights to preloaded apps
  • For 5 years, Google won’t require its “consent” before an OEM preloads a third-party app store
  • For 4 years, Google will let third-party app stores update apps without requiring user approval
  • For 4 years, Google will let sideloaded app stores use its APIs and “feature splits” to help install apps
  • For 5 years, Google will turn its two sideloading “scare screens” into a single user prompt which will read the equivalent of this agreed-upon language: “Your phone currently isn’t configured to install apps from this source. Granting this source permission to install apps could place your phone and data at risk.”
  • For 5 years, Google will let User Choice Billing participating developers let their users know about better pricing elsewhere and “complete transactions using the developer’s existing web-based billing solution in an embedded webview within its app.”
  • For 6 years, Google will “continue to allow developers to use contact information obtained outside the app or in-app (with User consent) to communicate with Users out-of-app”
  • For 6 years, Google will let consumption only apps (e.g. Netflix, which doesn’t let you pay on device) tell users about better prices elsewhere, without linking to an outside website — example: “Available on our website for $9.99”
  • For 6 years, Google “shall not prohibit developers from disclosing to Users any service or other fees associated with the Google Play or Google Play’s billing system.”

Does that sound like a lot? If you add it all up, it does make for a slightly different Google app store landscape than we’ve experienced over the past decade and change. But not only does every one of these concessions have an expiration date, many of them are arguably not real concessions.

Google argued during the Epic v. Google trial that users were already perfectly able to install third-party apps on their devices through any number of means, and it claimed many of its agreements with developers, OEMs, and carriers did not require them to, for instance, exclusively put Google Play on a phone or its homescreen.

More importantly, several of the most significant sounding changes here are tied to Google’s User Choice Billing program — which is mostly a fake choice, the Epic v. Google trial proved.

We confirmed with Google spokesperson Dan Jackson this evening that User Choice Billing participants are given a discounted rate of just 4 percent off of Google’s fee when users choose their own payment system, and that it won’t change as a result of the settlement. Not only did Google internally find that developers would lose money when users choose the 4 percent rate, but Google also gives companies like Spotify a free ride while apparently charging everyone else.
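A back-of-the-envelope illustration of why that 4 percent discount is so underwhelming. The numbers below are ours, not Google’s or the settlement’s: we assume the 15 percent standard Play service-fee tier and a typical external card-processing rate of 2.9 percent plus 30 cents, both of which vary by developer and transaction.

```python
# Rough User Choice Billing economics on a $10 in-app purchase.
# Assumptions (not from the settlement): 15% standard Play service fee,
# external card processing at 2.9% + $0.30 per transaction.
price = 10.00

default_net = price - price * 0.15                  # Google Play Billing handles payment
ucb_google_fee = price * (0.15 - 0.04)              # Google's fee minus the 4-point discount
ucb_processing = price * 0.029 + 0.30               # developer's own payment processor
ucb_net = price - ucb_google_fee - ucb_processing   # User Choice Billing

print(f"Default Google Play Billing net: ${default_net:.2f}")  # $8.50
print(f"User Choice Billing net:         ${ucb_net:.2f}")      # $8.31
```

On these assumptions the developer ends up slightly worse off for doing the billing work themselves, which is consistent with Google’s internal finding mentioned above.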

Perhaps most importantly, Google is reserving the right not to let developers like Netflix link to their own websites to give their users a discounted rate. “Google is not required to allow developers to include links that take a User outside an app distributed through Google Play to make a purchase,” the settlement agreement reads. We are still waiting to find out whether Apple will allow links and/or buttons to alternative payment systems, based on the ruling in Epic v. Apple. But the Google / state AGs settlement suggests that regardless, Google will not be required to allow links.

[…]

Source: Google to pay $700 million and make tiny app store changes to settle with 50 states – The Verge

It’s still baffling that Google lost this case and Apple won it on almost exactly the same grounds, where in Google’s case you can actually sideload apps “legally” (if in an obtuse manner which makes you think you are doing something wrong) and in Apple’s you can’t.

Lamborghini Tests Active Camber and Toe Control for Better Handling

It’s not often that we get to experience a new and completely novel piece of automotive technology for the first time. But that’s what Lamborghini seems to have created with its Active Wheel Carrier, which we have now sampled in prototype form. The system itself is both clever and complex, but the basic purpose is simple: to give real-time control of camber and toe alignment settings while a car is moving.

According to Rouven Mohr, Lamborghini’s chief technical officer, this is one of the final frontiers of vehicle dynamics. Suspension geometry is usually based around a set of compromises, with the loads created by a car in motion inevitably negatively affecting at least some of these. And the alignment settings that are right for the track will cause premature tire wear on the street, which is why many high-performance cars have track-alignment settings and necessitate switching back and forth. Gaining active control in two different planes—toe being the angle of the rotating wheel relative to the direction of travel, and camber its side-on angle relative to the ground—means that many of these compromises can be eliminated. The results, based on our drive in a Lamborghini Huracán development mule at Porsche’s Nardò test track in Italy, are deeply impressive.

The idea itself is not new, and Mohr admits that work on it was being done at fellow VW sibling Audi when he previously worked there. But as well as the hardware required to move the wheel in two planes, the challenge is creating a control system capable of doing so quickly and accurately enough to allow the benefits to be exploited. This is an area in which Lamborghini is leading the way.

The system works exclusively on each of the Huracán prototype’s rear wheels. Active toe control is, in essence, a rear-steering system. We’ve had those before, of course—but this one can also move the wheels between toe-in, where the leading edges point very slightly toward each other, and toe-out, where they do the opposite. In very general terms, toe-out makes a car more reactive and keener to turn, while toe-in gives better high-speed stability.

Active camber control is more revolutionary. Under cornering loads, a car leans over and the suspension compresses, which alters the relationship between the tire tread and the road surface. On something as low and firmly suspended as a Lamborghini supercar, the effect is much slighter than it would be on a 1970s sedan, but it is still significant, as it creates uneven pressure distribution on the tire’s contact patch, which reduces grip. Many performance cars are set up with negative camber (the tire leaned in on its inside edge) to compensate for this, but doing so reduces straight-line traction and increases tire wear.

[…]

two rotating flanges within are what alter the relative angle between the two sides, one controlling camber and the other toe. These are gear-driven by 48-volt electric motors.

[…]

The Active Wheel Carrier can deliver up to 6.6 degrees of toe adjustment in either direction and between 2.5 degrees of positive and 5.5 degrees of negative camber. Both planes can be adjusted at the same time, and the electric motors can do this at up to 60 degrees a second. So even the most extreme change possible—from full toe-in to full toe-out—could be accomplished in under a quarter of a second, although most changes will be much smaller adjustments.
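A quick sanity check of those figures, using only the numbers quoted above:

```python
# Sanity check of the quoted Active Wheel Carrier actuation figures.
full_toe_sweep_deg = 6.6 * 2      # full toe-in to full toe-out
max_rate_deg_per_s = 60.0         # maximum adjustment speed
print(full_toe_sweep_deg / max_rate_deg_per_s)  # 0.22 s, i.e. under a quarter of a second
```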

[…]

Starting with the system switched off, and the Evo’s rear suspension in its default position, reveals both understeer on cold tires when driven aggressively and a rapid transition to oversteer when rear grip is exceeded. With the Active Wheel Carrier switched on, the Huracán immediately feels grippier and more reactive, keener to change direction—much of which is due to the rear-steering effect of toe adjustment—but also much more stable when being pushed to the edge of adhesion.

[…]

On the handling track, our fastest lap with AWC on was 4.8 seconds faster than with the system off, and while that effect is reduced for more experienced drivers on more familiar tracks, it’s still significant. Even a Lambo pro driver is reportedly 2.8 seconds quicker at Nardò with AWC. That’s on par with the gain from switching from sport tires to street-legal semi-slicks.

The technology would also enable other changes: wider front tires relative to the rears, slightly softer springs to allow more roll (active camber being able to adjust to this), and the intriguing possibility of running different tire compounds front and rear to get maximum benefit from the improved grip. Motors powering the units would also likely be upgraded to work on 400 volts, supplied directly from the plug-in-hybrid battery pack.

While AWC is officially only an experiment at this stage, it seems overwhelmingly likely to play a part in Lamborghini’s future—most likely the Huracán replacement that will debut next year.

Source: Lamborghini Tests Active Camber and Toe Control for Better Handling

Magic: The Gathering Bans the Use of Generative AI in ‘Final’ Products – Wizards of the Coast cancelled themselves

[…] a D&D artist confirmed they had used generative AI programs to finish several pieces of art included in the sourcebook Glory of the Giants—saw Wizards of the Coast publicly ban the use of AI tools in the process of creating art for the venerable TTRPG. Now, the publisher is making that clearer for its other wildly successful game, Magic: The Gathering.

Update 12/19 11.20PM ET: This post has been updated to include clarification from Wizards of the Coast regarding the extent of guidelines for creatives working with Magic and D&D and the use of Generative A.I.

“For 30 years, Magic: The Gathering has been built on the innovation, ingenuity, and hard work of talented people who sculpt a beautiful, creative game. That isn’t changing,” a new statement shared by Wizards of the Coast on Daily MTG begins. “Our internal guidelines remain the same with regard to artificial intelligence tools: We require artists, writers, and creatives contributing to the Magic TCG to refrain from using AI generative tools to create final Magic products. We work with some of the most talented artists and creatives in the world, and we believe those people are what makes Magic great.”

[…]

The Magic statement also comes in the wake of major layoffs at Wizards’ parent company Hasbro. Last week the Wall Street Journal reported that Hasbro plans to lay off 1,100 staff over the next six months across its divisions in a series of cost-cutting measures, with many creatives across Wizards’ D&D and Magic teams confirming they were part of the layoffs. Just this week, the company faced backlash for opening a position for a Digital Artist at Wizards of the Coast in the wake of the job cuts, which totaled roughly a fifth of Hasbro’s current workforce across all of its divisions.

The job description specifically highlights that the role includes having to “refine and modify illustrative artwork for print and digital media through retouching, color correction, adjusting ink density, re-sizing, cropping, generating clipping paths, and hand-brushing spot plate masks,” as well as “use… digital retouching wizardry to extend cropped characters and adjust visual elements due to legal and art direction requirements,” which critics suggested carried the implication that the role would involve iterating on and polishing art created through generative AI. Whether or not this will be the case considering Wizards’ now-publicized stance remains to be seen.

Source: Magic: The Gathering Formally Bans the Use of Generative AI in ‘Final’ Products

The Gawker company is very anti-AI and keeps mentioning backlash. It’s quite funny that if you look at the supposed “backlash”, the complaints are mostly about the lack of quality control around said art – insofar as people thought the points raised were valid at all (source: the Twitter page with the original disclosure). It’s a kind of cancel-culture cave-in, where a minority gets to play the role of judge, jury and executioner and the person being cancelled actually… listens to the canceller, with no actual evidence of their crime being presented or weighed independently.

Internet Archive Files Opening Brief In Its Appeal Of Book Publishers’ wanton destruction of it

A few weeks ago, publishing giant Penguin Random House (and, yes, I’m still confused why they didn’t call it Random Penguin House after the merger) announced that it was filing a lawsuit (along with many others) against the state of Iowa for its attempt to ban books in school libraries. In its announcement, Penguin Random House talked up the horrors of trying to limit access to books in schools and libraries:

The First Amendment guarantees the right to read and to be read, and for ideas and viewpoints to be exchanged without unreasonable government interference. By limiting students’ access to books, Iowa violates this core principle of the Constitution.

“Our mission of connecting authors and their stories to readers around the world contributes to the free flow of ideas and perspectives that is a hallmark of American Democracy—and we will always stand by it,” says Nihar Malaviya, CEO, Penguin Random House. “We know that not every book we publish will be for every reader, but we must protect the right for all Americans, including students, parents, caregivers, teachers, and librarians to have equitable access to books, and to continue to decide what they read.” 

That’s a very nice sentiment, and I’m glad that Penguin Random House is stating it, but it rings a little hollow, given that Penguin Random House is among the big publishers suing to shut down the Internet Archive, a huge and incredibly useful digital library that actually has the mission that Penguin Random House’s Nihar Malaviya claims is theirs: connecting authors and their stories to readers around the world, contributing to the free flow of ideas and perspectives that are important to the world, and believing in the importance of equitable access to books.

So, then, why is Penguin Random House trying to kill the Internet Archive?

While we knew this was coming, last week, the Internet Archive filed its opening brief before the 2nd Circuit appeals court to try to overturn the tragically terrible district court ruling by Judge John Koeltl. The filing is worth reading:

Publishers claim this public service is actually copyright infringement. They ask this Court to elevate form over substance by drawing an artificial line between physical lending and controlled digital lending. But the two are substantively the same, and both serve copyright’s purposes. Traditionally, libraries own print books and can lend each copy to one person at a time, enabling many people to read the same book in succession. Through interlibrary loans, libraries also share books with other libraries’ patrons. Everyone agrees these practices are not copyright infringement.

Controlled digital lending applies the same principles, while creating new means to support education, research, and cultural participation. Under this approach, a library that owns a print book can scan it and lend the digital copy instead of the physical one. Crucially, a library can loan at any one time only the number of print copies it owns, using technological safeguards to prevent copying, restrict access, and limit the length of loan periods.

Lending within these limits aligns digital lending with traditional library lending and fundamentally distinguishes it from simply scanning books and uploading them for anyone to read or redistribute at will. Controlled digital lending serves libraries’ mission of supporting research and education by preserving and enabling access to a digital record of books precisely as they exist in print. And it serves the public by enabling better and more efficient access to library books, e.g., for rural residents with distant libraries, for elderly people and others with mobility or transportation limitations, and for people with disabilities that make holding or reading print books difficult. At the same time, because controlled digital lending is limited by the same principles inherent in traditional lending, its impact on authors and publishers is no different from what they have experienced for as long as libraries have existed.
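For readers who want the mechanics spelled out, here is a minimal sketch of the lending rule the brief describes – the class and method names are mine for illustration, not anything the Internet Archive actually runs:

```python
from dataclasses import dataclass

@dataclass
class Title:
    owned_print_copies: int        # physical copies the library owns
    active_digital_loans: int = 0  # digital checkouts currently out

class ControlledDigitalLending:
    """Toy model of the owned-to-loaned ratio described in the brief."""

    def __init__(self) -> None:
        self.catalog: dict[str, Title] = {}

    def add_title(self, isbn: str, owned_print_copies: int) -> None:
        self.catalog[isbn] = Title(owned_print_copies)

    def borrow(self, isbn: str) -> bool:
        title = self.catalog[isbn]
        # The core CDL constraint: never more digital loans out at once
        # than print copies the library owns.
        if title.active_digital_loans < title.owned_print_copies:
            title.active_digital_loans += 1
            return True
        return False  # the patron waits, just as with a physical copy

    def give_back(self, isbn: str) -> None:
        title = self.catalog[isbn]
        title.active_digital_loans = max(0, title.active_digital_loans - 1)
```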

The filing makes the case that the Internet Archive’s use of controlled digital lending for eBooks is protected by fair use, leaning heavily on the idea that there is no evidence of harm to the copyright holders:

First, the purpose and character of the use favor fair use because IA’s controlled digital lending is noncommercial, transformative, and justified by copyright’s purposes. IA is a nonprofit charity that offers digital library services for free. Controlled digital lending is transformative because it expands the utility of books by allowing libraries to lend copies they own more efficiently and borrowers to use books in new ways. There is no dispute that libraries can lend the print copy of a book by mail to one person at a time. Controlled digital lending enables libraries to do the same thing via the Internet—still one person at a time. And even if this use were not transformative, it would still be favored under the first factor because it furthers copyright’s ultimate purpose of promoting public access to knowledge—a purpose libraries have served for centuries.

Second, the nature of the copyrighted works is neutral because the works are a mix of fiction and non-fiction and all are published.

Third, the amount of work copied is also neutral because copying the entire book is necessary: borrowing a book from a library requires access to all of it.

Fourth, IA’s lending does not harm Publishers’ markets. Controlled digital lending is not a substitute for Publishers’ ebook licenses because it offers a fundamentally different service. It enables libraries to efficiently lend books they own, while ebook licenses allow libraries to provide readers temporary access through commercial aggregators to whatever selection of books Publishers choose to make available, whether the library owns a copy or not. Two experts analyzed the available data and concluded that IA’s lending does not harm Publishers’ sales or ebook licensing. Publishers’ expert offered no contrary empirical evidence.

Weighing the fair use factors in light of copyright’s purposes, the use here is fair. In concluding otherwise, the district court misunderstood controlled digital lending, conflating it with posting an ebook online for anyone to access at any time. The court failed to grasp the key feature of controlled digital lending: the digital copy is available only to the one person entitled to borrow it at a time, just like lending a print book. This error tainted the district court’s analysis of all the factors, particularly the first and fourth. The court compounded that error by failing to weigh the factors in light of the purposes of copyright.

Not surprisingly, I agree with the Internet Archive’s arguments here, but these kinds of cases are always a challenge. Judges have this weird view of copyright law, where they sometimes ignore the actual law, the purpose of the law, and the constitutional underpinnings of the law, and insist that the purpose of copyright law is to award the copyright holders as much money and control as possible.

That’s not how copyright is supposed to work, but judges sometimes seem to forget that. Hopefully, the 2nd Circuit does not. The 2nd Circuit, historically, has been pretty good on fair use issues, so hopefully that holds in this case as well.

The full brief is (not surprisingly) quite well done and detailed and worth reading.

And now we’ll get to see whether or not Penguin Random House really supports “the free flow of ideas” or not…

Source: Internet Archive Files Opening Brief In Its Appeal Of Book Publishers’ Win | Techdirt

People discussing Assisted Dying (Euthanasia) in the UK – apparently it’s still illegal there

Dame Esther Rantzen says a free vote on assisted dying would be top of the agenda if she were PM for a day.

“I think it’s important that the law catches up with what the country wants,” the veteran broadcaster told Radio 4’s Today podcast.

Earlier this year, the 83-year-old announced she had been diagnosed with stage four lung cancer.

Dame Esther told the BBC she is currently undergoing a “miracle” treatment to combat the disease.

However, if her next scan shows the medication is not working “I might buzz off to Zurich”, where assisted dying is legal and she has joined the Dignitas clinic, she said.

She said this decision could be driven in part by her wish that her family’s “last memories of me” are not “painful because if you watch someone you love having a bad death, that memory obliterates all the happy times”.

Source: Dame Esther Rantzen: ‘If I were PM, we would vote on assisted dying’ – BBC News

What civilised country doesn’t allow euthanasia? It’s like a 1970s country where being gay is still illegal. Climb up out of your Brexit-inflicted stone age, Britain!

Research team discovers how to sabotage antibiotic-resistant ‘superbugs’

The typical strategy when treating microbial infections is to blast the pathogen with an antibiotic, which works by getting inside the harmful cell and killing it. This is not as easy as it sounds, because any new antibiotic needs to be both water soluble, so that it can travel easily through the bloodstream, and oily, in order to cross the pathogenic cell’s first line of defense, the cellular membrane. Water and oil, of course, don’t mix, and it’s difficult to design a drug that has enough of both characteristics to be effective.

The difficulty doesn’t stop there, either, because pathogenic cells have developed something called an “efflux pump” that can recognize antibiotics and then safely excrete them from the cell, where they can’t do any harm. If the antibiotic can’t overcome the efflux pump and kill the cell, then the pathogen “remembers” what that specific antibiotic looks like and develops additional efflux pumps to efficiently handle it—in effect, becoming resistant to that particular antibiotic.

One path forward is to find a new antibiotic, or combinations of them, and try to stay one step ahead of the superbugs.

“Or, we can shift our strategy,” says Alejandro Heuck, associate professor of biochemistry and molecular biology at UMass Amherst and the paper’s senior author.

[…]

Like the pathogenic cell, host cells also have thick, difficult-to-penetrate cell walls. In order to breach them, pathogens have developed a syringe-like machine that first secretes two proteins, known as PopD and PopB. Neither PopD nor PopB individually can breach the cell wall, but the two proteins together can create a “translocon”—the cellular equivalent of a tunnel through the cell membrane. Once the tunnel is established, the pathogenic cell can inject other proteins that do the work of infecting the host.

This entire process is called the Type 3 secretion system—and none of it works without both PopB and PopD. “If we don’t try to kill the pathogen,” says Heuck, “then there’s no chance for it to develop resistance. We’re just sabotaging its machine. The pathogen is still alive; it’s just ineffective, and the host has time to use its natural defenses to get rid of the pathogen.”

[…]

Heuck and his colleagues realized that an enzyme class called the luciferases—similar to the ones that cause lightning bugs to glow at night—could be used as a tracer. They split the enzyme into two halves. One half went into the PopD/PopB proteins, and the other half was engineered into a host cell.

These engineered proteins and hosts can be flooded with different chemical compounds. If the host cell suddenly lights up, that means that PopD/PopB successfully breached the cellular wall, reuniting the two halves of the luciferase, causing them to glow. But if the cells stay dark? “Then we know which molecules break the translocon,” says Heuck.

Heuck is quick to point out that his team’s research has not only obvious applications in the world of pharmaceuticals and public health, but that it also advances our understanding of exactly how microbes infect healthy cells. “We wanted to study how [this machinery] worked,” he says, “and then suddenly we discovered that our findings can help solve a public-health problem.”

This research is published in the journal ACS Infectious Diseases.

More information: Hanling Guo et al, Cell-Based Assay to Determine Type 3 Secretion System Translocon Assembly in Pseudomonas aeruginosa Using Split Luciferase, ACS Infectious Diseases (2023). DOI: 10.1021/acsinfecdis.3c00482

Source: Research team discovers how to sabotage antibiotic-resistant ‘superbugs’

AI trained on millions of life stories can predict risk of early death

An artificial intelligence trained on personal data covering the entire population of Denmark can predict people’s chances of dying more accurately than any existing model, even those used in the insurance industry. The researchers behind the technology say it could also have a positive impact in early prediction of social and health problems – but must be kept out of the hands of big business.

Sune Lehmann Jørgensen at the Technical University of Denmark and his colleagues used a rich dataset from Denmark that covers education, visits to doctors and hospitals, any resulting diagnoses, income and occupation for 6 million people from 2008 to 2020.

They converted this dataset into words that could be used to train a large language model, the same technology that powers AI apps such as ChatGPT. These models work by looking at a series of words and determining which word is statistically most likely to come next, based on vast amounts of examples. In a similar way, the researchers’ Life2vec model can look at a series of life events that form a person’s history and determine what is most likely to happen next.
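New Scientist doesn’t publish the model’s internals, but the idea of turning a life history into a “sentence” can be sketched roughly like this – the event names and token format below are invented for illustration and are not the Life2vec vocabulary:

```python
# Hypothetical life-event records for one person (illustrative only).
life_events = [
    {"year": 2010, "type": "education", "value": "bachelor_degree"},
    {"year": 2012, "type": "job",       "value": "software_engineer"},
    {"year": 2015, "type": "diagnosis", "value": "J45_asthma"},
    {"year": 2018, "type": "income",    "value": "decile_7"},
]

def to_tokens(events):
    """Turn a person's event history into a flat token sequence, oldest first."""
    return [f"<{e['type']}:{e['value']}>" for e in sorted(events, key=lambda e: e["year"])]

sequence = to_tokens(life_events)
print(" ".join(sequence))
# A transformer trained on millions of such sequences then predicts the most likely
# next token (next life event), much as a chatbot predicts the next word.
```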

In experiments, Life2vec was trained on all but the last four years of the data, which was held back for testing. The researchers took data on a group of people aged 35 to 65, half of whom died between 2016 and 2020, and asked Life2vec to predict who lived and who died. It was 11 per cent more accurate than any existing AI model or the actuarial life tables used to price life insurance policies in the finance industry.

The model was also able to predict the results of a personality test in a subset of the population more accurately than AI models trained specifically to do the job.

Jørgensen believes that the model has consumed enough data that it is likely to be able to shed light on a wide range of health and social topics. This means it could be used to predict health issues and catch them early, or by governments to reduce inequality. But he stresses that it could also be used by companies in a harmful way.

“Clearly, our model should not be used by an insurance company, because the whole idea of insurance is that, by sharing the lack of knowledge of who is going to be the unlucky person struck by some incident, or death, or losing your backpack, we can kind of share this burden,” says Jørgensen.

But technologies like this are already out there, he says. “They’re likely being used on us already by big tech companies that have tonnes of data about us, and they’re using it to make predictions about us.”

Source: AI trained on millions of life stories can predict risk of early death | New Scientist

Internet Architecture Board hits out at US, EU, UK client-side scanning (spying on everything on your phone and PC all the time) plans – to save (heard it before?) kids

[…]

Apple brought widespread attention to this so-called client-side scanning in August 2021 when it announced plans to examine photos on iPhones and iPads before they were synced to iCloud, as a safeguard against the distribution of child sexual abuse material (CSAM). Under that plan, if someone’s files were deemed to be CSAM, the user could lose their iCloud account and be reported to the cops.

As the name suggests, client-side scanning involves software on a phone or some other device automatically analyzing files for unlawful photos and other content, and then performing some action – such as flagging or removing the documents or reporting them to the authorities. At issue, primarily, is the loss of privacy from the identification process – how will that work with strong encryption, and do the files need to be shared with an outside service? Then there’s the reporting process – how accurate is it, is there any human intervention, and what happens if your gadget wrongly fingers you to the cops?
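Mechanically, the simplest form of client-side scanning is just hashing files on the device and comparing them against an opaque blocklist before they are synced or sent. The sketch below uses a plain SHA-256 exact match purely for illustration; real proposals such as Apple’s used perceptual hashes so that re-encoded images still match, and none of the names here come from any vendor’s actual code:

```python
import hashlib
from pathlib import Path

# Hypothetical blocklist, populated from an opaque database the user cannot inspect.
KNOWN_BAD_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def scan_file(path: Path) -> bool:
    """Return True if the file's hash appears on the blocklist."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return digest in KNOWN_BAD_HASHES

def scan_before_upload(paths: list[Path]) -> None:
    for p in paths:
        if scan_file(p):
            # In the proposals being criticized, this is where the device would
            # flag the file or report the user to an outside service.
            print(f"match: {p}")
```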

The iGiant’s plan was pilloried by advocacy organizations and by customers on technical and privacy grounds. Ultimately Apple abandoned the effort and went ahead with offering iCloud encryption – a level of privacy that prompted political pushback at other tech titans.

Proposals for client-side scanning … mandate unrestricted access to private content and therefore undermine end-to-end encryption and bear the risk to become a widespread facilitator of surveillance and censorship

Client-side scanning has since reappeared, this time on legislative agendas. And the IAB – a research committee for the Internet Engineering Task Force (IETF), a crucial group of techies who help keep the ‘net glued together – thinks that’s a bad idea.

“A secure, resilient, and interoperable internet benefits the public interest and supports human rights to privacy and freedom of opinion and expression,” the IAB declared in a statement just before the weekend.

[…]

Specifically, the IAB cites Europe’s planned “Regulation laying down rules to prevent and combat child sexual abuse” (2022/0155(COD)), the UK Online Safety Act of 2023, and the US Earn-It Act, all of which contemplate regulatory regimes that have the potential to require the decryption of encrypted content in support of mandated surveillance.

The administrative body acknowledges the social harm done through the distribution of illegal content on the internet and the need to protect internet users. But it contends indiscriminate surveillance is not the answer.

The UK has already passed its Online Safety Act legislation, which authorizes telecom watchdog Ofcom to demand decryption of communications on grounds of child safety – though government officials have admitted that’s not technically feasible at the moment.

Europe, already under fire for concealing who it has consulted on client-side scanning, and the US appear to be heading down a similar path.

For the IAB and IETF, client-side scanning initiatives echo other problematic technology proposals – including wiretaps, cryptographic backdoors, and pervasive monitoring.

“The IAB opposes technologies that foster surveillance as they weaken the user’s expectations of private communication which decreases the trust in the internet as the core communication platform of today’s society,” the organization wrote. “Mandatory client-side scanning creates a tool that is straightforward to abuse as a widespread facilitator of surveillance and censorship.”

[…]

Source: Internet Architecture Board hits out at client-side scanning • The Register

As soon as they take away privacy to save kids, you know they will expand the remit, as governments have always done. The fact is that mass surveillance is not particularly effective, even with AI, except in making people feel watched and thus altering their behaviour. This feeling of always being spied upon is much, much worse for whole generations of children than the tiny number of sexual predators that might actually be caught.

How To Build Your Own Custom ChatGPT Bot

There’s something new and powerful for ChatGPT users to play around with: Custom GPTs. These bespoke bots are essentially more focused, more specific versions of the main ChatGPT model, enabling you to build something for a particular purpose without using any coding or advanced knowledge of artificial intelligence.

The name GPT stands for Generative Pre-trained Transformer, as it does in ChatGPT. Generative is the ability to produce new content outside of what an AI was trained on. Pre-trained indicates that it’s already been trained on a significant amount of material, and Transformer is a type of AI architecture adept at understanding language.

You might already be familiar with using prompts to style the responses of ChatGPT: You can tell it to answer using simple language, for example, or to talk to you as if it were an alien from another world. GPTs build on this idea, enabling you to create a bot with a specific personality.
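If you want to see the same “instructions shape the personality” idea outside the GPT builder, the standard chat API makes it explicit. A minimal sketch using the openai Python package – the model name and the tutor persona are assumptions for illustration:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4",  # model name chosen for illustration
    messages=[
        # The system message plays the role a custom GPT's instructions play:
        # it fixes the bot's tone and scope before the user types anything.
        {
            "role": "system",
            "content": "You are a patient tutor who explains board-game rules in plain language.",
        },
        {"role": "user", "content": "How does castling work in chess?"},
    ],
)

print(response.choices[0].message.content)
```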

You can build a GPT using a question-and-answer routine. (Screenshot: ChatGPT)

What’s more, you can upload your own material to add to your GPT’s knowledge banks—it might be samples of your own writing, for instance, or copies of reports produced by your company. GPTs will always have access to the data you upload to them and be able to browse the web at large.

GPTs are exclusive to Plus and Enterprise users, though everyone should get access soon. OpenAI plans to open a GPT store where you can sell your AI bot creations if you think others will find them useful, too. Think of an app store of sorts but for bespoke AI bots.

“GPTs are a new way for anyone to create a tailored version of ChatGPT to be more helpful in their daily life, at specific tasks, at work, or at home—and then share that creation with others,” explains OpenAI in a blog post. “For example, GPTs can help you learn the rules to any board game, help teach your kids math, or design stickers.”

Getting started with GPT building

Assuming you have a Plus or Enterprise account, click Explore on the left of the web interface to see some example GPTs: There’s one to help you with your creative writing, for example, and one to produce a particular style of digital painting. When you’re ready to start building your own, click Create a GPT at the top.

There are two tabs to swap between: Create for building a GPT through a question-and-answer routine and Configure for more deliberate GPT production. If you’re just getting started, it’s best to stick with Create, as it’s a more user-friendly option and takes you step-by-step through the process.

Respond to the prompts of the GPT Builder bot to explain what you want the new GPT to be able to do: Explain certain concepts, give advice in specific areas, generate particular kinds of text or images, or whatever it is. You’ll be asked to give the GPT a name and choose an image for it, though you’ll get suggestions for these, too.

You’re able to test out your GPT as you build it. (Screenshot: ChatGPT)

As you answer the prompts from the builder, the GPT will begin to take form in the preview pane on the right—together with some example inputs that you might want to give to it. You might be asked about specific areas of expertise that you want the bot to have and the sorts of answers you want the bot to give in terms of their length and complexity. The building process will vary, though, depending on the GPT you’re creating.

After you’ve worked through the basics of making a GPT, you can try it out and switch to the Configure tab to add more detail and depth. You’ll see that your responses so far have been used to craft a set of instructions for the GPT about its identity and how it should answer your questions. Some conversation starters will also be provided.

You can edit these instructions if you need to and click Upload files to add to the GPT’s knowledge banks (handy if you want it to answer questions about particular documents or topics, for instance). Most common document formats, including PDFs and Word files, seem to be supported, though there’s no official list of supported file types.

GPTs can be kept to yourself or shared with others. (Screenshot: ChatGPT)

The checkboxes at the bottom of the Configure tab let you choose whether or not the GPT has access to web browsing, DALL-E image creation, and code interpretation capabilities, so make your choices accordingly. If you add any of these capabilities, they’ll be called upon as and when needed—there’s no need to specifically ask for them to be used, though you can if you want.

When your GPT is working the way you want it to, click the Save button in the top right corner. You can choose to keep it to yourself or make it available to share with others. After you click on Confirm, you’ll be able to access the new GPT from the left-hand navigation pane in the ChatGPT interface on the web.

GPTs are ideal if you find yourself often asking ChatGPT to complete tasks in the same way or cover the same topics—whether that’s market research or recipe ideas. The GPTs you create are available whenever you need them, alongside access to the main ChatGPT engine, which you can continue to tweak and customize as needed.

Source: How To Build Your Own Custom ChatGPT Bot