I use as many ad-blocking programs as possible, but no matter how many I install, real-life advertising is still there, grabbing my attention when I’m just trying to go for a walk. Thankfully, there may be a solution on the horizon. Software engineer Stijn Spanhove recently posted a concept video showing what real-time, real-life ad-blocking looks like on a pair of Snap Spectacles, and I really want it. Check it out:
The idea is that the AI in your smart glasses recognizes advertisements in your visual field and “edits them out” in real time, sparing you from ever seeing what advertisers want you to see.
While Spanhove’s video shows a red block over the offending ads, you could conceivably cover that Wendy’s ad with anything you want—an abstract painting, a photo of your family, an ad for Arby’s, etc.
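Conceptually, the pipeline is simple: detect ad-like regions in each camera frame, then paint replacement content over them before the frame reaches the display. Here is a minimal, purely hypothetical sketch of that loop; the detector is stubbed out, and a real system would run an on-device object-detection model in its place.

```python
# Rough sketch of per-frame ad "blocking" for AR glasses: find ad regions, overwrite them.
# detect_ads() is a stand-in for a real detection model; all names here are hypothetical.
import numpy as np

def detect_ads(frame: np.ndarray) -> list[tuple[int, int, int, int]]:
    """Stand-in for an ad-detection model; returns (x, y, width, height) boxes."""
    return [(40, 30, 120, 60)]  # pretend a billboard was found here

def cover_ads(frame: np.ndarray, replacement_color=(200, 30, 30)) -> np.ndarray:
    """Paint each detected ad region with replacement content (here, a flat color)."""
    out = frame.copy()
    for x, y, w, h in detect_ads(out):
        out[y:y + h, x:x + w] = replacement_color
    return out

camera_frame = np.zeros((240, 320, 3), dtype=np.uint8)   # fake camera frame
display_frame = cover_ads(camera_frame)
print(display_frame[35, 45])  # pixels inside the detected box now show the overlay color
```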
The Supreme Court this morning took a chainsaw to the First Amendment on the internet, and the impact is going to be felt for decades going forward. In the FSC v. Paxton case, the Court upheld the very problematic 5th Circuit ruling that age verification online is acceptable under the First Amendment, despite multiple earlier Supreme Court rulings that said the opposite.
Justice Thomas wrote the 6-3 majority opinion, with Justice Kagan writing the dissent (joined by Sotomayor and Jackson). The practical effect: states can now force websites to collect government IDs from anyone wanting to view adult content, creating a massive chilling effect on protected speech and opening the door to much broader online speech restrictions.
Thomas accomplished this by pulling off some remarkable doctrinal sleight of hand. He ignored the Court’s own precedents in Ashcroft v. ACLU by pretending online age verification is just like checking ID at a brick-and-mortar store (it’s not), applied a weaker “intermediate scrutiny” standard instead of the “strict scrutiny” that content-based speech restrictions normally require, and—most audaciously—invented an entirely new category of “partially protected” speech that conveniently removes First Amendment protections exactly when the government wants to burden them. As Justice Kagan’s scathing dissent makes clear, this is constitutional law by result-oriented reasoning, not principled analysis.
[…]
The real danger here isn’t just Texas’s age verification law—it’s that Thomas has handed every state legislature a roadmap for circumventing the First Amendment online. His reasoning that “the internet has changed” and that intermediate scrutiny suffices for content-based restrictions will be cited in countless future cases targeting online speech. Expect age verification requirements to be attempted for social media platforms (protecting kids from “harmful” political content), for news sites (preventing minors from accessing “disturbing” coverage), and for any online speech that makes moral authorities uncomfortable.
And yes, to be clear, the majority opinion seeks to limit this just to content deemed “obscene” to avoid such problems, but it’s written so broadly as to at least open up challenges along these lines.
Thomas’s invention of “partially protected” speech, a category that somehow lets the government burden speech even where it remains protected, is particularly insidious because it’s infinitely expandable. Any time the government wants to burden speech, it can simply argue that the burden is built into the right itself—making First Amendment protection vanish exactly when it’s needed most. This isn’t constitutional interpretation; it’s constitutional gerrymandering.
The conservative justices may think they’re just protecting children from pornography, but they’ve actually written a permission slip for the regulatory state to try to control online expression.
[…]
By creating his “partially protected” speech doctrine and blessing age verification burdens that would have been unthinkable a decade ago, Thomas has essentially told state governments: find the right procedural mechanism, and you can burden any online speech you dislike. Today it’s pornography. Tomorrow it will be political content that legislators deem “harmful to minors,” news coverage that might “disturb” children, or social media discussions that don’t align with official viewpoints.
The conservatives may have gotten their victory against online adult content, but they’ve handed every future administration—federal and state—a blueprint for dismantling digital free speech. They were so scared of nudity that they broke the Constitution. The rest of us will be living with the consequences for decades.
The Danish government is to clamp down on the creation and dissemination of AI-generated deepfakes by changing copyright law to ensure that everybody has the right to their own body, facial features and voice.
The Danish government said on Thursday it would strengthen protection against digital imitations of people’s identities with what it believes to be the first law of its kind in Europe.
[…]
It defines a deepfake as a very realistic digital representation of a person, including their appearance and voice.
[…]
“In the bill we agree and are sending an unequivocal message that everybody has the right to their own body, their own voice and their own facial features, which is apparently not how the current law is protecting people against generative AI.”
He added: “Human beings can be run through the digital copy machine and be misused for all sorts of purposes and I’m not willing to accept that.”
[…]
The changes to Danish copyright law will, once approved, theoretically give people in Denmark the right to demand that online platforms remove such content if it is shared without consent.
It will also cover “realistic, digitally generated imitations” of an artist’s performance without consent. Violation of the proposed rules could result in compensation for those affected.
The government said the new rules would not affect parodies and satire, which would still be permitted.
An interesting take on it. I am curious how this goes – enforcing copyright can hinge on very fine details, so what happens if someone alters the subject’s eyebrows in a deepfake, making them a millimetre longer? Does that invalidate the whole claim?
A federal judge sided with Meta on Wednesday in a lawsuit brought against the company by 13 book authors, including Sarah Silverman, that alleged the company had illegally trained its AI models on their copyrighted works.
Federal Judge Vince Chhabria issued a summary judgment — meaning the judge was able to decide on the case without sending it to a jury — in favor of Meta, finding that the company’s training of AI models on copyrighted books in this case fell under the “fair use” doctrine of copyright law and thus was legal.
The decision comes just a few days after a federal judge sided with Anthropic in a similar lawsuit. Together, these cases are shaping up to be a win for the tech industry, which has spent years in legal battles with media companies arguing that training AI models on copyrighted works is fair use.
However, these decisions aren’t the sweeping wins some companies hoped for — both judges noted that their cases were limited in scope.
Judge Chhabria made clear that this decision does not mean that all AI model training on copyrighted works is legal, but rather that the plaintiffs in this case “made the wrong arguments” and failed to develop sufficient evidence in support of the right ones.
“This ruling does not stand for the proposition that Meta’s use of copyrighted materials to train its language models is lawful,” Judge Chhabria said in his decision. Later, he said, “In cases involving uses like Meta’s, it seems like the plaintiffs will often win, at least where those cases have better-developed records on the market effects of the defendant’s use.”
Judge Chhabria ruled that Meta’s use of copyrighted works in this case was transformative — meaning the company’s AI models did not merely reproduce the authors’ books.
Furthermore, the plaintiffs failed to convince the judge that Meta’s copying of the books harmed the market for those authors, which is a key factor in determining whether copyright law has been violated.
“The plaintiffs presented no meaningful evidence on market dilution at all,” said Judge Chhabria.
I have covered the Silverman et al case here several times before, and it was flawed on every level, which is why it was thrown out against OpenAI. Most importantly, this judge and the judge in the Anthropic case both ruled that an AI’s use of ingested works is transformative and not a copy. Just as when you read a book: you can recall bits of it for inspiration, but you don’t (well, most people don’t!) remember word for word what you read.
[…]As highlighted in a Reddit post, Google recently sent out an email to some Android users informing them that Gemini will now be able to “help you use Phone, Messages, WhatsApp, and Utilities on your phone whether your Gemini Apps Activity is on or off.” That change, according to the email, will take place on July 7. In short, that sounds—at least on the surface—like whether you have opted in or out, Gemini has access to all of those very critical apps on your device.
Google continues in the email, which was screenshotted by Android Police, by stating that “if you don’t want to use these features, you can turn them off in Apps settings page,” but doesn’t elaborate on where to find that page or what exactly will be disabled if you avail yourself of that setting option. Notably, when App Activity is enabled, Google stores information on your Gemini usage (inputs and responses, for example) for up to 72 hours, and some of that data may actually be reviewed by a human. That’s all to say that enabling Gemini access to those critical apps by default may be a bridge too far for some who are worried about protecting their privacy or wary of AI in general.
[…]
The worst part is, if we’re not careful, all of that information might end up being collected without our consent, or at least without our knowledge. I don’t know about you, but as much as I want AI to order me a cab, I think keeping my text messages private is a higher priority.
A federal judge in San Francisco ruled late on Monday that Anthropic’s use of books without permission to train its artificial intelligence system was legal under U.S. copyright law.
Siding with tech companies on a pivotal question for the AI industry, U.S. District Judge William Alsup said Anthropic made “fair use” of books by writers Andrea Bartz, Charles Graeber and Kirk Wallace Johnson to train its Claude large language model.
Alsup also said, however, that Anthropic’s copying and storage of more than 7 million pirated books in a “central library” infringed the authors’ copyrights and was not fair use. The judge has ordered a trial in December to determine how much Anthropic owes for the infringement.
U.S. copyright law says that willful copyright infringement can justify statutory damages of up to $150,000 per work.
An Anthropic spokesperson said the company was pleased that the court recognized its AI training was “transformative” and “consistent with copyright’s purpose in enabling creativity and fostering scientific progress.”
The writers filed the proposed class action against Anthropic last year, arguing that the company, which is backed by Amazon (AMZN.O) and Alphabet (GOOGL.O), used pirated versions of their books without permission or compensation to teach Claude to respond to human prompts.
The proposed class action is one of several lawsuits brought by authors, news outlets and other copyright owners against companies including OpenAI, Microsoft (MSFT.O) and Meta Platforms (META.O) over their AI training.
The doctrine of fair use allows the use of copyrighted works without the copyright owner’s permission in some circumstances.
Fair use is a key legal defense for the tech companies, and Alsup’s decision is the first to address it in the context of generative AI.
AI companies argue their systems make fair use of copyrighted material to create new, transformative content, and that being forced to pay copyright holders for their work could hamstring the burgeoning AI industry.
Anthropic told the court that it made fair use of the books and that U.S. copyright law “not only allows, but encourages” its AI training because it promotes human creativity. The company said its system copied the books to “study Plaintiffs’ writing, extract uncopyrightable information from it, and use what it learned to create revolutionary technology.”
Copyright owners say that AI companies are unlawfully copying their work to generate competing content that threatens their livelihoods.
Alsup agreed with Anthropic on Monday that its training was “exceedingly transformative.”
“Like any reader aspiring to be a writer, Anthropic’s LLMs trained upon works not to race ahead and replicate or supplant them — but to turn a hard corner and create something different,” Alsup said.
Alsup also said, however, that Anthropic violated the authors’ rights by saving pirated copies of their books as part of a “central library of all the books in the world” that would not necessarily be used for AI training.
Anthropic and other prominent AI companies including OpenAI and Meta Platforms have been accused of downloading pirated digital copies of millions of books to train their systems.
Anthropic had told Alsup in a court filing that the source of its books was irrelevant to fair use.
“This order doubts that any accused infringer could ever meet its burden of explaining why downloading source copies from pirate sites that it could have purchased or otherwise accessed lawfully was itself reasonably necessary to any subsequent fair use,” Alsup said on Monday.
This makes sense to me. The training itself is much like any person reading a book and using that as inspiration. It does not copy it. And any reader should have bought (or borrowed) the book. Why Anthropic apparently used pirated copies and why it kept a separate library of the books is beyond me.
An anonymous reader quotes a report from Ars Technica: After a court ordered OpenAI to “indefinitely” retain all ChatGPT logs, including deleted chats, of millions of users, two panicked users tried and failed to intervene. The order sought to preserve potential evidence in a copyright infringement lawsuit raised by news organizations. In May, Judge Ona Wang, who drafted the order, rejected the first user’s request (PDF) on behalf of his company simply because the company should have hired a lawyer to draft the filing. But more recently, Wang rejected (PDF) a second claim from another ChatGPT user, and that order went into greater detail, revealing how the judge is considering opposition to the order ahead of oral arguments this week, which were urgently requested by OpenAI.
The second request (PDF) to intervene came from a ChatGPT user named Aidan Hunt, who said that he uses ChatGPT “from time to time,” occasionally sending OpenAI “highly sensitive personal and commercial information in the course of using the service.” In his filing, Hunt alleged that Wang’s preservation order created a “nationwide mass surveillance program” affecting and potentially harming “all ChatGPT users,” who received no warning that their deleted and anonymous chats were suddenly being retained. He warned that the order limiting retention to just ChatGPT outputs carried the same risks as including user inputs, since outputs “inherently reveal, and often explicitly restate, the input questions or topics input.”
Hunt claimed that he only learned that ChatGPT was retaining this information — despite policies specifying they would not — by stumbling upon the news in an online forum. Feeling that his Fourth Amendment and due process rights were being infringed, Hunt sought to influence the court’s decision and proposed a motion to vacate the order that said Wang’s “order effectively requires Defendants to implement a mass surveillance program affecting all ChatGPT users.” […] OpenAI will have a chance to defend panicked users on June 26, when Wang hears oral arguments over the ChatGPT maker’s concerns about the preservation order. In his filing, Hunt explained that among his worst fears is that the order will not be blocked and that chat data will be disclosed to news plaintiffs who may be motivated to publicly disseminate the deleted chats. That could happen if news organizations find evidence of deleted chats they say are likely to contain user attempts to generate full news articles.
Wang suggested that there is no risk at this time since no chat data has yet been disclosed to the news organizations. That could mean that ChatGPT users may have better luck intervening after chat data is shared, should OpenAI’s fight to block the order this week fail. But that’s likely no comfort to users like Hunt, who worry that OpenAI merely retaining the data — even if it’s never shared with news organizations — could cause severe and irreparable harms. Some users appear to be questioning how hard OpenAI will fight. In particular, Hunt is worried that OpenAI may not prioritize defending users’ privacy if other concerns — like “financial costs of the case, desire for a quick resolution, and avoiding reputational damage” — are deemed more important, his filing said.
NB you would be pretty dense to think that anything you put into an externally hosted GPT would not be kept and used by that company for AI training and other analysis, so it’s not surprising that this data could be (and will be) requisitioned by other corporations and of course governments.
Makers of air fryers, smart speakers, fertility trackers and smart TVs have been told to respect people’s rights to privacy by the UK Information Commissioner’s Office (ICO).
People have reported feeling powerless to control how data is gathered, used and shared in their own homes and on their bodies.
After reports of air fryers designed to listen in to their surroundings and public concerns that digitised devices collect an excessive amount of personal information, the data protection regulator has issued its first guidance on how people’s personal information should be handled.
It is demanding that manufacturers and data handlers ensure data security, be transparent with consumers and regularly delete collected information.
Stephen Almond, the executive director for regulatory risk at the ICO, said: “Smart products know a lot about us: who we live with, what music we like, what medication we are taking and much more.
“They are designed to make our lives easier, but that doesn’t mean they should be collecting an excessive amount of information … we shouldn’t have to choose between enjoying the benefits of smart products and our own privacy.
“We all rightly have a greater expectation of privacy in our own homes, so we must be able to trust smart products are respecting our privacy, using our personal information responsibly and only in ways we would expect.”
The new guidance cites a wide range of devices that are broadly known as part of the “internet of things”, which collect data that needs to be carefully handled. These include smart fertility trackers that record the dates of their users’ periods and their body temperature, send this data back to the manufacturer’s servers and infer fertile days from it.
Smart speakers that listen in not only to their owner but also to other members of their family and visitors to their home should be designed so users can configure product settings to minimise the personal information they collect.
Many porn sites, including Pornhub, YouPorn, and RedTube, all went dark earlier this month in France to protest a new age verification law that would have required the websites to collect ID from users. But those sites went back online Friday after a new ruling from a French court suspended enforcement of the law until it can be determined whether it conflicts with existing European Union rules, according to France24.
Aylo, the company that owns Pornhub, has previously said that requiring age verification “creates an unacceptable security risk” and warned that setting up that kind of process makes people vulnerable to hacks and leaks of sensitive information. The French law would’ve required Aylo to verify user ages with a government-issued ID or a credit card.
[…]
Age verification laws for porn websites have been a controversial issue globally, with the U.S. seeing a dramatic uptick in states passing such laws in recent years. Nineteen states now have laws that require age verification for porn sites, meaning that anyone who wants to access Pornhub in places like Florida and Texas needs to use a VPN.
Australia recently passed a law banning social media use for anyone under the age of 16, regardless of explicit content, which is currently making its way through the expected challenges. The law had a 12-month buffer built in to allow the country’s internet safety regulator to figure out how to implement it. Tech giants like Meta and TikTok were dealt a blow on Friday after the commission issued a report stating that age verification “can be private, robust and effective,” though trials are ongoing about how to best make the law work, according to ABC News in Australia.
Updated July 14: The Internet-Wide Day of Action to Save Net Neutrality on July 12 enjoyed a healthy turnout. Thousands of companies and some visible tech celebrities united against the FCC proposal called Restoring Internet Freedom, by which the new FCC chairman Ajit Pai hopes to loosen regulations for the ISPs and telecom companies that provide Internet service nationwide. The public has until mid-August to give comments to the FCC.
The protests took many forms. Organizations including the American Civil Liberties Union, Reddit, The Nation, and Greenpeace placed website blockers to imitate what would happen if the FCC loosened regulations. Other companies participating online displayed images on their sites that simulated a slowed-down Internet, or demanded extra money for faster access.
For the July 12 Internet-Wide Day of Action advocating net neutrality, sites including The Nation displayed images showing people what the web would be like if corporations operated it for a profit. (Image: Haley Velasco/IDG)
Tech giant Google published a blog post in defense of net neutrality. “Today’s open internet ensures that both new and established services, whether offered by an established internet company like Google, a broadband provider or a small startup, have the same ability to reach users on an equal playing field.”
Facebook COO Sheryl Sandberg posted to her page about net neutrality as part of the July 12 Internet-Wide Day of Action. (Image: Melissa Riofrio/IDG)
Facebook joined in, with COO Sheryl Sandberg and CEO Mark Zuckerberg both posting messages on Facebook. “Keeping the internet open for everyone is crucial. Not only does it promote innovation, but it lets people access information that can change their lives and gives voice to those who might not otherwise be heard,” Sandberg said.
In Washington, FCC Commissioner Mignon Clyburn said in a statement that she supports a free and open internet. “Its benefits can be felt across our economy and around the globe,” she said. “That is why I am excited that on this day consumers, entrepreneurs and companies of all sizes, including broadband providers and internet startups, are speaking out with a unified voice in favor of strong net neutrality rules grounded in Title II. Knowing that the arc of success is bent in our favor and we are on the right side of history, I remain committed to doing everything I can to protect the most empowering and inclusive platform of our time.”
Sen. Ron Wyden, D-Ore., and Sen. Brian Schatz, D-Hawaii, wrote a letter to the FCC on Tuesday, one day early, to make sure the FCC’s system was ready to withstand a cyberattack, as well as the large volume of calls expected Wednesday.
What led up to the protest
The July 12 Internet-Wide Day of Action strove to highlight how the web would look if telecom companies were allowed to control it for profit. Organizing groups such as Fight for the Future, Free Press Action Fund, and Demand Progress want their actions to call attention to the potential impact on everyday users, such as having to pay for faster internet access.
Where net neutrality stands: Under the Open Internet Order enacted by the FCC in 2015, internet service providers cannot block access to content on websites or apps, interfere with loading speeds, or provide favoritism to those who pay extra. However, FCC Chairman Ajit Pai, selected by President Trump in January, has been pushing to roll back those rules, which would leave ISPs free to control access or charge fees without regulation. A Senate bill that would relax regulations, called Restoring Internet Freedom (S.993), was introduced in May and was referred to the Committee on Commerce, Science, and Transportation.
What this protest is for: The July 12 protest, which organizers are calling the Internet-Wide Day of Action to Save Net Neutrality, will fight for free speech on the internet under Title II of the Communications Act of 1934. On that date, websites and apps that support net neutrality will display alerts to mimic what could happen if the FCC rolled back the rules.
Who will come together for the protest: More than 180 companies including Amazon, Twitter, Etsy, OkCupid, and Vimeo, along with advocacy groups such as the ACLU, Change.org, and Greenpeace, will join the protest and urge their users and followers to do the same.
Where the protest will take place: Sites that support net neutrality will call attention to their cause by simulating what users would experience if telecom companies were allowed to control web access. Examples will include a simulated “spinning wheel of death” (when a webpage or app won’t load), blocked notifications, and requests to upgrade to paid plans. Organizers are also calling on supporters to stage in-person protests at congressional offices and post protest selfies on social media with the tag #savethenet.
Who opposes the protest: FCC Chairman Ajit Pai and large telecom companies, such as Verizon and Comcast, want to relax net neutrality rules. Some claim that an unregulated internet will allow for more competition in the marketplace, as well as oversight of privacy and security measures.
Why this protest matters: The July 12 protest is projected to be one of the largest digital protests ever planned, with more than 50,000 people, sites, and organizations participating. If successful, it would be reminiscent of a 2012 blackout for freedom of speech on the internet to protest the Stop Online Piracy Act and the PROTECT IP Act, and an internet slowdown in 2014 to demand discussions about net neutrality.
In less than three months’ time, almost no civil servant, police officer or judge in Schleswig-Holstein will be using any of Microsoft’s ubiquitous programs at work.
Instead, the northern state will turn to open-source software to “take back control” over data storage and ensure “digital sovereignty”, its digitalisation minister, Dirk Schroedter, told AFP.
“We’re done with Teams!” he said, referring to Microsoft’s messaging and collaboration tool and speaking on a video call — via an open-source German program, of course.
The radical switch-over affects half of Schleswig-Holstein’s 60,000 public servants, with 30,000 or so teachers due to follow suit in coming years.
The state’s shift towards open-source software began last year.
The current first phase involves ending the use of Word and Excel software, which are being replaced by LibreOffice, while Open-Xchange is taking the place of Outlook for emails and calendars.
Over the next few years, there will also be a switch to the Linux operating system in order to complete the move away from Windows.
[…]
“The geopolitical developments of the past few months have strengthened interest in the path that we’ve taken,” said Schroedter, adding that he had received requests for advice from across the world.
“The war in Ukraine revealed our energy dependencies, and now we see there are also digital dependencies,” he said.
The government in Schleswig-Holstein is also planning to shift the storage of its data to a cloud system not under the control of Microsoft, said Schroedter.
In an interview with Danish broadsheet newspaper Politiken [Danish], Caroline Olsen, the country’s Minister for Digital Affairs, said she is planning to lead by example and start removing Microsoft software and tools from the ministry. The minister told Jutland’s Nordyske [🇩🇰 Danish, but not paywalled] the plan is that half the staff’s computers – including her own – would have LibreOffice in place of Microsoft Office 365 in the first month, with the goal of total replacement by the end of the year.
Given that earlier this year, US President Donald Trump was making noises about taking over Greenland, an autonomous territory of Denmark, it seems entirely understandable for the country to take a markedly increased interest in digital sovereignty – as Danish Ruby guru David Heinemeier Hansson explained just a week ago.
[…]
The more pressing problem tends to be groupware – specifically, the dynamic duo of Outlook and Exchange, as Bert Hubert told The Register earlier this year. Several older versions go end-of-life soon, along with Windows 10. Modernizing is expensive, which makes migrating look more appealing.
A primary alternative to Redmond, of course, is Mountain View. Google’s offerings can do the job. In December 2021, the Nordic Choice hotel group was hit by Conti ransomware, but rather than pay to regain access to its machines, it switched to ChromeOS.
The thing is, this is jumping from one US-based option to another. That’s why France rejected both a few years ago, and we reported on renewed EU interest early the following year. Such things may be why French SaaS groupware offering La Suite numérique is looking quite complete and polished these days.
EU organizations can host their own cloud office suite thanks to Collabora’s CODE, which runs LibreOffice on an organization’s own webservers – easing deployment and OS migration.
Not content to wait for open letters to influence the European Commission, Dutch parliamentarians have taken matters into their own hands by passing eight motions urging the government to ditch US-made tech for homegrown alternatives.
The motions were submitted and all passed yesterday during a discussion in the Netherlands’ House of Representatives on concerns about government data being shipped overseas. While varied, they all center on the theme of calling on the government to replace software and hardware made by US tech companies, acquire new contracts with Dutch companies who offer similar services, and generally safeguard the country’s digital sovereignty.
“With each IT service our government moves to American tech giants, we become dumber and weaker,” Dutch MP Barbara Kathmann, author of four of the motions, told The Register. “If we continue outsourcing all of our digital infrastructure to billionaires that would rather escape Earth by building space rockets, there will be no Dutch expertise left.”
Kathmann’s measures specifically call on the government to halt the migration of Dutch information and communications technology to American cloud services, create a Dutch national cloud, repatriate the .nl top-level domain to systems operating within the Netherlands, and prepare risk analyses and exit strategies for all government systems hosted by US tech giants. The other measures make similar calls to eliminate the presence of US tech companies in government systems and to prefer local alternatives.
“We have identified the causes of our full dependency on US services,” Kathmann told us. “We have to start somewhere – by pausing all thoughtless migrations to American hyperscalers, new opportunities open up for Dutch and European providers.”
The motions passed by the Dutch parliament come as the Trump administration ratchets up tensions with a number of US allies – the EU among them. Nearly 100 EU-based tech companies and lobbyists sent an open letter to the European Commission this week asking it to find a way to divest the bloc from systems managed by US companies due to “the stark geopolitical reality Europe is now facing.”
The only question is, how did the people in charge of procurement allow themselves to buy into 100% US, closed-source vendor lock-in in the first place, gutting the EU software development market?
Last month, ahead of the launch of the Switch 2 and its GameChat communication features, Nintendo updated its privacy policy to note that the company “may also monitor and record your video and audio interactions with other users.” Now that the Switch 2 has officially launched, we have a clearer understanding of how the console handles audio and video recorded during GameChat sessions, as well as when that footage may be sent to Nintendo or shared with partners, including law enforcement. Before using GameChat on Switch 2 for the first time, you must consent to a set of GameChat Terms displayed on the system itself. These terms warn that chat content is “recorded and stored temporarily” both on your system and the system of those you chat with. But those stored recordings are only shared with Nintendo if a user reports a violation of Nintendo’s Community Guidelines, the company writes.
That reporting feature lets a user “review a recording of the last three minutes of the latest three GameChat sessions” to highlight a particular section for review, suggesting that chat sessions are not being captured and stored in full. The terms also lay out that “these recordings are available only if the report is submitted within 24 hours,” suggesting that recordings are deleted from local storage after a full day. If a report is submitted to Nintendo, the company warns that it “may disclose certain information to third parties, such as authorities, courts, lawyers, or subcontractors reviewing the reported chats.” If you don’t consent to the potential for such recording and sharing, you’re prevented from using GameChat altogether.
Nintendo is extremely clear that the purpose of its recording and review system is “to protect GameChat users, especially minors” and “to support our ability to uphold our Community Guidelines.” This kind of human moderator review of chats is pretty common in the gaming world and can even apply to voice recordings made by various smart home assistants. […] Overall, the time-limited, local-unless-reported recordings Nintendo makes here seem like a minimal intrusion on the average GameChat user’s privacy. Still, if you’re paranoid about Nintendo potentially seeing and hearing what’s going on in your living room, it’s good to at least be aware of it.
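For what it’s worth, the retention scheme Nintendo describes (the last three minutes of the latest three sessions, gone after 24 hours, uploaded only when a report is filed) maps onto a fairly simple rolling-buffer design. Below is a rough, hypothetical sketch of that idea; the class and method names are my own, and Nintendo’s actual implementation is not public.

```python
# Hypothetical "local-unless-reported" retention sketch: keep only the tail of the most
# recent sessions, expire everything after 24 hours, upload nothing unless reported.
import time
from collections import deque

TAIL_SECONDS = 3 * 60          # only the last three minutes of a session are kept
MAX_SESSIONS = 3               # only the latest three sessions are retained
RETENTION_SECONDS = 24 * 3600  # recordings become unavailable after 24 hours

class ChatRetentionBuffer:
    def __init__(self):
        self.sessions = deque(maxlen=MAX_SESSIONS)  # older sessions fall off automatically

    def record_chunk(self, session_id, chunk, now=None):
        """Append an audio/video chunk, trimming the session to its last three minutes."""
        now = now or time.time()
        if not self.sessions or self.sessions[-1]["id"] != session_id:
            self.sessions.append({"id": session_id, "chunks": deque(), "ended": now})
        session = self.sessions[-1]
        session["chunks"].append((now, chunk))
        session["ended"] = now
        # Drop anything older than the rolling three-minute window.
        while session["chunks"] and session["chunks"][0][0] < now - TAIL_SECONDS:
            session["chunks"].popleft()

    def purge_expired(self, now=None):
        """Delete whole sessions once they are more than 24 hours old."""
        now = now or time.time()
        self.sessions = deque(
            (s for s in self.sessions if s["ended"] >= now - RETENTION_SECONDS),
            maxlen=MAX_SESSIONS,
        )

    def export_for_report(self, now=None):
        """Only a user-initiated report makes recordings leave the device."""
        self.purge_expired(now)
        return [{"id": s["id"], "chunks": list(s["chunks"])} for s in self.sessions]
```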
The United States government has collected DNA samples from upwards of 133,000 migrant children and teenagers—including at least one 4-year-old—and uploaded their genetic data into a national criminal database used by local, state, and federal law enforcement, according to documents reviewed by WIRED. The records, quietly released by the US Customs and Border Protection earlier this year, offer the most detailed look to date at the scale of CBP’s controversial DNA collection program. They reveal for the first time just how deeply the government’s biometric surveillance reaches into the lives of migrant children, some of whom may still be learning to read or tie their shoes—yet whose DNA is now stored in a system originally built for convicted sex offenders and violent criminals.
[…]
Spanning from October 2020 through the end of 2024, the records show that CBP swabbed the cheeks of between 829,000 and 2.8 million people, with experts estimating that the true figure, excluding duplicates, is likely well over 1.5 million. That number includes as many as 133,539 children and teenagers. These figures mark a sweeping expansion of biometric surveillance—one that explicitly targets migrant populations, including children.
[…]
Under current rules, DNA is generally collected from anyone who is also fingerprinted. According to DHS policy, 14 is the minimum age at which fingerprinting becomes routine.
[…]
“Taking DNA from a 4-year old and adding it into CODIS flies in the face of any immigration purpose,” she says, adding, “That’s not immigration enforcement. That’s genetic surveillance.”
In 2024, Glaberson coauthored a report called “Raiding the Genome” that was the first to try to quantify DHS’s 2020 expansion of DNA collection. It found that if DHS continues to collect DNA at the rate the agency itself projects, one-third of the DNA profiles in CODIS by 2034 will have been taken by DHS, and seemingly without any real due process—the protections that are supposed to be in place before law enforcement compels a person to hand over their most sensitive information.
A few weeks ago Walled Culture explored how the leaders in the generative AI world are trying to influence the future legal norms for this field. In the face of a powerful new form of an old technology – AI itself has been around for over 50 years – those are certainly needed. Governments around the world know this too: they are grappling with the new issues that large language models (LLMs), generative AI, and chatbots are raising every day, not least in the realm of copyright. For example, one EU body, EUIPO, has published a 436-page study “The Development Of Generative Artificial Intelligence From A Copyright Perspective”. Similarly, the US Copyright Office has produced a three-part report that “analyzes copyright law and policy issues raised by artificial intelligence”. The first two parts were on Digital Replicas and Copyrightability. The last part, just released in a pre-publication form, is on Generative AI Training. It is one of the best introductions to that field, and not too long – only 113 pages.
Alongside these government moves to understand this area, there are of course efforts by the copyright industry itself to shape the legal landscape of generative AI. Back in March, Walled Culture wrote about a UK campaign called “Make It Fair”, and now there is a similar attempt to reduce everything to a slogan by a European coalition of “authors, performers, publishers, producers, and cultural enterprises”. The new campaign is called “Stay True to the Act” – the Act in question being the EU Artificial Intelligence Act. The main document explaining the latest catchphrase comes from the European Publishers Council, and provides numerous insights into the industry’s thinking here. It comes as no surprise to read the following:
Let’s be clear: our content—paid for through huge editorial investments—is being ingested by AI systems without our consent and without compensation. This is not innovation; it is copyright theft.
As Walled Culture explained in March, that’s not true: material is not stolen, it is simply analysed as part of the AI training. Analysing texts or images is about knowledge acquisition, not copyright infringement.
In the Stay True to the Act document, this tired old trope of “copyright theft” leads naturally to another obsession of the copyright world: a demand for what it calls “fair licences”. Walled Culture the book (free digital versions available) noted that this is something that the industry has constantly pushed for. Back in 2013, a series of ‘Licences for Europe’ stakeholder dialogues were held, for example. They were based on the assumption that modernising copyright meant bringing in licensing for everything that occurred online. If a call for yet more licensing is old hat, the campaign’s next point is a novel one:
AI systems don’t just scrape our articles—they also capture our website layouts, our user activity, and data that is critical to our advertising models.
It’s hard to understand what the problem is here, other than the general concern about bots visiting and scraping sites – something that is indeed getting out of hand in terms of volume and impact on servers. It’s not as if generative AI cares about Web site design, and it’s hard to see what data about advertising models can be gleaned. It’s also worth noting that this is the only point where members of the general public are mentioned in the entire document, albeit only as “users”. When it comes to copyright, publishers don’t care about the rights or the opinions of ordinary citizens. Publishers do care about journalists, at least to the following extent:
AI-generated content floods the market with synthetic articles built from our journalism. Search engines like Google’s and chatbots like ChatGPT, increasingly serve AI summaries which is wiping out the traffic we rely on, especially from dominant players.
The statement that publishers “rely on” traffic from search engines is an unexpected admission. The industry’s main argument for the “link tax” that is now part of the EU Copyright Directive was that search engines were giving nothing significant back when their search results linked to the original article, and should therefore pay something. Now publishers are admitting that the traffic from search engines is something they “rely on”. Alongside that significant U-turn on the part of the publishers, there is a serious general point about journalism in the age of AI:
These [generative AI] tools don’t create journalism. They don’t do fact-checking, hold power to account, or verify sources. They operate with no editorial standards, no legal liability—and no investment in the public interest. And yet, without urgent action, there is a danger they will replace us in the digital experience.
This is an extremely important issue, and the publishers are right to flag it up. But demanding yet more licensing agreements with AI companies is not the answer. Even if the additional monies were all spent on bolstering reporting – a big “if” – the sums involved would be too small to matter. Licensing does not address the root problem, which is that important kinds of journalism need to be supported and promoted in new ways.
One solution is that adopted by the Guardian newspaper, which is funded by its readers who want to read and sustain high-quality journalism. This could be part of a wider move to the “true fans” idea discussed in Walled Culture the book. Another approach is for more government support – at arm’s length – for journalism of the kind produced by the BBC, say, where high editorial standards ensure that fact-checking and source verification are routinely carried out – and budgeted for.
Complementing such direct support for journalism, new laws are needed to disincentivise the creation of misleading fake news stories and outright lies that increasingly drown out the truth. The Stay True to the Act document suggests “platform liability for AI-generated content”, and that could be part of the answer; but the end users who produce such material should also face consequences for their actions.
In its concluding section, “3-Pillar Model for the Future – and Why Licensing is Essential”, the document bemoans the fact that advertising revenue is “declining in a distorted market dominated by Google and Meta”. That is true, but only because publishers have lazily acquiesced in an adtech model based on real-time bidding for online ads powered by the constant surveillance of visitors to Web sites. A better approach is to use contextual advertising, where ads are shown according to the material being viewed. This not only requires no intrusive monitoring of the personal data of visitors, but has been found to be more effective than the current approach.
Moreover, in a nice irony, the new generation of LLMs makes providing contextual advertising extremely easy, since they can analyse and categorise online material rapidly for the purpose of choosing suitable ads to display. Sadly, publishers’ visceral hatred of the new AI technologies means that they are unable to see these kinds of opportunities alongside the threats.
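To illustrate how little machinery contextual targeting needs, here is a toy sketch: the ad is chosen from the page text alone, with no visitor data involved. A trivial keyword matcher stands in for the LLM categorisation step, and the categories, function names and ad inventory are all made up for the example.

```python
# Minimal contextual ad selection: choose an ad from what the page is about,
# not from who is reading it. classify_page() is a crude stand-in for an LLM call.

AD_INVENTORY = {
    "travel":  "Ad: discounted rail passes for Europe",
    "cooking": "Ad: cast-iron cookware sale",
    "finance": "Ad: low-fee index funds",
    "other":   "Ad: generic house ad",
}

KEYWORDS = {
    "travel":  ["flight", "hotel", "itinerary", "destination"],
    "cooking": ["recipe", "oven", "ingredient", "simmer"],
    "finance": ["market", "shares", "interest rate", "portfolio"],
}

def classify_page(text: str) -> str:
    """Stand-in for an LLM call that returns a single content category."""
    lowered = text.lower()
    scores = {cat: sum(word in lowered for word in words) for cat, words in KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "other"

def choose_ad(article_text: str) -> str:
    """Pick an ad based only on the page content, with no per-visitor tracking."""
    return AD_INVENTORY[classify_page(article_text)]

print(choose_ad("A weekend itinerary: cheap flights and a boutique hotel in Lisbon."))
```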
23andMe Holding Co. (“23andMe” or the “Company”) (OTC: MEHCQ), a leading human genetics and biotechnology company, today announced that it has entered into a definitive agreement for the sale of 23andMe to Regeneron Pharmaceuticals, Inc. (“Regeneron”) (NASDAQ: REGN), a leading U.S.-based, NASDAQ-listed biotechnology company that invents, develops and commercializes life-transforming medicines for people with serious diseases. The agreement includes Regeneron’s commitment to comply with the Company’s privacy policies and applicable law, process all customer personal data in accordance with the consents, privacy policies and statements, terms of service, and notices currently in effect and have security controls in place designed to protect such data.
[…]
Under the terms of the agreement, Regeneron will acquire substantially all of the assets of the Company, including the Personal Genome Service (PGS), Total Health and Research Services business lines, for a purchase price of $256 million. The agreement does not include the purchase of the Company’s Lemonaid Health subsidiary, which the Company plans to wind down in an orderly manner, subject to and in accordance with the agreement.
Boeing and the Department of Justice have reached an “agreement in principle” that will keep the airplane manufacturer from facing criminal charges for allegedly misleading regulators about safety features on its 737 Max plane before two separate crashes that killed 346 people. The tentative deal, according to a court filing, will see Boeing pay out $1.1 billion in penalties and safety investments, as well as set aside an additional $444 million for the families of victims involved in the crashes.
Boeing’s payments will include $487.2 million paid as a criminal monetary penalty and $455 million to “strengthen the Company’s compliance, safety, and quality programs.” The company will also promise to “improve the effectiveness of its anti-fraud compliance and ethics program” to hopefully avoid the whole allegedly lying to the government thing. The DOJ is also requiring Boeing’s Board of Directors to meet with the families of victims to “hear directly from them about the impact of the Company’s conduct, as well as the Company’s compliance, safety, and quality programs.”
While the settlement will result in more money being made available to the surviving families of the victims, the resolution is not what some of the relatives were looking for. Paul Cassell, an attorney for some of the families, issued a statement earlier this week when word of the agreement started circulating: “Although the DOJ proposed a fine and financial restitution to the victims’ families, the families that I represent contend that it is more important for Boeing to be held accountable to the flying public.”
The families have objected to the potential of a plea deal for some time. When the DOJ first worked toward finalizing an agreement last year, Cassell said Boeing was getting “sweetheart” treatment. Mark Lindquist, another lawyer who represents victim families, said at the time that the deal “fails to acknowledge that the charged crime of Conspiracy to Defraud caused the death of 346 people. This is a sore spot for victim families who want accountability and acknowledgment.”
[…]
The case against Boeing stemmed from the company’s alleged attempts to conceal potential safety concerns with its 737 Max aircraft during the Federal Aviation Administration’s certification process. The company is accused of failing to disclose that its software system could turn the plane’s nose down without pilot input based on sensor data. Faulty readings from that sensor caused two separate flights to go nose down, and pilots were unable to override it and gain control, ultimately resulting in the planes crashing.
New Orleans’ police force secretly used constant facial recognition to seek out suspects for two years. An investigation by The Washington Post discovered that the city’s police department was using facial recognition technology on a privately owned camera network to continually look for suspects. This application seems to violate a city ordinance passed in 2022 that required facial recognition only be used by the NOLA police to search for specific suspects of violent crimes and then to provide details about the scans’ use to the city council. However, WaPo found that officers did not reveal their reliance on the technology in the paperwork for several arrests where facial recognition was used, and none of those cases were included in mandatory city council reports.
“This is the facial recognition technology nightmare scenario that we have been worried about,” said Nathan Freed Wessler, an ACLU deputy director. “This is the government giving itself the power to track anyone — for that matter, everyone — as we go about our lives walking around in public.” Wessler added that this is the first known case in a major US city where police used AI-powered automated facial recognition to identify people in live camera feeds for the purpose of making immediate arrests.
Police use and misuse of surveillance technology has been thoroughly documented over the years. Although several US cities and states have placed restrictions on how law enforcement can use facial recognition, those limits won’t do anything to protect privacy if they’re routinely ignored by officers.
Read the full story on the New Orleans PD’s surveillance program at The Washington Post.
If there’s one thing the Federal Bureau of Investigation does well, it’s mass surveillance. Several years ago, then attorney general William Barr established an internal office to curb the FBI’s abuse of one controversial surveillance law. But recently, the FBI’s long-time hater (and, ironically, current director) Kash Patel shut down the watchdog group with no explanation.
On Tuesday, the New York Times reported that Patel suddenly closed the Office of Internal Auditing that Barr created in 2020. The office’s leader, Cindy Hall, abruptly retired. People familiar with the matter told the outlet that the closure of that watchdog office, alongside the Office of Integrity and Compliance, is part of an internal reorganization. Sources also reportedly said that Hall was trying to expand the office’s work, but her attempts to onboard new employees were stopped by the Trump administration’s hiring freezes.
The Office of Internal Auditing was a response to controversy surrounding the FBI’s use of Section 702 of the Foreign Intelligence Surveillance Act. The 2008 law primarily addresses surveillance of non-Americans abroad. However, Jeramie Scott, senior counselor at the Electronic Privacy Information Center, told Gizmodo via email that the FBI “has repeatedly abused its ability to search Americans’ communications ‘incidentally’ collected under Section 702” to conduct warrantless spying.
Patel has not released any official comment regarding his decision to close the office. But Elizabeth Goitein, senior director at the Brennan Center for Justice, told Gizmodo via email, “It is hard to square this move with Mr. Patel’s own stated concerns about the FBI’s use of Section 702.”
Last year, Congress reauthorized Section 702 despite mounting concerns over its misuses. Although Congress introduced some reforms, the updated legislation actually expanded the government’s surveillance capabilities. At the time, Patel slammed the law’s passage, stating that former FBI director Christopher Wray, who Patel once tried to sue, “was caught last year illegally using 702 collection methods against Americans 274,000 times.” (Per the New York Times, Patel is likely referencing a declassified 2023 opinion by the FISA court that used the Office of Internal Auditing’s findings to determine the FBI made 278,000 bad queries over several years.)
According to Goitein, the office has “played a key role in exposing FBI abuses of Section 702, including warrantless searches for the communication of members of Congress, judges, and protesters.” And ironically, Patel inadvertently drove its creation after attacking the FBI’s FISA applications to wiretap a former Trump campaign advisor in 2018 while investigating potential Russian election interference. Trump and his supporters used Patel’s attacks to push their own narrative dismissing any concerns. Last year, former representative Devin Nunes, who is now CEO of Truth Social, said Patel was “instrumental” to uncovering the “hoax and finding evidence of government malfeasance.”
Although Patel mostly peddled conspiracies, the Justice Department conducted a probe into the FBI’s investigation that raised concerns over “basic and fundamental errors” it committed. In response, Barr created the Office of Internal Auditing, stating, “What happened to the Trump presidential campaign and his subsequent Administration after the President was duly elected by the American people must never happen again.”
But since taking office, Patel has changed his tune about FISA. During his confirmation hearing, Patel referred to Section 702 as a “critical tool” and said, “I’m proud of the reforms that have been implemented and I’m proud to work with Congress moving forward to implement more.” However, reforms don’t mean much by themselves. As Goitein noted, “Without a separate office dedicated to surveillance compliance, [the FBI’s] abuses could go unreported and unchecked.”
The Russian government has introduced a new law that makes installing a tracking app mandatory for all foreign nationals in the Moscow region.
The new proposal was announced by the chairman of the State Duma, Vyacheslav Volodin, who presented it as a measure to tackle migrant crimes.
“The adopted mechanism will make it possible, using modern technologies, to strengthen control in the field of migration and will also contribute to reducing the number of violations and crimes in this area,” stated Volodin.
Using a mobile application that all foreigners will have to install on their smartphones, the Russian state will receive the following information:
Residence location
Fingerprint
Face photograph
Real-time geo-location monitoring
“If migrants change their actual place of residence, they will be required to inform the Ministry of Internal Affairs (MVD) within three working days,” the high-ranking politician explained.
The measures will not apply to diplomats of foreign countries or citizens of Belarus.
Foreigners attempting to evade their obligations under the new law will be added to a registry of monitored individuals and deported from Russia.
Russian internet freedom observatory Roskomsvoboda’s reactions to this proposal reflect skepticism and concern.
Lawyer Anna Minushkina noted that the proposal violates Articles 23 and 24 of the Russian Constitution, guaranteeing the right to privacy.
President of the Uzbek Community in Moscow, Viktor Teplyankov, characterized the initiative as “ill-conceived and difficult to implement,” expressing doubts about its feasibility.
Finally, PSP Foundation’s Andrey Yakimov warned that such aggressive measures are bound to deter potential labor migrants, creating a different problem in the country.
The proposal hasn’t reached its final form yet, and specifics like what happens in the case of device theft/loss or similar technical or practical obstacles are to be addressed in the upcoming period during meetings between the Ministry and regional authorities.
The mass-surveillance experiment will run until September 2029, and if deemed successful, the mechanism will extend to cover more parts of the country.
According to a ruling by the Berlin Regional Court, Google must disclose to its users which of its more than 70 services process their data when they register for an account. The civil chamber thus upheld a lawsuit filed by the German Association of Consumer Organizations (vzbv). The consumer advocates had complained that neither the “express personalization” nor the alternative “manual personalization” complied with the legal requirements of the European General Data Protection Regulation (GDPR).
The ruling against Google Ireland Ltd. was handed down on March 25, 2025, but was only published on Friday (case number 15 O 472/22). The decision is not yet legally binding because the internet company has appealed the ruling. Google stated that it disagrees with the Regional Court’s decision.
What does Google process data for?
The consumer advocates argued that consumers must know what Google processes their data for when registering. Users must be able to freely decide how their data is processed. The judges at the Berlin Regional Court confirmed this legal opinion. The ruling states: “In this case, transparency is lacking simply because the defendant does not provide information about the individual Google services, Google apps, Google websites, or Google partners for which the data is to be used.” For this reason, the scope of consent is completely unknown to the user.
Google: Account creation has changed
Google stated that the ruling concerned an old account creation process that had since been changed. “What hasn’t changed is our commitment to enabling our users to use Google on their terms, with clear choices and control options based on extensive research, testing, and guidelines from European data protection authorities,” it stated. In the proceedings, Google argued that listing all services would result in excessively long text and harm transparency. This argument was rejected by the court. In the court’s view, information about the scope of consent is among the minimum details required by law. The regional court was particularly concerned that with “Express Personalization,” users only had the option of consenting to all data usage or canceling the process. A differentiated refusal was not possible. Even with “Manual Personalization,” consumers could not refuse the use of the German location.
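To make the court’s objection concrete, here is a rough sketch of the difference between an all-or-nothing consent flow and one that allows a differentiated refusal, with each purpose getting its own yes or no. The purpose names and structure are purely illustrative assumptions, not Google’s actual consent model.

```python
# Purely illustrative: "all-or-nothing" consent versus the differentiated
# refusal the court found missing. Purpose names below are assumptions.

# All-or-nothing: the user either accepts every use of their data or cancels signup.
consent_all_or_nothing = {"all_personalization": True}

# Differentiated: each purpose can be accepted or refused individually, so the
# scope of consent is visible and a partial refusal is actually possible.
consent_differentiated = {
    "search_history_for_ads": False,
    "youtube_history_for_recommendations": True,
    "location_data_stored_with_account": False,
    "data_shared_with_partners": False,
}
```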
The Commission Recommendation of 4 May 2023 on combating online piracy of sports and other live events encourages Member States and relevant stakeholders to take effective, appropriate and proportionate measures to combat unauthorised retransmissions of such events.
An amendment to the data bill requiring AI companies to reveal which copyrighted material is used in their models was backed by peers, despite government opposition.
It is the second time parliament’s upper house has demanded tech companies make clear whether they have used copyright-protected content.
The vote came days after hundreds of artists and organisations including Paul McCartney, Jeanette Winterson, Dua Lipa and the Royal Shakespeare Company urged the prime minister not to “give our work away at the behest of a handful of powerful overseas tech companies”.
The bill will now return to the House of Commons. If the government removes the Kidron amendment, it will set the scene for another confrontation in the Lords next week.
Lady Kidron said: “I want to reject the notion that those of us who are against government plans are against technology. Creators do not deny the creative and economic value of AI, but we do deny the assertion that we should have to build AI for free with our work, and then rent it back from those who stole it.
“My lords, it is an assault on the British economy and it is happening at scale to a sector worth £120bn to the UK, an industry that is central to the industrial strategy and of enormous cultural import.”
The government’s copyright proposals are the subject of a consultation due to report back this year, but opponents of the plans have used the data bill as a vehicle for registering their disapproval.
The main government proposal is to let AI firms use copyright-protected work to build their models without permission, unless the copyright holders signal they do not want their work to be used in that process – a solution that critics say is impractical and unworkable.
The problem is that the actual creators never see much of the money from copyright income – that all goes to the giant copyright-holding behemoths, who keep it for themselves.
And considering the way that AI systems are trained, they do not keep a copy of the work ingested, just as a human doesn’t keep a copy. So to say that a system can only ingest a work if permission is given is like saying a specific person can only read it if permission is given.
So anything that is freely available is fair game. If an AI wants to read a book, it should buy that book. Once.
Moderna’s mRNA-based flu and covid-19 vaccine could provide the best of both worlds—if it’s actually ever approved by the Food and Drug Administration.
This week, scientists at Moderna published data from a Phase III trial testing the company’s combination vaccine, codenamed mRNA-1083. Individuals given mRNA-1083 appeared to generate the same or an even greater immune response compared to those given separate vaccines, the researchers found. But the FDA’s recent policy change on vaccine approvals, orchestrated by Health Secretary Robert F. Kennedy Jr., could imperil the development of this and other future vaccines.
The trial involved 8,000 people split into two age groups: those between the ages of 50 and 64, and those over 65. People were randomly given mRNA-1083 (plus a placebo) or two already approved flu and covid-19 vaccines.
The vaccine seemed effective across both age groups, with mRNA-1083 participants showing at least the same level of humoral immune response (antibody-based) to circulating flu and covid-19 strains as participants who were given the separate vaccines. On average, this response was actually higher to the flu strains in particular among those given mRNA-1083. The experimental vaccine also appeared to be safe and well-tolerated, as the authors explained in their paper, published Wednesday in JAMA.
The study results are certainly encouraging, and typically they would pave the way toward a surefire FDA approval. But the political situation has changed for the worse. The Department of Health and Human Services recently mandated an overhaul of the vaccine approval process, one that will require all new vaccines to undergo placebo-controlled trials to receive approval.
While many experimental vaccines today are placebo-tested (including the original covid-19 vaccines), it’s unclear whether this order will also apply to vaccines that can be compared to existing vaccines, like the combination mRNA-1083 vaccine, or to vaccines that have to be regularly updated to match fast-evolving viruses like the flu and covid-19.
Some vaccine experts have said that these changes are unnecessary and potentially unethical, since they could leave some people vulnerable to an infection that already has a vaccine. The new rule also might delay the availability of upcoming seasonal vaccines, particularly the current covid-19 shots.
A potentially important wrinkle for the mRNA-1083 vaccine is that no mRNA-based vaccine for the flu is currently approved. That reality could very well be all that the FDA needs to demand further placebo-controlled trials. RFK Jr. and other recent Trump appointees have also been highly skeptical of mRNA-based vaccines in general, despite no strong evidence that these vaccines are significantly less safe than other types. Kennedy, who has a long history of supporting the anti-vaccination movement, has even wrongly declared that the mRNA covid-19 vaccine was the “deadliest vaccine ever made.”
Moderna stated last week it doesn’t expect its mRNA-1083 vaccine to be approved before 2026, following the FDA’s request for late-stage data showing the vaccine’s effectiveness against flu specifically. But it’s worth wondering if even that timeline is now in jeopardy under the current public health regime.
[…] “For years, Google secretly tracked people’s movements, private searches, and even their voiceprints and facial geometry through their products and services. I fought back and won.”
The agreement settles several claims Texas made against the search giant in 2022 related to geolocation, incognito searches and biometric data. The state argued Google was “unlawfully tracking and collecting users’ private data.”
Paxton claimed, for example, that Google collected millions of biometric identifiers, including voiceprints and records of face geometry, through such products and services as Google Photos and Google Assistant.
Google spokesperson José Castañeda said the agreement settles an array of “old claims,” some of which relate to product policies the company has already changed.
[…]
Texas previously reached two other key settlements with Google within the last two years, including one in December 2023 in which the company agreed to pay $700 million and make several other concessions to settle allegations that it had been stifling competition against its Android app store.
Meta has also agreed to a $1.4 billion settlement with Texas in a privacy lawsuit over allegations that the tech giant used users’ biometric data without their permission.
A U.S. senator introduced a bill on Friday that would direct the Commerce Department to require location verification mechanisms for export-controlled AI chips, in an effort to curb China’s access to advanced semiconductor technology.
Called the “Chip Security Act,” the bill calls for AI chips under export regulations, and products containing those chips, to be fitted with location-tracking systems to help detect diversion, smuggling or other unauthorized use of the product.
“With these enhanced security measures, we can continue to expand access to U.S. technology without compromising our national security,” Republican Senator Tom Cotton of Arkansas said.
The bill also calls for companies exporting the AI chips to report to the Bureau of Industry and Security if their products have been diverted away from their intended location or subject to tampering attempts.
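The bill doesn’t spell out how location verification would actually work; one approach researchers have floated is delay-based verification, in which the chip answers cryptographic challenges from trusted servers and the measured round-trip time puts a hard physical ceiling on how far away it can be. The sketch below only illustrates that idea under those assumptions; it is not a mechanism specified in the bill, and the constant and function here are hypothetical.

```python
# Illustrative sketch of delay-based location verification (an idea discussed by
# researchers, not a mechanism mandated by the Chip Security Act). A signal cannot
# travel faster than light in fiber, so a short challenge/response round trip rules
# out far-away locations no matter what the device itself reports.

SPEED_IN_FIBER_KM_PER_MS = 200.0  # light in optical fiber covers roughly 200 km per millisecond

def max_distance_km(round_trip_ms: float, processing_ms: float = 0.0) -> float:
    """Upper bound on the chip's distance from the verifying server.

    round_trip_ms: measured challenge/response round-trip time.
    processing_ms: time the chip is known to spend computing its response.
    """
    one_way_ms = max(round_trip_ms - processing_ms, 0.0) / 2.0
    return one_way_ms * SPEED_IN_FIBER_KM_PER_MS

# Example: a 40 ms round trip caps the distance at ~4,000 km, which would be
# physically inconsistent with a chip that is supposedly in a U.S. data center
# actually sitting on another continent.
print(max_distance_km(40.0))  # 4000.0
```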
The move comes days after U.S. President Donald Trump said he would rescind and modify a Biden-era rule that curbed the export of sophisticated AI chips with the goal of protecting U.S. leadership in AI and blocking China’s access.
U.S. Representative Bill Foster, a Democrat from Illinois, also plans to introduce a bill on similar lines in the coming weeks, Reuters reported on Monday.
Restricting China’s access to AI technology that could enhance its military capabilities has been a key focus for U.S. lawmakers, amid reports of widespread smuggling of Nvidia’s chips.
Of course, it also adds another layer of the US government spying on you if you want to buy a graphics card. I’m not sure how the ability to track everyone’s PCs doesn’t itself compromise national security.