After California, Colorado Lawmakers Now Push for Age Verification at the Operating System Level

Well, I will just repeat what I said about California (A new California law says all operating systems, including Linux, need to have some form of age verification at account setup) – NB people on this LinkedIn post were not too happy about that either:

Here we see the creeping sliding scale that is the terror of Age Verification. First of all, an OS does not need an age – it’s like saying any technology needs age verification: all gadgets use an OS, whether it is your washing machine, your smart light switch, your PSP or your PC. Second of all, an OS has no business being forced online – it is supposed to be non-cloud, personal, and non-connected unless you want it to connect. Age verification requires external suppliers, so you would need to connect to perform the check – and send who knows what other personal data. E.g. Windows sends hardware data (along with a whole load of other data) that works much like a fingerprint, making it easy to track a user’s movements online. This is one reason why people want to bypass the online account creation on Windows and use local accounts instead.

For more on the horrors of age verification, see https://www.linkielist.com/?s=age+verification&submit=Search

As more US states consider online age-verification requirements, two Colorado lawmakers want to implement the age checks at the operating-system level, after California enacted a similar law.

Colorado’s SB26-051, introduced last month, would require operating systems to register the owner’s age, which third-party apps can then leverage to determine whether the user is an adult. The bill calls for the device owner to register their birthdate or age for the purpose of creating an “age bracket,” which can then be shared with app developers through an API so they can learn the user’s age range, according to BiometricUpdate.com.

The bill comes from state Sen. Matt Ball and Rep. Amy Paschal, both Democrats. “The intent is to create thoughtful safeguards for kids online through a privacy-forward framework for age assurance,” Ball told PCMag. “Unlike some laws in other states, SB 51 doesn’t require users to share personally identifiable information or use facial recognition technology.”

Ball also said the legislation was based on California’s bill AB 1043, which was passed last year. It too requires OS makers to create a way for the device owner to register their age bracket, which can then be shared with app developers over an API. The California law takes effect January 1, 2027.

Ball added: “SB 26-51 is very closely modeled on it. One of the reasons for bringing SB 51 was that the tech and software industry is already complying with AB 1043, so there’s minimal added burden.”

Note here: they are not. Several Linux distributions have in fact already changed their terms of use to prohibit use of the software in California.

The legislation also promises to centralize the age check in the OS, rather than mandating that each app enforce its own age-verification mechanism, which can involve scanning the user’s official ID and thus raises privacy and security concerns. The bill also forbids sharing the age-bracket data for any other purpose.

But it looks like it’s easy to bypass the age check proposed by SB26-051. The legislation itself doesn’t mention any state ID check to verify the owner’s age. In addition, the bill doesn’t seem to cover websites, only apps and app stores. 

Source: Colorado Lawmakers Push for Age Verification at the Operating System Level | PCMag

How Copyright Litigation Over Anne Frank’s Diary Could Impact The Fate Of VPNs In The EU | Techdirt

Link

“The Diary of a Young Girl” is a Dutch language diary written by the young Jewish writer Anne Frank while she was in hiding for two years with her family during the Nazi occupation of the Netherlands. Although the diary and Anne Frank’s death in the Bergen-Belsen concentration camp are well known, few are aware that the text has a complicated copyright history – one that could have important implications for the legal status and use of Virtual Private Networks (VPNs) in the EU. TorrentFreak explains the copyright background:

These copyrights are controlled by the Swiss-based Anne Frank Fonds, which was the sole heir of Anne’s father, Otto Frank. The Fonds states that many print versions of the diary remain protected for decades, and even the manuscripts are not freely available everywhere.

In the Netherlands, for example, certain sections of the manuscripts remain protected by copyright until 2037, even though they have entered the public domain in neighboring countries like Belgium.

A separate foundation, the Netherlands-based Anne Frank Stichting, wanted to publish a scholarly edition of Anne Frank’s writing, at least in those parts of the world where her diary was in the public domain:

To navigate these conflicting laws, the Dutch Anne Frank Stichting published a scholarly edition online using “state-of-the-art” geo-blocking to prevent Dutch residents from accessing the site. Visitors from the Netherlands and other countries where the work is protected are met with a clear message, informing them about these access restrictions.
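The mechanism at the heart of the dispute is simple: the server maps the visitor’s apparent IP address to a country and refuses access in countries where the work is still under copyright. A minimal, purely illustrative sketch (this is not the Stichting’s actual implementation; the names are invented) shows why VPNs matter here: the decision hinges entirely on the apparent country, which is exactly what a VPN changes.

```python
# Purely illustrative sketch of territorial geo-blocking (invented names,
# not the Anne Frank Stichting's actual implementation). The visitor's IP
# is resolved to a country code by some GeoIP lookup upstream; the block
# decision then depends only on that apparent country.
PROTECTED_COUNTRIES = {"NL"}  # e.g. parts of the manuscripts stay copyrighted in NL until 2037

def access_decision(country_code: str) -> str:
    """Allow or refuse access based on the visitor's apparent country."""
    if country_code.upper() in PROTECTED_COUNTRIES:
        return "blocked: this work is still protected by copyright in your country"
    return "allow"

# A Dutch visitor is blocked; the same visitor exiting through a Belgian
# VPN endpoint presents a BE address and is allowed. The server cannot
# tell the difference.
print(access_decision("NL"))
print(access_decision("BE"))  # allow
```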

However, the Anne Frank Fonds was unhappy with this approach, and took legal action. Its argument was that such geo-blocking could be circumvented with VPNs, and so its copyrights in the Netherlands could be infringed upon by those using VPNs. The lower courts in the Netherlands dismissed this argument, and the case is now before the Dutch Supreme Court. Beyond the specifics of the Anne Frank scholarly edition, there are important issues regarding the use of VPNs to get around geo-blocking. Because of the potential knock-on effect the ruling in this case will have on EU law, the Dutch Supreme Court has asked for guidance from the EU’s top court, the Court of Justice of the European Union (CJEU).

The CJEU has yet to rule on the issues raised. But one of the court’s advisors, Advocate General Rantos, has published a preliminary opinion, as is normal in such cases. Although that advice is not binding on the CJEU, it often provides some indication as to how the court may eventually decide. On the main issue of whether the ability of people to circumvent geo-blocking is a problem, Rantos writes:

the fact that users manage to circumvent a geo-blocking measure put in place to restrict access to a protected work does not, in itself, mean that the entity that put the geo-blocking in place communicates that work to the public in a territory where access to it is supposed to be blocked. Such an interpretation would make it impossible to manage copyright on the internet on a territorial basis and would mean that any communication to the public on the internet would be global.

Moreover:

As the [European] Commission pointed out in its written observations, the holder of an exclusive right in a work does not have the right to authorise or prohibit, on the basis of the right granted to it in one Member State, communication to the public in another Member State in which that right has ceased to have effect.

Or, more succinctly: “service providers in the public domain country cannot be subject to unreasonable requirements”. That’s a good, common-sense view. But perhaps just as important is the following comment by Rantos regarding the use of VPNs to circumvent geo-blocking:

as the Commission points out in its observations, VPN services are legally accessible technical services which users may, however, use for unlawful purposes. The mere fact that those or similar services may be used for such purposes is not sufficient to establish that the service providers themselves communicate the protected work to the public. It would be different if those service providers actively encouraged the unlawful use of their services.

That’s an important point at a time when VPNs are under attack from some governments because of concerns about possible copyright infringement by those using them.

The hope has to be that the CJEU will agree with its Advocate General’s sensible and fair analysis, and will rule accordingly. But there is another important aspect to this story. The basic issue is that the Anne Frank Stichting wants to make its scholarly edition of Anne Frank’s diary available as widely as possible. That seems a laudable aim, since it will increase understanding and appreciation of the young woman’s remarkable diary by publishing an academically rigorous version. And yet the Anne Frank Fonds has taken legal action to stop that move, on the grounds that it would represent an infringement of its intellectual monopoly in some parts of Frank’s work, in some parts of the world. The current dispute is another clear example of how copyright has become for some an end in itself, more important than the things that it is supposed to promote.

Source: How Copyright Litigation Over Anne Frank’s Diary Could Impact The Fate Of VPNs In The EU | Techdirt

A new California law says all operating systems, including Linux, need to have some form of age verification at account setup

Quote

Here we see the creeping sliding scale that is the terror of Age Verification. First of all, an OS does not need an age – it’s like saying any technology needs age verification: all gadgets use an OS, whether it is your washing machine, your smart light switch, your PSP or your PC. Second of all, an OS has no business being forced online – it is supposed to be non-cloud, personal, and non-connected unless you want it to connect. Age verification requires external suppliers, so you would need to connect to perform the check – and send who knows what other personal data. E.g. Windows sends hardware data (along with a whole load of other data) that works much like a fingerprint, making it easy to track a user’s movements online. This is one reason why people want to bypass the online account creation on Windows and use local accounts instead.

For more on the horrors of age verification, see https://www.linkielist.com/?s=age+verification&submit=Search

The government of California is implementing a law that requires operating system providers to implement some form of age verification into their account setup procedures.

Assembly Bill No. 1043 was approved by California governor Gavin Newsom in October of last year, and becomes active on January 1, 2027 (via The Lunduke Journal). The bill states, among other factors, that “An operating system provider shall do all of the following:”

“(1) Provide an accessible interface at account setup that requires an account holder to indicate the birth date, age, or both, of the user of that device for the purpose of providing a signal regarding the user’s age bracket to applications available in a covered application store.

“(2) Provide a developer who has requested a signal with respect to a particular user with a digital signal via a reasonably consistent real-time application programming interface that identifies, at a minimum, which of the following categories pertains to the user.”

The categories are broken into four sections: users under 13 years of age, over 13 years of age under 16, at least 16 years of age and under 18, and “at least 18 years of age.”
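The bracket scheme is simple enough to sketch. Below is a hypothetical illustration (the function name and API shape are invented, not taken from the bill text): the OS records a birthdate once at account setup and exposes only the coarse bracket to requesting apps, never the birthdate itself.

```python
from datetime import date

# Hypothetical sketch of the age-bracket "signal" AB 1043 describes: the OS
# stores a birthdate at account setup and apps receive only one of four
# coarse brackets via an API, never the raw birthdate.
def age_bracket(birthdate: date, today: date) -> str:
    """Map a birthdate to one of the four statutory age brackets."""
    # Compute completed years, accounting for whether the birthday
    # has occurred yet this year.
    age = today.year - birthdate.year - (
        (today.month, today.day) < (birthdate.month, birthdate.day)
    )
    if age < 13:
        return "under_13"
    if age < 16:
        return "13_to_15"
    if age < 18:
        return "16_to_17"
    return "18_plus"

print(age_bracket(date(2010, 6, 1), date(2026, 2, 1)))  # a 15-year-old user
```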

In essence, while the bill doesn’t seem to require the most egregious forms of age verification (face scans or similar), it does require OS providers to collect age verification of some form at the account/user creation stage—and to be able to pass a segmented version of that information to outside developers upon request.

That’s likely no big deal for Windows, which already requires you to enter your date of birth during the Microsoft Account setup procedure. However, the idea that all operating system providers need to comply (in California) has drawn a fair degree of ire from certain Linux communities.


“This is basically impossible for California to enforce” says CatoDomine on the Linuxmint subreddit. “Even if Linux Mint decides to add some kind of age verification, to comply with CA law, there’s no reason anyone would choose that version.”


“It’s more likely they will put a disclaimer on their website: ‘not for use in California.’”

Looking at the wider picture, however, mandatory age verification appears to be a growing trend. The UK government’s current implementation under the Online Safety Act has come under heavy fire for privacy concerns, while platforms like Discord have received similar critique for their face-scanning age verification efforts, not least because of associations with companies that may not be using the collected data for mere age-confirmation purposes.

And while this implementation is California-specific, it does speak to a wider desire from governments to enforce age verification on a legal level—even if in this case, it seems virtually impossible to effectively enact.

Source: A new California law says all operating systems, including Linux, need to have some form of age verification at account setup | PC Gamer

Nearby Glasses Warns You When a Glasshole is Nearby

The app, called Nearby Glasses, has one sole purpose: look for smart glasses nearby and warn you.

Get It On Google Play

This app notifies you when smart glasses are nearby. It uses company identifiers in the Bluetooth advertising data these devices send out. Because of that, there will likely be false positives (e.g. from VR headsets). Hence, please proceed with caution when approaching a nearby person wearing glasses. They might just be regular glasses, despite this app’s warning.

The app’s author Yves Jeanrenaud takes no liability whatsoever for this app or its functionality. Use at your own risk. By technical design, detecting Bluetooth LE devices might sometimes just not work as expected. I am not a trained developer; this was all written in my free time with self-taught knowledge.
False positives are likely. This means the app Nearby Glasses may notify you of smart glasses nearby when it is in fact a VR headset from the same manufacturer, or another product of that company’s. It may also miss smart glasses nearby. Again: I am no pro developer.
However, this app is free and its source is available (though it’s not considered FOSS due to the non-commercial restriction); you may review the code, change it and re-use it (under the license).
The app Nearby Glasses does not store any details about you or collect any information about you or your phone. There is no telemetry, no ads, and no other nuisance. If you install the app via the Play Store, Google may know something about you and collect some stats. But the app itself does not.
If you choose to store (export) the logfile, that is completely up to you, and where this data goes is your responsibility. The logs are recorded only locally and are not automatically shared with anyone. They contain little sensitive data; in fact, only the manufacturer ID codes of BLE devices encountered.

Use with extreme caution! As stated before: there is no guarantee that detected smart glasses are really nearby. It might be another device that merely looks similar, technically (at the BLE advertising level), to smart glasses.
Please do not act rashly. Think before you act upon any messages (not only from this app).

Why?

  • Because I consider smart glasses an intolerable, consent-neglecting, horrible piece of tech that is already being used to make tons of equally truly disgusting ‘content’. 1, 2
  • Some smart glasses feature a small LED signifying that a recording is going on. But this is easily disabled, whilst manufacturers claim to prevent that and take no responsibility at all (as tech has tended to do for decades now). 3
  • Smart glasses have been used for instant facial recognition before 4 and reportedly will support it out of the box 5. This puts a lot of people in danger.
  • I hope this app is useful for someone.

How?

  • It’s a simple, rather heuristic approach. Because BLE uses randomised MAC addresses, and neither the advertised identifiers nor the UUIDs of the service announcements are stable, you can’t just scan for known Bluetooth beacons. And to make things even more dire, some manufacturers, Meta for instance, use proprietary Bluetooth services with non-persistent UUIDs, which would leave only the communicated device names to rely on.
  • The currently most viable approach comes from the Bluetooth SIG assigned numbers repo. Following this, the manufacturer shows up as a numeric company ID in the advertising (ADV) packets of BLE beacons.
  • This is what a BLE advertising frame looks like:
Frame 1: Advertising (ADV_IND)
Time:  0.591232 s
Address: C4:7C:8D:1E:2B:3F (Random Static)
RSSI: -58 dBm

Flags:
  02 01 06
    Flags: LE General Discoverable Mode, BR/EDR Not Supported

Manufacturer Specific Data:
  Length: 0x1A
  Type:   Manufacturer Specific Data (0xFF)
  Company ID: 0x058E (Meta Platforms Technologies, LLC)
  Data: 4D 45 54 41 5F 52 42 5F 47 4C 41 53 53

Service UUIDs:
  Complete List of 16-bit Service UUIDs
  0xFEAA
  • According to the Bluetooth SIG assigned numbers repo, we may use these company IDs:
    • 0x01AB for Meta Platforms, Inc. (formerly Facebook)
    • 0x058E for Meta Platforms Technologies, LLC
    • 0x0D53 for Luxottica Group S.p.A (which manufactures the Meta Ray-Bans)
    • 0x03C2 for Snapchat, Inc., which makes Snap Spectacles
  These company IDs are immutable and mandatory. Of course, Meta and other manufacturers also have other products that come with Bluetooth and therefore carry their ID, e.g. VR headsets. Using these company ID codes for the app’s scanning process is therefore prone to false positives. But if you can’t see someone around you wearing an Oculus Rift, and there are no buildings where they could hide, chances are good that it’s smart glasses instead.
  • During pairing, the smart glasses usually emit their product name, so we can scan for that, too. But it’s rare we will see that in the field. People with the intention to use smart glasses in bars, pubs, on the street, and elsewhere usually prepare for that beforehand.
  • When the app recognises a Bluetooth Low Energy (BLE) device with a sufficient signal strength (see RSSI below), it will push an alert message. This shall help you act accordingly.
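The watchlist approach above can be sketched in a few lines of Python (a simplified illustration, not the app’s actual code, which is an Android app): walk the AD structures in a raw advertising payload, extract the 16-bit little-endian company ID from any Manufacturer Specific Data field (type 0xFF), and match it against the known smart-glasses company IDs.

```python
# Simplified sketch (not the app's actual code) of the heuristic described
# above: scan a raw BLE advertising payload for Manufacturer Specific Data
# AD structures and match the embedded company IDs against the watchlist.
GLASSES_COMPANY_IDS = {
    0x01AB: "Meta Platforms, Inc.",
    0x058E: "Meta Platforms Technologies, LLC",
    0x0D53: "Luxottica Group S.p.A",
    0x03C2: "Snapchat, Inc.",
}

def company_ids(adv_payload: bytes) -> list[int]:
    """Extract company IDs from Manufacturer Specific Data AD structures."""
    ids, i = [], 0
    while i < len(adv_payload):
        length = adv_payload[i]               # AD structure: [len][type][data...]
        if length == 0 or i + 1 >= len(adv_payload):
            break
        ad_type = adv_payload[i + 1]
        if ad_type == 0xFF and length >= 3:   # Manufacturer Specific Data
            data = adv_payload[i + 2 : i + 1 + length]
            ids.append(int.from_bytes(data[:2], "little"))  # 16-bit company ID
        i += 1 + length                       # advance to the next AD structure
    return ids

def suspected_glasses(adv_payload: bytes) -> list[str]:
    """Names of watchlisted manufacturers seen in this advertisement."""
    return [GLASSES_COMPANY_IDS[cid]
            for cid in company_ids(adv_payload)
            if cid in GLASSES_COMPANY_IDS]

# Payload modeled on the frame above: a Flags AD structure, then a
# Manufacturer Specific Data structure with company ID 0x058E (Meta).
payload = bytes([0x02, 0x01, 0x06,                     # Flags
                 0x05, 0xFF, 0x8E, 0x05, 0x4D, 0x45])  # Mfr data, company 0x058E
print(suspected_glasses(payload))  # ['Meta Platforms Technologies, LLC']
```

As the README warns, matching on company ID alone cannot distinguish smart glasses from any other product the same company ships with Bluetooth, which is exactly why false positives are expected.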

[…]

Source: Github repo

Age verification checks are now in force in the UK because of the Online Safety Act, but with the Discord fallout, it seems like one bad idea after another

Currently, I can’t check my Bluesky direct messages until I’ve allowed the Epic Games-owned KWS to look at either my bank card, my ID, or my wizened visage. As I’m based in the UK, it’s not just Bluesky I’ve got to worry about either, with similar verification processes now present on Reddit, Discord, and even my partner’s Xbox.

This is all due to the Online Safety Act, which came into effect in the UK last year. For many, these age checks are an annoyance at best—but they also represent something that will have ramifications far beyond the British Isles. The UK’s Act was designed in part to ensure children in the UK could not easily access “harmful content.” This is a broad term that includes but is not limited to pornography, content that promotes “self-harm, eating disorders, or suicide,” and “bullying”.

To comply with the act and differentiate children from the adults, many platforms have opted for age-gates like the one I’m encountering on Bluesky. Almost 70% of Brits surveyed shortly after the Online Safety Act came into effect said they supported it…though 64% didn’t think it would be all that effective. Indeed, I could log into a VPN to get past the UK-based Bluesky block—though unfortunately for me, I am stubborn, lazy, and cheap (apologies if you’ve been trying to get ahold of me).

Besides all that, I’m not especially keen to hand over my personal data to a third-party age verification vendor such as KWS for data privacy reasons. As recently as October, a Discord security breach may have leaked 70,000 age-verification ID photos. Discord’s primary age-verification partner, K-ID, was keen to clarify that it was not involved.

As Jacob has previously outlined, there are better ways to implement age checks. As it stands, though, I’m not naive enough to think the data I keep elsewhere is in hands that are any safer. However, not submitting to an age assurance check makes for one less point of failure from which my likeness or even my official documents can leak out.

Discord first announced it would be using Brits as age assurance guinea pigs back in April 2025, but it turns out that may have all been prologue. Just in case you’ve been napping under a cool mossy rock for the last while, the social platform caused quite a stir this month when it announced it would be rolling out age verifying facial scans and ID checks globally this March. The case can be made that it is ‘complying in advance,’ as the UK’s approach to online safety potentially serves as a preview for PC gamers further afield.


On the one hand, yeah, I’d rather children growing up today didn’t see all the things I saw thanks to having unfettered internet access throughout the early oughts.

Why not? I survived rotten.com and goatse – but then again, the internet didn’t have much in the way of fake news, hate speech or echo chambers…

I’d also rather young’uns now didn’t have to experience all the harassment I experienced at the hands of my own peers, newly empowered by that unfettered internet access.

On the other hand, the internet answered a lot of questions I was absolutely not going to ask my parents; when I see a vague term like “harmful content” I do have to wonder what genuinely educational resources on the wider internet—say, regarding art history or personal health—might end up age-gated because someone somewhere has decided they’re tantamount to ‘pornography.’

I’m only just the other side of 30, but Section 28 was still in effect for some of my school years. For those who don’t know, Section 28 was a law that prevented schools in England, Scotland, and Wales from doing anything that could be interpreted as “intentionally [promoting] homosexuality or [publishing] material with the intention of promoting homosexuality”. So, until the law was repealed in the early 2000’s, a lot of schools simply pretended LGBTQIA+ folks didn’t exist. The internet, for all of its faults, helped to fill that deafening silence for me.

A screenshot of a 3D model being used to pass the Discord age verification system

(Image credit: PromptPirate on GitHub)

Even so, I remember there being content blocks back in my day, too, and I know I found more than a few ways around those. Indeed, if we take just Discord today, our James has found not one but two different ways to fool its face scans—though the platform may already be formulating a counter to these workarounds.

Shortly after issuing assurances that not all users will even have to undergo an age check, a since-edited support article revealed that some UK users “may be part of an experiment where your information will be processed by an age-assurance vendor, Persona.” Amid reports of folks easily fooling its primary third-party vendor’s age verification checks, Discord may have been seeking to diversify its defences.

Persona’s investors include Peter Thiel, co-founder of ICE’s premier surveillance provider, Palantir. Though Persona and Palantir are two totally separate companies that do not share either data or operations, that’s still a pretty grimy connection. Not least of all because earlier this week, the US Department of Homeland Security reportedly subpoenaed a number of major online platforms—including Discord, Reddit, Google, and Meta—in order to obtain the personal details of accountholders who had been critical of ICE or identified the locations of its agents. We don’t yet know if Discord complied, though we have reached out for comment.


There is an even worse wrinkle in the Discord-Persona ‘experiment’: while Discord had previously said that data like age verification face scans would only be stored and processed on users’ own devices, those who ended up part of the Persona experiment may have their information “temporarily stored for up to 7 days, then deleted.”

Indeed, some security researchers are already claiming to have “found a Persona frontend exposed to the open internet on a US government-authorized server.”

All of that said, Persona is not part of Discord’s long-term strategy, with the platform telling Kotaku earlier this week that its dealings with the vendor were part of a “limited test” that has since been concluded. That leaves K-ID’s on-device processing in effect, but even that doesn’t necessarily end the privacy nightmare. Data breaches usually leave platforms scrambling for user goodwill, but Discord seems all too happy to keep walking into rakes.

One could jump ship and shop around for a free Discord alternative as I recently did, but all of the platforms I tested will likely have to implement some sort of age assurance check if they haven’t already in order to continue serving users based in the UK in the future. That doesn’t mean I’ll be letting them scan my face any time soon; I may have to deploy Norman Reedus and his funky foetus before long as third-party age verification vendors have done little to earn my trust or a gander at my actual face.

Source: Age verification checks are now in force in the UK because of the Online Safety Act, but with the Discord fallout, it seems like one bad idea after another | PC Gamer

How shaming unethical brands makes companies improve their behavior

This article is riddled with huge assumptions about causality and the amplification that social media can offer, completely unhampered by any research. But the actual research that is interspersed in the article is interesting.

[…]Discovering that an ordinary purchase may be tied to exploitation or environmental damage creates a jolt of personal responsibility. In our research, we found that when environmental consequences are clearly linked to people’s own buying choices, many are willing to switch products—especially when credible alternatives exist.

But guilt is private. It nudges personal behavior. It does not automatically reshape systems. The shift happens when private discomfort becomes public voice.

Consumers are often also the first to make hidden environmental harms visible. They post evidence on social media. They question corporate claims. They compare sustainability promises with independent reporting. They organize petitions, boycotts and review campaigns. By shining a spotlight on the truth, the scrutiny shifts from shoppers to brands.

That shift matters because modern brands depend on trust. Reputation is an asset. When sustainability claims are publicly challenged, credibility is at risk. Research in organisational behaviour shows that firms respond quickly to threats to legitimacy. Reputational damage affects customer loyalty, investor confidence and regulatory attention.

[…]

When the gap between what companies say and what they do becomes visible, maintaining that gap becomes harder.

Our research explores how that visibility can be strengthened. The findings were clear. When environmental and social consequences are personalized and traceable, sustainability feels less distant. People see both their own role and the role of particular firms. That dual awareness encourages two responses: behavioral change driven by guilt and corporate accountability driven by shame.

Shame works because it is social. Brands care about how they are seen. When the negative environmental and social effects of supply chains can be publicly connected to named products, corporate narratives become contestable in real time.

[…]

Source: How shaming unethical brands makes companies improve their behavior

Discord’s First Age-Verification ‘Experiment’ Alarms Hackers: Supplier “Persona” not only leaky, but also uses IDs for purposes unrelated to age

Last week, Discord users reported seeing prompts to submit personal information to Persona, a third-party age-verification service. As Discord commits to universal age-verification, the new measures have come under intense scrutiny after previous security failures. Now a trio of hacktivists say they’ve successfully breached Persona, getting a closer look at how the company uses submitted biometrics. They say their findings raise alarms beyond the possibility of leaks.

According to The Rage, Persona’s front-end security left a lot to be desired. Worse, however, were investigative findings that suggested Persona’s surveillance of the users whose data it collected was way more sprawling than originally believed.

“It was initially meant to be a passive recon investigation,” writes vmfunc, a cybersecurity researcher and one of the hackers, “that quickly turned into a rabbit hole deep dive into how commercial AI and federal government operations work together to violate our privacy every waking second.”

On top of finding it surprisingly easy to access data gathered by Persona, the research showed that faces and biometrics were not just being scanned for age verification, but flagged for suspicious behavior and bounced off watchlists as well. To some, particularly those who don’t worry about their face being deemed “suspicious,” this may not sound like an Orwellian level of intrusion, until you remember Persona’s full network.

Persona received $150 million in 2021 from the Founders Fund, a long-running tech investor group headed by Peter Thiel. Thiel’s main business, on top of palling around in Jeffrey Epstein’s emails and waiting for the antichrist, is Palantir, an intentionally ominously-named data-brokering service that is currently peddling user information to support ICE raids. The findings of vmfunc and co.’s research don’t directly tether Persona and Discord’s operations to Palantir or Thiel, but it wouldn’t be conspiratorial to point out that all this data seems to be funnelling in similar directions.

Trust but verify

Persona has confirmed the breach, with CEO Rick Song responding and even thanking the hackers for flagging the security exploit. This has not, however, tempered those hacktivists’ concerns about how the user information is ultimately being used.

“Transparently, we are actively working on a couple of potential contracts which would be publicly visible if we move forward,” writes Christie Kim, chief operating officer at Persona, in an email regarding the security breach and speculation around Discord. “However, these engagements are strictly for workforce account security of government employees and do not include ICE or any agency within the Department of Homeland Security.”

After the alarm was initially raised about Persona, Discord claimed its work with the Thiel-backed firm was only temporary, and that it didn’t have new contracts with it moving forward. It also promised user info was being wiped from servers within seven days of being gathered.

Source: Discord’s First Age-Verification ‘Experiment’ Alarms Hackers

Country that censors (criticism of the prez by lawfare; books; reporters in the White House; etc.) Is Working on a Site to Help Europeans Bypass Content Bans on Hate Speech

The U.S. State Department is reportedly working on an online portal that would allow people in Europe and other regions to access content banned by their governments. The move comes at a time when conservative figures like Elon Musk and J.D. Vance have railed against European attempts to clamp down on hate speech, terrorist propaganda, and revenge porn.

Reuters reported Wednesday, citing unnamed sources, that the initiative is intended to fight censorship and could include a virtual private network (VPN) feature.

The portal would reportedly be hosted at Freedom.gov. The site currently displays a landing page featuring a small animation of Paul Revere on horseback above the words “Freedom is Coming.” Smaller text below reads, “Information is power. Reclaim your human right to free expression. Get Ready.”

[…]

Reuters reported that the portal was expected to launch at the conference, but was delayed.

“We don’t comment on draft laws, and that’s what it is,” European Commission Spokesperson Thomas Regnier said when asked about the portal during a press briefing today. “Let me say that the Commission does not block access to websites. It’s up to national authorities to do this kind of thing. If a website breaches EU law or international law, talking about sites which promote hate speech, for example, or have terrorist content, obviously that does not belong in Europe. That’s why we have a regulation on digital services, the DSA, which protects freedom of expression.”

[…]

Ironically, The Guardian reported today that DOGE cuts to the State Department and U.S. Agency for Global Media’s Internet Freedom program have effectively gutted the program.

The initiative funded grassroots tools to help people bypass government internet controls worldwide. It distributed over $500 million over the past decade but issued no funding in 2025, according to The Guardian.

Source: The US Is Working on a Site to Help Europeans Bypass Content Bans on Hate Speech: Report

Leaked Email Suggests Ring Plans To Expand ‘Search Party’ Surveillance Beyond Dogs, surprising? Not really.

Ring’s AI-powered “Search Party” feature, which links neighborhood cameras into a networked surveillance system to find lost dogs, was never intended to stop at pets, according to an internal email from founder Jamie Siminoff obtained by 404 Media.

Siminoff told employees in early October, shortly after the feature launched, that Search Party was introduced “first for finding dogs” and that the technology would eventually help “zero out crime in neighborhoods.” The on-by-default feature faced intense backlash after Ring promoted it during a Super Bowl ad. Ring has since also rolled out “Familiar Faces,” a facial recognition tool that identifies friends and family on a user’s camera, and “Fire Watch,” an AI-based fire alert system.

A Ring spokesperson told the publication that Search Party does not process human biometrics or track people.

Source: Leaked Email Suggests Ring Plans To Expand ‘Search Party’ Surveillance Beyond Dogs | Slashdot

Watch how capitalism breaks innovation: OpenAI hires OpenClaw AI agent developer Peter Steinberger

OpenClaw is a huge disruptor in the agentic AI space – it has an actual orchestrator, is super easy to implement and destroys many business models. You can bet that despite all the noises, the open source repository will be laid to rest and all new development will go into the closed OpenAI space so they can regain their competitive advantage and maybe actually make some money, despite the best efforts of megalomaniacal compulsive liar and general poor man’s baddie, Sam Altman.

So this move kills a real gamechanger and moves EU top talent to the US in one go. What great things money does for us. Not.

Peter Steinberger, creator of popular open-source artificial intelligence program OpenClaw, will be joining OpenAI Inc. to help bolster the ChatGPT developer’s product offerings.

“OpenClaw will live in a foundation as an open source project that OpenAI will continue to support,” OpenAI Chief Executive Officer Sam Altman wrote in a post on X Sunday, adding that Steinberger is “joining OpenAI to drive the next generation of personal agents.”

Steinberger wrote in a separate post on his website Saturday that he will be joining OpenAI to be “part of the frontier of AI research and development, and continue building.”

“It’s always been important to me that OpenClaw stays open source and given the freedom to flourish,” Steinberger wrote. “Ultimately, I felt OpenAI was the best place to continue pushing on my vision and expand its reach.”

OpenClaw, previously called Clawdbot and Moltbot, has garnered a cult following since launching in November for its ability to operate autonomously, clearing users’ inboxes, making restaurant reservations and checking in for flights, among other tasks. Users can also connect the tool to messaging apps such as WhatsApp and Slack and direct the agent through those platforms.

“My next mission is to build an agent that even my mum can use,” Steinberger wrote. “That’ll need a much broader change, a lot more thought on how to do it safely, and access to the very latest models and research.”

[…]

 

Source: OpenAI hires OpenClaw AI agent developer Peter Steinberg | Fortune

Discord will require a face scan or ID for full access next month

The creeps staring into your bedroom brigade is winning and age verification is being normalised by a group of goons who really really want to know every poop you take. It’s a dangerous and insanely bad idea, but fortunately people are starting to wise up.

Discord announced on Monday that it’s rolling out age verification on its platform globally starting next month, when it will automatically set all users’ accounts to a “teen-appropriate” experience unless they demonstrate that they’re adults.

“For most adults, age verification won’t be required, as Discord’s age inference model uses account information such as account tenure, device and activity data, and aggregated, high-level patterns across Discord communities. Discord does not use private messages or any message content in this process,” Savannah Badalich, Discord’s global head of product policy, tells The Verge.

Users who aren’t verified as adults will not be able to access age-restricted servers and channels, won’t be able to speak in Discord’s livestream-like “stage” channels, and will see content filters for any content Discord detects as graphic or sensitive. They will also get warning prompts for friend requests from potentially unfamiliar users, and DMs from unfamiliar users will be automatically filtered into a separate inbox.

Direct messages and servers that are not age-restricted will continue to function normally, but users won’t be able to send messages or view content in an age-restricted server until they complete the age check process, even if it’s a server they were part of before age verification rolled out. Badalich says those servers will be “obfuscated” with a black screen until the user verifies they’re an adult. Users also won’t be able to join any new age-restricted servers without verifying their age.

Discord asking a user for age verification after opening a restricted server
Discord asking a user for age verification to unblur sensitive content
Unverified users won’t be able to enter age-restricted servers. Image: Discord

Discord’s global age verification launch is part of a wave of similar moves at other online platforms, driven by an international legal push for age checks and stronger child safety measures. This is not the first time Discord has implemented some form of age verification, either. It initially rolled out age checks for users in the UK and Australia last year, which some users figured out how to circumvent using Death Stranding’s photo mode. Badalich says Discord “immediately fixed it after a week,” but expects users will continue finding creative ways to try getting around the age checks, adding that Discord will “try to bug bash as much as we possibly can.”

It’s not just teens trying to cheat the system who might attempt to dodge age checks. Adult users could avoid verifying, as well, due to concerns around data privacy, particularly if they don’t want to use an ID to verify their age. In October, one of Discord’s former third-party vendors suffered a data breach that exposed users’ age verification data, including images of government IDs.

If Discord’s age inference model can’t determine a user’s age, a government ID might still be required for age verification in its global rollout. According to Discord, to remove the new “teen-by-default” changes and limitations, “users can choose to use facial age estimation or submit a form of identification to [Discord’s] vendor partners, with more options coming in the future.”

The first option uses AI to analyze a user’s video selfie, which Discord says never leaves the user’s device. If the age group estimate (teen or adult) from the selfie is incorrect, users can appeal it or verify with a photo of an identity document instead. That document will be verified by a third party vendor, but Discord says the images of those documents “are deleted quickly — in most cases, immediately after age confirmation.”

A Discord user profile showing a “teen” age group and age verification options
Users can view and update their age group from their profile. Image: Discord

Badalich also says after the October data breach, Discord “immediately stopped doing any sort of age verification flows with that vendor” and is now using a different third-party vendor. She adds, “We’re not doing biometric scanning [or] facial recognition. We’re doing facial estimation. The ID is immediately deleted. We do not keep any information around like your name, the city that you live in, if you used a birth certificate or something else, any of that information.”

“A majority of people are not going to see a change in their experience.”

Badalich goes on to explain that the addition of age assurance will mainly impact adult content: “A majority of people on Discord are not necessarily looking at explicit or graphic content. When we say that, we’re really talking about things that are truly adult content [and] age inappropriate for a teen. So, the way that it will work is a majority of people are not going to see a change in their experience.”

Even so, there’s still a risk that some users will leave Discord as a result of the age verification rollout. “We do expect that there will be some sort of hit there, and we are incorporating that into what our planning looks like,” Badalich says. “We’ll find other ways to bring users back.”

Source: Discord will require a face scan or ID for full access next month | The Verge

If you want to look at more people blowing up about age verification you can try this Slashdot thread: Discord Will Require a Face Scan or ID for Full Access Next Month

How to Disable Ring’s Creepy ‘Search Party’ Feature – But if you bought a Ring, you probably don’t mind blanket corpo and govt surveillance anyway I guess.

If you tuned into Super Bowl LX on Sunday, you may have caught Ring’s big ad of the night: The company tried to tap into us dog owners’ collective fear of losing our pets, demonstrating how its new “Search Party” feature could reunite missing dogs with their owners. Ring probably thought audiences would love the feature, with existing users happy to know Search Party exists, and new customers looking to buy one of its doorbells to help find lost dogs in the neighborhood.

Of course, that’s not what happened at all. Rather than evoke heartwarming feelings, the ad scared the shit out of many of us who caught it. That’s due to how the feature itself works: Search Party uses AI to identify pets that run through its field of vision. But it’s not just your camera doing this: The feature pools together all of the Ring cameras that have Search Party enabled to look for your lost dog. In effect, it turns all these individual devices into a Ring network, or, perhaps in harsher terms, a surveillance state. It does so in pursuit of a noble goal, sure, but at what cost?

The reactions I saw online ranged from shock to anger. Some were surprised to learn that Ring cameras could even do this, seeing as you might assume your Ring doorbell is, well, yours. Others were furious, lashing out at anyone who thinks Search Party is a good idea, or that the feature isn’t the beginning of a very slippery slope. My favorite take was one comparing Search Party to Batman’s cellphone network surveillance system from The Dark Knight, which famously compromised morals and ethics in the name of catching the bad guy.

According to Ring, Search Party is a perfectly safe and wholesome way to look for lost dogs in the area. The company’s FAQs explain that users can opt out of the feature at any time, and that only Ring doorbells in the area around the home that started the current Search Party will look for the dog. In addition, Ring says the feature works based on saved videos, so Ring doorbells without a subscription and a saved video history won’t be able to participate. (Though I’m not sure the fact that the feature works with saved videos assuages any fears on my end.)

I am not pro-missing dogs. But I am pro-privacy. At the risk of sounding alarmist, Search Party really does seem like a slippery slope. Today, the neighborhood is banding together to find Mrs. Smith’s missing goldendoodle; tomorrow, they’re looking for a “suspicious person.” Innocent until proven guilty, unless caught on your neighbor’s Ring camera.

Can law enforcement request Search Party data?

Here’s the big question regarding Search Party and its slippery slope: Can law enforcement—including local police, FBI, or ICE—request saved videos from Ring cameras participating in Search Party in order to track down people, not pets?

You won’t be surprised to learn that that wasn’t answered by Ring’s Super Bowl ad, nor is it part of the official Search Party FAQs. However, we do know that, as of October 2025, Ring partnered with both Flock Safety and Axon. Axon makes and sells equipment for law enforcement, like tasers and body cameras, while Flock Safety is a security company that offers services like license plate recognition and video surveillance. These partnerships allow law enforcement to post requests for Ring footage directly to the Ring app. Ring users in the vicinity of the request have the choice to either share that footage or ignore the petition. Flock Safety says that the identities of users who do choose to share footage remain private.

Of course, law enforcement isn’t always going to ask for volunteers. According to Ring’s law enforcement guidelines, the company will comply with “valid and binding search warrants.” That’s not surprising, of course. But the company does note an important distinction in what it will share: Ring will share “non-content” data in response to both subpoenas and warrants, including a user’s name, home address, email address, billing info, the date they made the account, purchase history, and service usage data. The company says it will not share “content” – meaning the data you store in your account, like videos and recordings of service calls – in response to subpoenas, only warrants.

Ring also says it will tell you if it shares your data with law enforcement, unless it is barred from doing so, or it’s clear your Ring data breaks the law. This applies to both standard data requests and “emergency” requests.

Based on its current language, it seems that Ring would give up the footage used in Search Party to law enforcement, assuming they present a valid warrant. The thing is, it’s not clear whether Search Party has any actual impact on that data: For example, imagine a dog runs in front of your Ring doorbell, and the footage is saved to your history. Now, a valid warrant comes through requesting your footage. Whether you have Search Party enabled or disabled, Ring may share that footage with law enforcement—the feature itself had no impact on whether your doorbell saved the footage. The difference would be whether law enforcement has access to the identification data within the footage: Can they see that Ring thinks that dog is, in fact, Mrs. Smith’s goldendoodle, or do they simply see a video of a fluffy pup running past your house? If so, that would be your slippery slope indeed: If law enforcement could obtain your footage with facial recognition data of the suspect they’re looking for, we’d be in particularly dangerous territory.

I’ve reached out to Ring for comment on this side of Search Party, and I hope to hear back to provide a fuller answer to this question.

How to opt out of Search Party on your Ring cameras

If you’d rather not bother with the feature at all, Ring says it’s easy enough to turn off. To start, open the Ring app, tap the hamburger menu, then choose “Control Center.” Here, choose “Search Party,” then tap the blue pet icon next to each of your cameras to toggle off “Search for Lost Pets.”

To be honest, if I had a Ring camera, I’d go one step further and delete my saved videos. Law enforcement can’t obtain what I don’t save. If you want to delete these clips from your Ring account, head to the hamburger menu in the app, tap “History,” choose the “pencil icon,” then tap “Delete All” to wipe your entire history.

Source: How to Disable Ring’s ‘Search Party’ Feature | Lifehacker

Team USA, Vance Boos Heard Through Anti-Boo Technology Deployed in Frosty Reception at Italy’s Winter Olympics

After this alternate-reality type of tool was used to great shame at the Eurovision Song Contest, IOC organisers tried the same lie on the public, this time on a global scale. Unfortunately all the live commentators were talking about the booing while it sounded like cheering, until JD Vance appeared and the technology was unable to compensate any longer. There are now Americans who think their local news channels are censoring for them. Why do the organisers feel the need to lie to the public about the reception their audience is giving? It’s patronising and dishonest.

[…] In an unmistakable sign of Europe’s rapidly dimming view on America, the U.S. delegation entered the San Siro stadium here on Friday night to a chorus of boos and disapproving whistles from the international crowd of more than 65,000. The jeering only intensified when Vice President JD Vance appeared on the big screen during Team USA’s arrival. 

The only other team to receive similar treatment was Israel.

Olympic organizers had braced for the possibility of anti-American sentiment inside the stadium. Small protests had already cropped up on the streets of Milan against the planned presence of U.S. Immigration and Customs Enforcement agents in the city. Asked before the Games on how the Americans might be received, IOC president Kirsty Coventry said she hoped that the occasion would be “seen by everyone as an opportunity to be respectful.”

[…]

Friday’s ceremony wasn’t, however, an event that brought every Olympic athlete together. 

For the first time, the official curtain-raising was held across four disparate venues, from the stadium on the edge of Milan to the ski town of Cortina in the Dolomite mountains to smaller sites in Livigno and Predazzo. That meant only part of the 232-strong U.S. delegation heard the Milanese reaction.

[…]

And if anyone thought that this might be a sign of Italy’s distaste for North America at large, the locals made it clear that their beef was specifically with the U.S.

The Italians reserved some of the loudest cheers of the night for Mexico and Canada.

[…]

Source: Team USA, Vance Booed in Frosty Reception at Italy’s Winter Olympics

Commission trials European open source communications software as a backup for Teams (not a replacement)

All this talk of digital self sufficiency, data supremacy, etc and the EU will continue to feed the hand that strangles it, whilst not paying a cent to EU companies that could build the same (and better) functionalities.

The European Commission is trialling using European open source software to run its internal communications, a spokesperson confirmed to Euractiv.

The move comes at a time of growing concern within European administrations over their heavy dependency on US software for day-to-day work amid increasingly unreliable transatlantic relations.

“As part of our efforts to use more sovereign digital solutions, the European Commission is preparing an internal communication solution based on the Matrix protocol,” the spokesperson told Euractiv.

Matrix is an open source, community-developed messaging protocol shepherded by a non-profit that’s headquartered in London. It’s already widely used for public messengers across Europe, with the French government, German healthcare providers and European armed forces all using tools built on the protocol.
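A side note on why Matrix suits this kind of role: it is an open HTTP+JSON protocol, so a backup client is essentially a thin layer over a few REST calls. Below is a minimal sketch of the message-send endpoint shape from the Matrix client-server spec; the homeserver URL, room ID, and transaction ID are entirely made up for illustration and reflect nothing about the Commission’s actual deployment:

```python
import json

# Placeholder values, purely illustrative -- not a real deployment.
HOMESERVER = "https://matrix.example.eu"
ROOM_ID = "!backupchannel:matrix.example.eu"
TXN_ID = "m1"  # client-chosen transaction id, unique per request

# Per the Matrix client-server spec, sending a message is a single
# authenticated PUT to this endpoint:
endpoint = (
    f"{HOMESERVER}/_matrix/client/v3/rooms/{ROOM_ID}"
    f"/send/m.room.message/{TXN_ID}"
)

# The event body is plain JSON:
event = {"msgtype": "m.text", "body": "Teams is down, switching to backup."}

print(endpoint)
print(json.dumps(event))
```

In practice a client would issue this as a PUT carrying an access token; libraries such as matrix-nio wrap these calls, which is part of why the protocol is straightforward for governments to build sovereign tooling on.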

Sovereign backup 

The Commission is looking into using Matrix as a “complement and backup solution” to existing internal communications software, the spokesperson said.

That means there are no plans for a Matrix-based solution to replace Microsoft Teams, which is currently widely found on the Commission’s computers, according to remarks by an EU official at a conference in October.

A different open source tool – namely the Signal messaging app, which is also a favourite with journalists – is fulfilling the backup role at present, but the software isn’t flexible enough for a large organisation like the Commission, the official also said.

The Commission is also eyeing another use case for the Matrix-based comms tool: It could be used to connect to other Union bodies in the future, which are currently lacking a common tool to communicate securely.

[…]

Source: Commission trials European open source communications software | Euractiv

Google Pixel Bug Turns Microphone on for Incoming Callers Leaving Voicemail

[…] Called “Take a Message,” the buggy feature was released last year and is supposed to automatically transcribe voicemails as they’re coming in, as well as detect and mark spam calls. Unfortunately, according to reports from multiple users on Reddit (as initially spotted by 9to5Google), the feature has started turning on the microphone while taking voicemails, allowing whoever is leaving you a voicemail to hear you.

[…]

The issue has been reported affecting Pixel devices ranging from the Pixel 4 to the Pixel 10, and on a recent support page, Google’s finally acknowledging it. However, the company’s action might not be enough, depending on how cautious you want to be.

According to Community Manager Siri Tejaswini, the company has “investigated this issue,” and has confirmed it “affects a very small subset of Pixel 4 and 5 devices under very specific and rare circumstances.” The post doesn’t go any further on the how and why of the diagnosis, but says that Google is now disabling Take a Message and “next-gen Call Screen features” on these devices.

[…]

While it’s encouraging that Google is taking action on the Take a Message bug, the company only seems to be acknowledging it for Pixel 4 and Pixel 5 models, at least for now. I’ve asked Google whether owners of other Pixel models should be worried, as user reports seem split on this. Still, because some have mentioned an issue with even the most up-to-date Pixel phone, if you want to practice your own abundance of caution, it might be worth disabling Take a Message on your device, regardless of its model number.

To do this, open your Phone app, then tap the three-lined menu icon at the top-left of the page. Navigate to Settings > Call Assist > Take a Message, and toggle the feature off.

Source: This Pixel Bug Leaked Audio to Incoming Callers, and Google’s Fix Might Not Be Enough | Lifehacker

Apple buys creepy Israeli spy startup Q.ai for $2b in 2nd largest acquisition in its history

Apple, Meta, and Google are locked in a fierce battle to lead the next wave of AI, and they’ve recently increased their focus on hardware. With its latest acquisition of the AI startup Q.ai, Apple aims to gain an edge, particularly in the audio sector.

​As first reported by Reuters, Apple has acquired Q.ai, an Israeli startup specializing in imaging and machine learning, particularly technologies that enable devices to interpret whispered speech and enhance audio in noisy environments. Apple has been adding new AI features to its AirPods, including the live translation capability introduced last year.

The company has also developed technology that detects subtle facial muscle activity, which could help the tech giant enhance the Vision Pro headset.

The Financial Times reported that the deal is valued at nearly $2 billion, making it Apple’s second-largest acquisition to date, after buying Beats Electronics for $3 billion in 2014.

​Notably, this is the second time CEO Aviad Maizels has sold a company to Apple. In 2013, he sold PrimeSense, a 3D-sensing company that played a key role in Apple’s transition from fingerprint sensors to facial recognition on iPhones.

Q.ai launched in 2022 and is backed by Kleiner Perkins, Gradient Ventures, and others. ​Its founding team, including Maizels and co-founders Yonatan Wexler and Avi Barliya, will join Apple as part of the acquisition.

[…]

Source: Apple buys Israeli startup Q.ai as the AI race heats up | TechCrunch

ICE takes aim at data held by advertising and tech firms

Let us not forget that the reason Nazi Germany was so effective at deporting Jews from the Netherlands was in large part because of the great databases the Netherlands kept at that time, containing religious and ethnic information on its population.

It’s not enough to have its agents in streets and schools; ICE now wants to see what data online ads already collect about you. US Immigration and Customs Enforcement last week issued a Request for Information (RFI) asking data and ad tech brokers how they could help in its mission.

The RFI is not a solicitation for bids. Rather it represents an attempt to conduct market research into the spectrum of data – personal, financial, location, health, and so on – that ICE investigators can source from technology and advertising companies.

“[T]he Government is seeking to understand the current state of Ad Tech compliant and location data services available to federal investigative and operational entities, considering regulatory constraints and privacy expectations of support investigations activities,” the RFI explains.

Issued on Friday, January 23, 2026, one day prior to the shooting of VA nurse Alex Pretti by a federal immigration agent, two weeks after the shooting of Renée Good, and three weeks after the shooting of Keith Porter Jr, the RFI lands amid growing disapproval of ICE tactics and mounting pressure to withhold funding for the agency.

ICE did not immediately respond to a request to elaborate on how it might use ad tech data and to share whether any companies have responded to its invitation.

The RFI follows a similar solicitation published last October for a contractor capable of providing ICE with open source intelligence and social media information to assist the ICE Enforcement and Removal Operations (ERO) directorate’s Targeting Operations Division – tasked with finding and removing “aliens that pose a threat to public safety or national security.”

[…]

Tom Bowman, policy counsel with the Center for Democracy & Technology’s (CDT) Security & Surveillance Project, told The Register in a phone interview that ICE is attempting to rebrand surveillance as a commercial transaction.

“But that doesn’t make the surveillance any less intrusive or any less constitutionally suspect,” said Bowman. “This inquiry specifically underscores what really is a long-standing problem – that government agencies have been able to sidestep Fourth Amendment protections by purchasing data that would otherwise need a warrant to collect.”

The data derived from ad tech and various technology businesses, said Bowman, can reveal intimate details about people’s lives, including visits to medical facilities and places of worship.

[…]

“Ad tech compliance regimes were never designed to protect people from government surveillance or coercive enforcement,” he said. “Ad tech data is often collected via consent that is meaningless. The data flows are opaque. And then these types of downstream uses are really difficult to control.”

Bowman argues that while there’s been a broad failure to meaningfully regulate data brokers, legislative solutions are possible.

[…]

Source: ICE takes aim at data held by advertising and tech firms • The Register

Following Apple, now Google to pay $68m to settle lawsuit claiming it recorded and sold private conversations

Google has agreed to pay $68m (£51m) to settle a lawsuit claiming it secretly listened to people’s private conversations through their phones.

Users accused Google Assistant – a virtual assistant present on many Android devices – of recording private conversations after it was inadvertently triggered on their devices.

They claimed the recordings were then shared with advertisers in order to send them targeted advertising.

The BBC has contacted Google for comment. But in a filing seeking to settle the case, it denied wrongdoing and said it was seeking to avoid litigation.

Google Assistant is designed to wait in standby mode until it hears a particular phrase – typically “Hey Google” – which activates it.

The phone then records what it hears and sends the recording to Google’s servers where it can be analysed.

[…]

The claim has been brought as a class action lawsuit rather than an individual case – meaning if it is approved, the money will be paid out across many different claimants.

Those eligible for a payout will have owned Google devices dating back to May 2016.

But lawyers for the plaintiffs may ask for up to one-third of the settlement – amounting to about $22m in legal fees.

It follows a similar case in January where Apple agreed to pay $95m to settle a case alleging some of its devices were listening to people through its voice-activated assistant Siri without their permission.

The tech firm also denied any wrongdoing, as well as claims that it “recorded, disclosed to third parties, or failed to delete, conversations recorded as the result of a Siri activation” without consent.

Source: Google to pay $68m to settle lawsuit claiming it recorded private conversations

Microsoft will give the FBI your BitLocker keys if asked. Can do so because of cloud accounts.

Great target for hackers then, the server with unencrypted BitLocker keys on it.

Microsoft has confirmed in a statement to Forbes that the company will provide the FBI access to BitLocker encryption keys if a valid legal order is presented. These keys enable the ability to decrypt and access the data on a computer running Windows, giving law enforcement the means to break into a device and access its data.

The news comes as Forbes reports that Microsoft gave the FBI the BitLocker encryption keys to access a device in Guam that law enforcement believed to have “evidence that would help prove individuals handling the island’s Covid unemployment assistance program were part of a plot to steal funds” in early 2025.

Source: Microsoft gave FBI BitLocker keys, raising privacy fears | Windows Central

Matt Damon Says Netflix Pushes Directors to Reiterate the Plot for Viewers on Their Phones

Matt Damon has claimed that Netflix pushes directors to reiterate the plot for viewers who are watching while on their phones.

The actor has just released the new action film The Rip on the streaming platform, which sees him reunite with frequent collaborator Ben Affleck.

During an appearance on the Joe Rogan Experience podcast alongside his co-star, Damon spoke about working with Netflix, saying the streamer wants bigger action earlier in such films and pushes for the plot to be repeated to accommodate shortened attention spans.

“The standard way to make an action movie that we learned was, you usually have three set pieces,” he said. “One in the first act, one in the second, one in the third… You spend most of your money on that one in the third act. That’s your finale.

“And now they’re like, ‘Can we get a big one in the first five minutes? We want people to stay tuned in. And it wouldn’t be terrible if you reiterated the plot three or four times in the dialogue because people are on their phones while they’re watching.’”

Affleck went on to praise the Netflix series Adolescence, which became a huge success last year, and the fact that it “didn’t do any of that shit”.

Threads Is Now Clearly More Popular Than X in Mobile App Form

[…]

Source: Threads Is Now Clearly More Popular Than X (in Mobile App Form), Report Says

Insane that people are still on X. Numbers for both platforms will be inflated due to embeds on the web.

What Happened After Security Researchers Found 60 Flock Cameras Livestreaming to the Internet

A couple months ago, YouTuber Benn Jordan “found vulnerabilities in some of Flock’s license plate reader cameras,” reports 404 Media’s Jason Koebler. “He reached out to me to tell me he had learned that some of Flock’s Condor cameras were left live-streaming to the open internet.”

This led to a remarkable article in which Koebler confirmed the breach by visiting a Flock surveillance camera mounted on a California traffic signal. (“On my phone, I am watching myself in real time as the camera records and livestreams me — without any password or login — to the open internet… Hundreds of miles away, my colleagues are remotely watching me too through the exposed feed.”)

Flock left livestreams and administrator control panels for at least 60 of its AI-enabled Condor cameras around the country exposed to the open internet, where anyone could watch them, download 30 days’ worth of archived video, change settings, see log files, and run diagnostics. Unlike many of Flock’s cameras, which are designed to capture license plates as people drive by, Flock’s Condor cameras are pan-tilt-zoom (PTZ) cameras designed to record and track people, not vehicles. Condor cameras can be set to automatically zoom in on people’s faces…

The exposure was initially discovered by YouTuber and technologist Benn Jordan and was shared with security researcher Jon “GainSec” Gaines, who recently found numerous vulnerabilities in several other models of Flock’s automated license plate reader (ALPR) cameras.
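The core failure here is mundane: admin interfaces answering HTTP requests with no credentials at all. A city could catch this class of problem before deployment with a trivial probe. The sketch below is illustrative only – the endpoint path and address are hypothetical, and real Flock devices are not assumed to expose this exact URL.

```python
import urllib.error
import urllib.request


def requires_auth(url: str, timeout: float = 5.0) -> bool:
    """Return True if the endpoint demands credentials, False if it answers openly.

    A 2xx response to an unauthenticated GET means the interface is exposed
    to anyone who finds the address.
    """
    try:
        with urllib.request.urlopen(url, timeout=timeout):
            return False  # answered without credentials: exposed
    except urllib.error.HTTPError as e:
        # 401/403 means at least some access control is in place.
        return e.code in (401, 403)


# Hypothetical usage against a device's admin panel:
# requires_auth("http://192.0.2.10/admin")
```

A probe this simple obviously doesn't constitute a security audit – it only flags the worst case the researchers describe, where no username or password was required at all.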
Jordan appeared this week as a guest on Koebler’s own YouTube channel, and has released a video of his own about the experience, titled “We Hacked Flock Safety Cameras in under 30 Seconds.” (Thanks to Slashdot reader beadon for sharing the link.) Together, Jordan and 404 Media also created another video three weeks ago, titled “The Flock Camera Leak is Like Netflix for Stalkers,” which includes footage he says was “completely accessible at the time Flock Safety was telling cities that the devices are secure after they’re deployed.”

The video decries cities “too lazy to conduct their own security audit or research the efficacy versus risk,” but also calls weak security “an industry-wide problem.” Jordan explains in the video how he “very easily found the administration interfaces for dozens of Flock safety cameras…” — but also what happened next:

None of the data or video footage was encrypted. There was no username or password required. These were all completely public-facing, for the world to see… Making any modification to the cameras is illegal, so I didn’t do this. But I had the ability to delete any of the video footage or evidence by simply pressing a button. I could see the paths where all of the evidence files were located on the file system…

During and after the process of conducting that research and making that video, I was visited by the police and had what I believed to be private investigators outside my home photographing me and my property and bothering my neighbors. Jon Gaines, aka GainSec, the brains behind most of this research, lost employment within 48 hours of the video being released. And the sad reality is that I don’t view these things as consequences or punishment for researching security vulnerabilities. I view these as consequences and punishment for doing it ethically and transparently.

I’ve been contacted by people on or communicating with civic councils who found my videos concerning, and they shared Flock Safety’s response with me. The company claimed that the devices in my video did not reflect the security standards of the ones being publicly deployed. The CEO even posted on LinkedIn and boasted about Flock Safety’s security policies. So, I formally and publicly offered to personally fund security research into Flock Safety’s deployed ecosystem. But the law prevents me from touching their live devices. So, all I needed was their permission so I wouldn’t get arrested. And I was even willing to let them supervise this research.

I got no response.

So instead, he read Flock’s official response to a security/surveillance industry research group — while standing in front of one of their security cameras, streaming his reading to the public internet.

“Might as well. It’s my tax dollars that paid for it.”

“Flock is committed to continuously improving security…”

Source: What Happened After Security Researchers Found 60 Flock Cameras Livestreaming to the Internet | Slashdot

For more on why Flock cameras are problematic, read here

CD Projekt Takes Down VR Mod for Cyberpunk – Because It Was Paid

Yes, the TOS don’t allow commercial mods, which has pluses and minuses. So yes, technically CD Projekt Red is in the right. However, some of these mods take a lot of work and time, and if you want to get paid for that, that is your right – just as much as it is your right not to buy the mod if you don’t like it. Whatever.

There are loads of paid external services that run on top of Amazon, Paypal, Ebay, Discord, most AI products are built on top of OpenAI, etc. It’s a valid (if risky, due to the dependency) way to create value for people.

It seems to me that the TOS are overextended, though. How can you legally determine what someone will do with a product they bought? US law is pretty bizarre in that respect, just as companies can get away with prohibiting reverse engineering, locking people into buying hugely overpriced repairs and replacement parts only from them. Maybe look at China to see how this kind of law kills innovation, and look at monopolies to see how this drives costs up and removes choice for consumers.

[…] Now that the dust has settled, I’m even more sorry to announce that we are leaving behind an adventure that so many of you deeply loved and enjoyed. CD PROJEKT S.A. decided that they would follow in Take-Two Interactive Software’s steps and issued a DMCA notice against me for the removal of the Cyberpunk 2077 VR mod.

At least they were a little more open about it, and I could get a reply both from their legal department and from the VP of business development. But in the end it amounted to the same iron-clad corpo logic: every little action that a company takes is in the name of money, but everything that modders do must be absolutely for free.

As usual they stretch the concept of “derivative work” until it’s paper-thin, as though a system that allows visualizing 40+ games in fully immersive 3D VR was somehow built making use of their intellectual property. And as usual they give absolutely zero f***s about how playing their game in VR made people happy, and they cannot just be grateful about the extra copies of the title they sold because of that—without ever having to pour money into producing an official conversion (no, they’re not planning to release their own VR port, in case you were wondering). […]

Source: Another one bites the dust | Patreon

Signal Founder Creates Truly Private GPT: Confer

When you use an AI service, you’re handing over your thoughts in plaintext. The operator stores them, trains on them, and – inevitably – will monetize them. You get a response; they get everything.

Confer works differently. In the previous post, we described how Confer encrypts your chat history with keys that never leave your devices. The remaining piece to consider is inference—the moment your prompt reaches an LLM and a response comes back.

Traditionally, end-to-end encryption works when the endpoints are devices under the control of a conversation’s participants. However, AI inference requires a server with GPUs to be an endpoint in the conversation. Someone has to run that server, but we want to prevent the people who are running it (us) from seeing prompts or the responses.

Confidential computing

This is the domain of confidential computing. Confidential computing uses hardware-enforced isolation to run code in a Trusted Execution Environment (TEE). The host machine provides CPU, memory, and power, but cannot access the TEE’s memory or execution state.

LLMs are fundamentally stateless—input in, output out—which makes them ideal for this environment. For Confer, we run inference inside a confidential VM. Your prompts are encrypted from your device directly into the TEE using Noise Pipes, processed there, and responses are encrypted back. The host never sees plaintext.

But this raises an obvious concern: even if we have encrypted pipes in and out of an encrypted environment, it really matters what is running inside that environment. The client needs assurance that the code running is actually doing what it claims.
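The flow described above – verify what is running inside the TEE, then encrypt prompts end-to-end into it – can be sketched in a few dozen lines. Everything below is a toy illustration, not Confer’s actual protocol: the measurement value is made up, the shared secret would really come from a Noise handshake with the attested server, and the HMAC-based cipher stands in for a vetted AEAD.

```python
import hashlib
import hmac
import secrets

# Hypothetical: the hash of the code the client expects to be running in the TEE.
EXPECTED_MEASUREMENT = hashlib.sha256(b"inference-server-v1").hexdigest()


def verify_attestation(quote: dict) -> bool:
    # In real confidential computing, the quote is signed by the CPU vendor's
    # attestation key; this toy only checks the reported code measurement.
    return hmac.compare_digest(quote["measurement"], EXPECTED_MEASUREMENT)


def derive_keys(shared_secret: bytes):
    # HKDF-like expansion: separate keys for encryption and authentication.
    enc = hmac.new(shared_secret, b"enc", hashlib.sha256).digest()
    mac = hmac.new(shared_secret, b"mac", hashlib.sha256).digest()
    return enc, mac


def _keystream(enc_key: bytes, nonce: bytes, length: int) -> bytes:
    # PRF in counter mode as a stream cipher (illustrative, not a vetted AEAD).
    out, counter = b"", 0
    while len(out) < length:
        out += hmac.new(enc_key, nonce + counter.to_bytes(4, "big"),
                        hashlib.sha256).digest()
        counter += 1
    return out[:length]


def seal(enc_key: bytes, mac_key: bytes, plaintext: bytes) -> bytes:
    nonce = secrets.token_bytes(16)
    ct = bytes(p ^ s for p, s in zip(plaintext, _keystream(enc_key, nonce, len(plaintext))))
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag


def open_(enc_key: bytes, mac_key: bytes, sealed: bytes) -> bytes:
    nonce, ct, tag = sealed[:16], sealed[16:-32], sealed[-32:]
    if not hmac.compare_digest(tag, hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()):
        raise ValueError("authentication failed")
    return bytes(c ^ s for c, s in zip(ct, _keystream(enc_key, nonce, len(ct))))


# Client refuses to send the prompt unless the TEE's measurement matches.
quote = {"measurement": EXPECTED_MEASUREMENT}  # would come from the server
assert verify_attestation(quote)
enc_key, mac_key = derive_keys(secrets.token_bytes(32))  # stand-in for a handshake
sealed = seal(enc_key, mac_key, b"my private prompt")
assert open_(enc_key, mac_key, sealed) == b"my private prompt"
```

In the real system, the attestation quote is signed by the CPU vendor and covers the full software stack, so a client can refuse to establish the encrypted channel with a server running anything other than the published code.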

[…]

Source: Private inference | Confer Blog

Europe is Rediscovering the Virtues of Cash

After spending years pushing digital payments to combat tax evasion and money laundering, European Union ministers decided in December to ban businesses from refusing cash. The reversal comes as 12% of European businesses flatly refused cash in 2024, up from 4% three years earlier.

Over one in three cinemas in the Netherlands no longer accept notes and coins. Cash usage across the euro area dropped from 79% of in-person transactions in 2016 to just 52% in 2024. Sweden leads the digital shift where 90% of purchases now happen digitally and cash represents under 1% of GDP compared to 22% in Japan.

The policy change stems from concerns about financial inclusion for elderly and poor populations who struggle with digital systems. Resilience worries also drove the decision after Spaniards facing nationwide power cuts last spring found themselves unable to buy food. European officials worry about dependence on American payment giants Visa and MasterCard. The EU now recommends citizens store enough cash to survive a week without electricity or internet access.

Source: Europe is Rediscovering the Virtues of Cash | Slashdot

Also, when under digital attack it’s useful to be able to get at your money. This is not theoretical – Russian attacks on banks regularly take down Finnish payment methods.

EU seeks feedback on Open Digital Ecosystems

It’s important you give your feedback on this:

The European Open Digital Ecosystem Strategy will set out:

  • a strategic approach to the open source sector in the EU that addresses the importance of open source as a crucial contribution to EU technological sovereignty, security and competitiveness
  • a strategic and operational framework to strengthen the use, development and reuse of open digital assets within the Commission, building on the results achieved under the 2020-2023 Commission Open Source Software Strategy.

Source: Call for evidence: European Open Digital Ecosystems

The US muscled the EU into adopting Article 6 of the EU Copyright Directive, preventing reverse engineering in return for free trade. By implementing tariffs, the US broke that agreement. There’s no reason not to delete Article 6 of the EUCD, and all the other laws that prevent European companies from jailbreaking iPhones and making their own app stores (minus Apple’s 30% commission), as well as ad-blockers for Facebook’s and Instagram’s apps (which would zero out EU revenue for Meta) – and, of course, jailbreaking tools for Xboxes, Teslas, and every make and model of American car, so European companies could offer service, parts, apps, and add-ons for them.

Video games need to remain runnable after official support ends and the servers shut down. We need to get out from under the high-tech lock-in scams, we need to get rid of e-waste, and we need to get back to ownership of the products we buy. This is an important part of digital sovereignty, and in an uncertain world with unreliable partners, the importance of being able to follow EU values needs to be underscored. FOSS – and allowing FOSS to develop – is an important lynchpin of this.