The EU Commission’s Alleged CSAM Regulation ‘Experts’, who supposedly gave it free rein to spy on everyone, can’t be found. OK then.


Everyone who wants client-side scanning to be a thing insists it’s a good idea with no potential downsides. The only hangup, they insist, is tech companies’ unwillingness to implement it. And by “implement,” I mean — in far too many cases — introducing deliberate (and exploitable!) weaknesses in end-to-end encryption.

End-to-end encryption only works if both ends are encrypted. Taking the encryption off one side to engage in content scanning makes it half of what it was. And if you get in the business of scanning users’ content for supposed child sexual abuse material (CSAM), governments may start asking you to “scan” for other stuff… like infringing content, terrorist stuff, people talking about crimes, stuff that contradicts the government’s narratives, things political rivals are saying. The list goes on and on.

Multiple experts have pointed out how the anti-CSAM efforts preferred by the EU would not only not work, but also subject millions of innocent people to the whims of malicious hackers and malicious governments. Governments also made these same points, finally forcing the EU Commission to back down on its attempt to undermine encryption, if not (practically) outlaw it entirely.

The Commission has always claimed its anti-encryption, pro-client-side scanning stance is backed by sound advice given to it by the experts it has consulted. But when asked who was consulted, the EU Commission has refused to answer the question. This is from the Irish Council for Civil Liberties (ICCL), which asked the Commission a simple question but — like Superintendent Chalmers in the “Steamed Hams” scene referenced below — was summarily rejected.

In response to a request for documents pertaining to the decision-making behind the proposed CSAM regulation, the European Commission failed to disclose a list of companies who were consulted about the technical feasibility of detecting CSAM without undermining encryption. This list “clearly fell within the scope” of the Irish Council for Civil Liberties’ request. 

If you’re not familiar with the reference, we’ll get you up to speed.

22 Short Films About Springfield is an episode of “The Simpsons” that originally aired in 1996. One particular “film” has become an internet meme legend: the one dealing with Principal Seymour Skinner’s attempt to impress his boss (Superintendent Chalmers) with a home-cooked meal.

One thing leads to another (and by one thing to another, I mean a fire in the kitchen as Skinner attempts to portray fast-food burgers as “steamed hams” and not the “steamed clams” promised earlier). That culminates in this spectacular cover-up by Principal Skinner when the superintendent asks about the extremely apparent fire occurring in the kitchen:

Principal Skinner: Oh well, that was wonderful. A good time was had by all. I’m pooped.

Chalmers: Yes. I should be– Good Lord! What is happening in there?

Principal Skinner: Aurora borealis.

Chalmers: Uh- Aurora borealis. At this time of year, at this time of day, in this part of the country, localized entirely within your kitchen?

Principal Skinner: Yes.

Chalmers [meekly]: May I see it?

Principal Skinner: No.

That is what happened here. Everyone opposing the EU Commission’s CSAM (i.e., “chat control”) efforts trotted out their experts, making it clearly apparent who was saying what and what their relevant expertise was. The EU insisted it had its own battery of experts. The ICCL said: “May we see them?”

The EU Commission: No.

Not good enough, said the ICCL. But that’s what a rights advocate would be expected to say. What’s less expected is the EU Commission’s ombudsman declaring the ICCL had the right to see this particularly specific aurora borealis.

After the Commission acknowledged to the EU Ombudsman that it, in fact, had such a list, but failed to disclose its existence to Dr Kris Shrishak, the Ombudsman held the Commission’s behaviour constituted “maladministration”.  

The Ombudsman held: “[t]he Commission did not identify the list of experts as falling within the scope of the complainant’s request. This means that the complainant did not have the opportunity to challenge (the reasons for) the institution’s refusal to disclose the document. This constitutes maladministration.” 

As the report further notes, the only existing documentation of this supposed consultation with experts has been reduced to a single self-serving document issued by the EU Commission. Any objections or interjections were added or subtracted as the EU Commission preferred before it presented a “final” version that served its purposes. Any supporting documentation, including comments from participating stakeholders, was sent to the digital shredder.

As concerns the EUIF meetings, the Commission representatives explained that three online technical workshops took place in 2020. During the first workshop, academics, experts and companies were invited to share their perspectives on the matter as well as any documents that could be valuable for the discussion. After this workshop, a first draft of the ‘outcome document’ was produced, which summarises the input given orally by the participants and references a number of relevant documents. This first draft was shared with the participants via an online file sharing service and some participants provided written comments. Other participants commented orally on the first draft during the second workshop. Those contributions were then added to the final version of the ‘outcome document’ that was presented during the third and final workshop for the participants’ endorsement. This ‘outcome document’ is the only document that was produced in relation to the substance of these workshops. It was subsequently shared with the EUIF. One year later, it was used as supporting information to the impact assessment report.

In other words, the EU took what it liked and included it. The rest of it disappeared from the permanent record, supposedly because the EU Commission routinely purges any email communications more than two years old. This is obviously ridiculous in this context, considering this particular piece of legislation has been under discussion for far longer than that.

But, in the end, the EU Commission wins because it’s the larger bureaucracy. The ombudsman refused to issue a recommendation. Instead, it instructed the Commission to treat the ICCL’s request as “new” and perform another search for documents. “Swiftly.” Great, as far as that goes. But it doesn’t go far. The ombudsman also says it believes the EU Commission when it says only its version of the EUIF report survived the periodic document cull.

In the end, all that survives is this: the EU consulted with affected entities. It asked them to comment on the proposal. It folded those comments into its presentation. It likely presented only comments that supported its efforts. Dissenting opinions were auto-culled by EU Commission email protocols. It never sought further input, despite having passed the two-year mark without having converted the proposal into law. All that’s left, the ombudsman says, is likely a one-sided version of the Commission’s proposal. And if the ICCL doesn’t like it, well… it will have to find some other way to argue with the “experts” the Commission either ignored or auto-deleted. The government wins, even without winning arguments. Go figure.

Source: Steamed Hams, Except It’s The EU Commission’s Alleged CSAM Regulation ‘Experts’ | Techdirt

Decoupling for IT Security (=privacy)

Whether we like it or not, we all use the cloud to communicate and to store and process our data. We use dozens of cloud services, sometimes indirectly and unwittingly. We do so because the cloud brings real benefits to individuals and organizations alike. We can access our data across multiple devices, communicate with anyone from anywhere, and command a remote data center’s worth of power from a handheld device.

But using the cloud means our security and privacy now depend on cloud providers. Remember: the cloud is just another way of saying “someone else’s computer.” Cloud providers are single points of failure and prime targets for hackers to scoop up everything from proprietary corporate communications to our personal photo albums and financial documents.

The risks we face from the cloud today are not an accident. For Google to show you your work emails, it has to store many copies across many servers. Even if they’re stored in encrypted form, Google must decrypt them to display your inbox on a webpage. When Zoom coordinates a call, its servers receive and then retransmit the video and audio of all the participants, learning who’s talking and what’s said. For Apple to analyze and share your photo album, it must be able to access your photos.

Hacks of cloud services happen so often that it’s hard to keep up. Breaches can be so large as to affect nearly every person in the country, as in the Equifax breach of 2017, or a large fraction of the Fortune 500 and the U.S. government, as in the SolarWinds breach of 2019-20.

It’s not just attackers we have to worry about. Some companies use their access—benefiting from weak laws, complex software, and lax oversight—to mine and sell our data.

[…]

The less someone knows, the less they can put you and your data at risk. In security this is called Least Privilege. The decoupling principle applies that idea to cloud services by making sure systems know as little as possible while doing their jobs. It states that we gain security and privacy by separating private data that today is unnecessarily concentrated.

To unpack that a bit, consider the three primary modes for working with our data as we use cloud services: data in motion, data at rest, and data in use. We should decouple them all.

Our data is in motion as we exchange traffic with cloud services such as videoconferencing servers, remote file-storage systems, and other content-delivery networks. Our data at rest, while sometimes on individual devices, is usually stored or backed up in the cloud, governed by cloud provider services and policies. And many services use the cloud to do extensive processing on our data, sometimes without our consent or knowledge. Most services involve more than one of these modes.

[…]

Cryptographer David Chaum first applied the decoupling approach in security protocols for anonymity and digital cash in the 1980s, long before the advent of online banking or cryptocurrencies. Chaum asked: how can a bank or a network service provider provide a service to its users without spying on them while doing so?

Chaum’s ideas included sending Internet traffic through multiple servers run by different organizations and divvying up the data so that a breach of any one node reveals minimal information about users or usage. Although these ideas have been influential, they have found only niche uses, such as in the popular Tor browser.

Trust, but Don’t Identify

The decoupling principle can protect the privacy of data in motion, such as financial transactions and Web browsing patterns that currently are wide open to vendors, banks, websites, and Internet Service Providers (ISPs).


  1. Barath orders Bruce’s audiobook from Audible.
  2. His bank does not know what he is buying, but it guarantees the payment.
  3. A third party decrypts the order details but does not know who placed the order.
  4. Audible delivers the audiobook and receives the payment.

DECOUPLED E-COMMERCE: By inserting an independent verifier between the bank and the seller and by blinding the buyer’s identity from the verifier, the seller and the verifier cannot identify the buyer, and the bank cannot identify the product purchased. But all parties can trust that the signed payment is valid.
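
One classic building block behind this kind of blinding is Chaum’s blind signature, mentioned above in the context of digital cash: the bank signs a payment token without seeing what it is signing, and the signature still verifies. Below is a toy sketch of the algebra in Python, using textbook RSA with tiny hard-coded primes and no padding; it is purely illustrative, not a secure or production construction, and the “payment token” is just a number standing in for a real message.

```python
# Toy Chaum-style RSA blind signature (textbook RSA, tiny primes, no padding).
# Illustration of the algebra only -- NOT a secure or production construction.
from math import gcd

# Bank's toy RSA key (p and q are small primes purely for readability).
p, q = 1009, 1013
n = p * q
phi = (p - 1) * (q - 1)
e = 65537
d = pow(e, -1, phi)          # private exponent

# 1. The buyer encodes a payment token as a number m < n and blinds it.
m = 424242                   # toy "payment token"
r = 12345                    # blinding factor, must be coprime to n
assert gcd(r, n) == 1
blinded = (m * pow(r, e, n)) % n

# 2. The bank signs the blinded value; it never sees m itself.
blind_sig = pow(blinded, d, n)

# 3. The buyer unblinds the signature.
sig = (blind_sig * pow(r, -1, n)) % n

# 4. Anyone holding the bank's public key (n, e) can verify the signature on
#    the original token, yet the bank cannot link it to what it actually signed.
assert pow(sig, e, n) == m
print("signature verifies:", pow(sig, e, n) == m)
```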


  1. Bruce’s browser sends a doubly encrypted request for the IP address of sigcomm.org.
  2. A third-party proxy server decrypts one layer and passes on the request, replacing Bruce’s identity with an anonymous ID.
  3. An Oblivious DNS server decrypts the request, looks up the IP address, and sends it back in an encrypted reply.
  4. The proxy server forwards the encrypted reply to Bruce’s browser.
  5. Bruce’s browser decrypts the response to obtain the IP address of sigcomm.org.

DECOUPLED WEB BROWSING: ISPs can track which websites their users visit because requests to the Domain Name System (DNS), which converts domain names to IP addresses, are unencrypted. A new protocol called Oblivious DNS can protect users’ browsing requests from third parties. Each name-resolution request is encrypted twice and then sent to an intermediary (a “proxy”) that strips out the user’s IP address and decrypts the outer layer before passing the request to a domain name server, which then decrypts the actual request. Neither the ISP nor any other computer along the way can see what name is being queried. The Oblivious resolver has the key needed to decrypt the request but no information about who placed it. The resolver encrypts its reply so that only the user can read it.
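
To make the layering concrete, here is a minimal sketch of the double-encryption idea in Python, using symmetric Fernet keys from the third-party cryptography package as stand-ins. It is not the actual Oblivious DNS (or Oblivious DoH) wire format, which uses public-key encryption and HTTPS transport; it only shows how the proxy can forward a query it cannot read while the resolver answers a query whose origin it cannot see. The domain-to-address table and the returned address are placeholders.

```python
# Minimal sketch of the Oblivious DNS layering idea (illustration only).
# Assumes: pip install cryptography. Real ODoH uses public-key encryption and
# HTTPS; symmetric Fernet keys stand in for those here.
from cryptography.fernet import Fernet

# Keys the client is assumed to have provisioned out of band.
proxy_key = Fernet.generate_key()      # shared with the proxy
resolver_key = Fernet.generate_key()   # shared with the resolver

def client_build_query(name: str) -> bytes:
    """Encrypt the query for the resolver, then wrap it for the proxy."""
    inner = Fernet(resolver_key).encrypt(name.encode())   # resolver-only layer
    return Fernet(proxy_key).encrypt(inner)               # proxy-only layer

def proxy_forward(outer: bytes) -> bytes:
    """Proxy strips the outer layer; it sees ciphertext, not the domain name."""
    inner = Fernet(proxy_key).decrypt(outer)
    # The proxy forwards `inner` to the resolver under its own network identity,
    # so the resolver never learns which client asked.
    return inner

def resolver_answer(inner: bytes) -> bytes:
    """Resolver decrypts the query and encrypts the answer back to the client."""
    name = Fernet(resolver_key).decrypt(inner).decode()
    answer = {"sigcomm.org": "192.0.2.1"}.get(name, "0.0.0.0")  # placeholder lookup
    return Fernet(resolver_key).encrypt(answer.encode())

query = client_build_query("sigcomm.org")
reply = resolver_answer(proxy_forward(query))
print(Fernet(resolver_key).decrypt(reply).decode())
```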

Similar methods have been extended beyond DNS to multiparty-relay protocols that protect the privacy of all Web browsing through free services such as Tor and subscription services such as INVISV Relay and Apple’s iCloud Private Relay.

[…]

Meetings that were once held in a private conference room are now happening in the cloud, and third parties like Zoom see it all: who, what, when, where. There’s no reason a videoconferencing company has to learn such sensitive information about every organization it provides services to. But that’s the way it works today, and we’ve all become used to it.

There are multiple threats to the security of that Zoom call. A Zoom employee could go rogue and snoop on calls. Zoom could spy on calls of other companies or harvest and sell user data to data brokers. It could use your personal data to train its AI models. And even if Zoom and all its employees are completely trustworthy, the risk of Zoom getting breached is omnipresent. Whatever Zoom can do with your data in motion, a hacker can do to that same data in a breach. Decoupling data in motion could address those threats.

[…]

Most storage and database providers started encrypting data on disk years ago, but that’s not enough to ensure security. In most cases, the data is decrypted every time it is read from disk. A hacker or malicious insider silently snooping at the cloud provider could thus intercept your data despite it having been encrypted.

Cloud-storage companies have at various times harvested user data for AI training or to sell targeted ads. Some hoard it and offer paid access back to us or just sell it wholesale to data brokers. Even the best corporate stewards of our data are getting into the advertising game, and the decade-old feudal model of security—where a single company provides users with hardware, software, and a variety of local and cloud services—is breaking down.

Decoupling can help us retain the benefits of cloud storage while keeping our data secure. As with data in motion, the risks begin with the access the provider has to raw data (or that hackers gain in a breach). End-to-end encryption, with the end user holding the keys, ensures that the cloud provider can’t independently decrypt data from disk.
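
As a sketch of what “the end user holding the keys” looks like in practice, the snippet below encrypts data locally before it ever reaches a storage provider, again using the cryptography package; a plain dictionary stands in for the provider’s bucket. Real systems layer key management, sharing, and rotation on top of this, but the core point is that the provider only ever stores ciphertext.

```python
# Minimal sketch of client-side ("end-to-end") encryption for cloud storage.
# Assumes: pip install cryptography. A dict stands in for the storage provider.
from cryptography.fernet import Fernet

cloud_bucket = {}                      # stand-in for a remote bucket: stores bytes

# The key lives only with the user (e.g., derived from a passphrase or kept in
# a local keystore); the provider never sees it.
user_key = Fernet.generate_key()

def upload(name: str, plaintext: bytes) -> None:
    """Encrypt locally, then hand only ciphertext to the provider."""
    cloud_bucket[name] = Fernet(user_key).encrypt(plaintext)

def download(name: str) -> bytes:
    """Fetch ciphertext from the provider and decrypt it locally."""
    return Fernet(user_key).decrypt(cloud_bucket[name])

upload("notes.txt", b"meeting moved to 3pm")
print(cloud_bucket["notes.txt"][:20], "...")   # provider-visible: opaque bytes
print(download("notes.txt"))                   # user-visible: the plaintext
```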

[…]

Modern protocols for decoupled data storage, like Tim Berners-Lee’s Solid, provide this sort of security. Solid is a protocol for distributed personal data stores, called pods. By giving users control over both where their pod is located and who has access to the data within it—at a fine-grained level—Solid ensures that data is under user control even if the hosting provider or app developer goes rogue or has a breach. In this model, users and organizations can manage their own risk as they see fit, sharing only the data necessary for each particular use.

[…]

The last few years have seen the advent of general-purpose, hardware-enabled secure computation. This is powered by special functionality on processors known as trusted execution environments (TEEs) or secure enclaves. TEEs decouple who runs the chip (a cloud provider, such as Microsoft Azure) from who secures the chip (a processor vendor, such as Intel) and from who controls the data being used in the computation (the customer or user). A TEE can keep the cloud provider from seeing what is being computed. The results of a computation are sent via a secure tunnel out of the enclave or encrypted and stored. A TEE can also generate a signed attestation that it actually ran the code that the customer wanted to run.

With TEEs in the cloud, the final piece of the decoupling puzzle drops into place. An organization can keep and share its data securely at rest, move it securely in motion, and decrypt and analyze it in a TEE such that the cloud provider doesn’t have access. Once the computation is done, the results can be reencrypted and shipped off to storage. CPU-based TEEs are now widely available among cloud providers, and soon GPU-based TEEs—useful for AI applications—will be common as well.
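
The flow described here can be sketched as follows. This simulation keeps everything in ordinary Python: the “enclave” is just a function and the “attestation” is a toy hash of its bytecode, whereas real TEEs (Intel SGX, AMD SEV-SNP, Intel TDX) enforce the boundary in hardware and produce signed attestations. It is only meant to show the shape of encrypt-in, compute-inside, encrypt-out.

```python
# Sketch of the decoupled-compute flow: data arrives encrypted, is decrypted
# only "inside" the enclave, and results leave encrypted. The enclave here is
# an ordinary function and the attestation is a toy hash; this is a simulation,
# not real TEE usage. Assumes: pip install cryptography.
import hashlib
from cryptography.fernet import Fernet

# Key held by the data owner. In a real deployment it would be released to the
# enclave only after the owner verified the enclave's signed attestation.
data_key = Fernet.generate_key()

def enclave_average(encrypted_records: list[bytes]) -> tuple[bytes, str]:
    """Pretend-TEE: decrypts inputs, computes, and re-encrypts the result."""
    values = [int(Fernet(data_key).decrypt(r)) for r in encrypted_records]
    result = str(sum(values) / len(values)).encode()
    # Toy "attestation": a digest of the code that ran, so the owner can check
    # which computation produced the result.
    measurement = hashlib.sha256(enclave_average.__code__.co_code).hexdigest()
    return Fernet(data_key).encrypt(result), measurement

# The data owner encrypts records before handing them to the cloud.
records = [Fernet(data_key).encrypt(str(v).encode()) for v in (70_000, 85_000, 92_000)]

encrypted_result, attestation = enclave_average(records)
print("attestation:", attestation[:16], "...")
print("result:", Fernet(data_key).decrypt(encrypted_result).decode())
```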

[…]

Decoupling also allows us to look at security more holistically. For example, we can dispense with the distinction between security and privacy. Historically, privacy meant freedom from observation, usually for an individual person. Security, on the other hand, was about keeping an organization’s data safe and preventing an adversary from doing bad things to its resources or infrastructure.

There are still rare instances where security and privacy differ, but organizations and individuals are now using the same cloud services and facing similar threats. Security and privacy have converged, and we can usefully think about them together as we apply decoupling.

[…]

Decoupling isn’t a panacea. There will always be new, clever side-channel attacks. And most decoupling solutions assume a degree of noncollusion between independent companies or organizations. But that noncollusion is already an implicit assumption today: we trust that Google and Advanced Micro Devices will not conspire to break the security of the TEEs they deploy, for example, because the reputational harm from being found out would hurt their businesses. The primary risk, real but also often overstated, is if a government secretly compels companies to introduce backdoors into their systems. In an age of international cloud services, this would be hard to conceal and would cause irreparable harm.

[…]

Imagine that individuals and organizations held their credit data in cloud-hosted repositories that enable fine-grained encryption and access control. Applying for a loan could then take advantage of all three modes of decoupling. First, the user could employ Solid or a similar technology to grant access to Equifax and a bank only for the specific loan application. Second, the communications to and from secure enclaves in the cloud could be decoupled and secured to conceal who is requesting the credit analysis and the identity of the loan applicant. Third, computations by a credit-analysis algorithm could run in a TEE. The user could use an external auditor to confirm that only that specific algorithm was run. The credit-scoring algorithm might be proprietary, and that’s fine: in this approach, Equifax doesn’t need to reveal it to the user, just as the user doesn’t need to give Equifax access to unencrypted data outside of a TEE.

Building this is easier said than done, of course. But it’s practical today, using widely available technologies. The barriers are more economic than technical.

[…]

One of the challenges of trying to regulate tech is that industry incumbents push for tech-only approaches that simply whitewash bad practices. For example, when Facebook rolls out “privacy-enhancing” advertising, but still collects every move you make, has control of all the data you put on its platform, and is embedded in nearly every website you visit, that privacy technology does little to protect you. We need to think beyond minor, superficial fixes.

Decoupling might seem strange at first, but it’s built on familiar ideas. Computing’s main tricks are abstraction and indirection. Abstraction involves hiding the messy details of something inside a nice clean package: when you use Gmail, you don’t have to think about the hundreds of thousands of Google servers that have stored or processed your data. Indirection involves creating a new intermediary between two existing things, such as when Uber wedged its app between passengers and drivers.

The cloud as we know it today is born of three decades of increasing abstraction and indirection. Communications, storage, and compute infrastructure for a typical company were once run on a server in a closet. Next, companies no longer had to maintain a server closet, but could rent a spot in a dedicated colocation facility. After that, colocation facilities decided to rent out their own servers to companies. Then, with virtualization software, companies could get the illusion of having a server while actually just running a virtual machine on a server they rented somewhere. Finally, with serverless computing and most types of software as a service, we no longer know or care where or how software runs in the cloud, just that it does what we need it to do.

[…]

We’re now at a turning point where we can add further abstraction and indirection to improve security, turning the tables on the cloud providers and taking back control as organizations and individuals while still benefiting from what they do.

The needed protocols and infrastructure exist, and there are services that can do all of this already, without sacrificing the performance, quality, and usability of conventional cloud services.

But we cannot just rely on industry to take care of this. Self-regulation is a time-honored stall tactic: a piecemeal or superficial tech-only approach would likely undermine the will of the public and regulators to take action. We need a belt-and-suspenders strategy, with government policy that mandates decoupling-based best practices, a tech sector that implements this architecture, and public awareness of both the need for and the benefits of this better way forward.

Source: Essays: Decoupling for Security – Schneier on Security

European digital identity: Council and Parliament reach a provisional agreement on eID

[…]

Under the new law, member states will offer citizens and businesses digital wallets that will be able to link their national digital identities with proof of other personal attributes (e.g., driving licence, diplomas, bank account). Citizens will be able to prove their identity and share electronic documents from their digital wallets with a click of a button on their mobile phone.

The new European digital identity wallets will enable all Europeans to access online services with their national digital identification, which will be recognised throughout Europe, without having to use private identification methods or unnecessarily sharing personal data. User control ensures that only information that needs to be shared will be shared.

Concluding the initial provisional agreement

Since the initial provisional agreement on some of the main elements of the legislative proposal at the end of June this year, a thorough series of technical meetings followed in order to complete a text that allowed the finalisation of the file in full. Some relevant aspects agreed by the co-legislators today are:

  • the e-signatures: the wallet will be free to use for natural persons by default, but member states may provide for measures to ensure that the free-of-charge use is limited to non-professional purposes
  • the wallet’s business model: the issuance, use and revocation will be free of charge for all natural persons
  • the validation of electronic attestation of attributes: member states shall provide free-of-charge validation mechanisms only to verify the authenticity and validity of the wallet and of the relying parties’ identity
  • the code for the wallets: the application software components will be open source, but member states are granted necessary leeway so that, for justified reasons, specific components other than those installed on user devices may not be disclosed
  • consistency between the wallet as an eID means and the underpinning scheme under which it is issued has been ensured

Finally, the revised law clarifies the scope of the qualified web authentication certificates (QWACs), which ensures that users can verify who is behind a website, while preserving the current well-established industry security rules and standards.

Next steps

Technical work will continue to complete the legal text in accordance with the provisional agreement. When finalised, the text will be submitted to the member states’ representatives (Coreper) for endorsement. Subject to a legal/linguistic review, the revised regulation will then need to be formally adopted by the Parliament and the Council before it can be published in the EU’s Official Journal and enter into force.

[…]

Source: European digital identity: Council and Parliament reach a provisional agreement on eID – Consilium

What does that free vs ad supported Facebook / Instagram warning mean, why is it there?


In the EU, Meta is now showing a warning saying that you must either pay for an expensive ad-free version or continue using targeted adverts. Strangely, considering Meta makes its profits by selling your information, you don’t get the option to be paid a cut of those profits. Even more strangely, not many people are covering it. Below is a pretty good writeup of the situation, but what is not clear is whether, by agreeing to the free version, things continue as they are, or whether you are signing up for additional invasions of your privacy, such as having your information sent to servers in the USA.

Even though it’s a seriously and strangely underreported phenomenon, people are leaving Meta for fear (justly or unjustly) of further intrusions into their privacy by the slurping behemoth.

Why is Meta launching an ad-free plan for Instagram and Facebook?

After receiving major backlash from the European Union in January 2023, resulting in a €377 million fine for the tech giant, Meta has since adapted its applications to suit EU regulations. These major adaptations have all led to the recent launch of its ad-free subscription service.

This most recent announcement comes to keep in line with the European Union’s Digital Markets Act legislation. The legislation requires companies to give users the option to give consent before being tracked for advertising reasons, something Meta previously wasn’t doing.

As a way of complying with this rule while also sustaining its ad-supported business model, Meta is now releasing an ad-free subscription service for users who don’t want targeted ads showing up on their Instagram and Facebook feeds while also putting some more cash in the company’s pocket.

How much will the ad-free plan cost on Instagram and Facebook?


The price depends on where you purchase the subscription. If you purchase the ad-free plan from Meta for your desktop, then the plan will cost €9.99/month. If you purchase on your Android or iOS device, the plan will cost €12.99/month. Presumably, this is because Apple and Google charge fees, and Meta is passing those fees along to the user instead of taking a hit on its profit.

If I buy the plan on desktop, will the subscription carry over to my phone?

Yes! It’s confusing at first, but no matter where you sign up for your subscription, it will automatically link to all your Meta accounts, allowing you to view ad-free content on every device. Essentially, if you have access to a desktop and are interested in signing up for the ad-free plan, you’re better off signing up there, as you’ll save some money.

When will the ad-free plan be available to Instagram and Facebook users?

The subscription will be available for users in November 2023. Meta didn’t announce a specific date.

“In November, we will be offering people who use Facebook or Instagram and reside in these regions the choice to continue using these personalised services for free with ads, or subscribe to stop seeing ads.”

Can I still use Instagram and Facebook without subscribing to Meta’s ad-free plan?

Meta’s statement said that it believes “in an ad-supported internet, which gives people access to personalized products and services regardless of their economic status.” Staying true to its beliefs, Meta will still allow users to use its services for free with ads.


However, it’s important to note that Meta mentioned in its statement, “Beginning March 1, 2024, an additional fee of €6/month on the web and €8/month on iOS and Android will apply for each additional account listed in a user’s Account Center.” So, for now, the subscription will cover accounts on all platforms, but the cost will rise in the future for users with more than one account.

Which countries will get the new ad-free subscription option?

The below countries can access Meta’s new subscription:

Austria, Belgium, Bulgaria, Croatia, Republic of Cyprus, Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Iceland, Ireland, Italy, Latvia, Liechtenstein, Lithuania, Luxembourg, Malta, Norway, Netherlands, Poland, Portugal, Romania, Slovakia, Slovenia, Spain, Switzerland and Sweden.

Will Meta launch this ad-free plan outside the EU and Switzerland?

It’s unknown at the moment whether Meta plans to expand this service into any other regions. Currently, the only regions able to subscribe to an ad-free plan are those listed above, but if it’s successful in those countries, it’s possible that Meta could roll it out in other regions.

What’s the difference between Meta Verified and this ad-free plan?

Launched in early 2023, Meta Verified allows Facebook and Instagram users to pay for a blue tick mark next to their name. Yes, the same tick mark most celebrities with major followings typically have. This subscription service was launched as a way for users to protect their accounts and promote their businesses. Meta Verified costs $14.99/month (€14/month). It gives users the blue tick mark and provides extra account support and protection from impersonators.


While Meta Verified offers several unique account privacy features for users, it doesn’t offer an ad-free subscription. Currently, those subscribed to Meta Verified must also pay for an ad-free account if they live in one of the supported countries.

How can I sign up for Meta’s ad-free plan for Instagram and Facebook?

Users can sign up for the ad-free subscription via their Facebook or Instagram accounts. Here’s what you need to sign up:

  1. Go to account settings on Facebook or Instagram.
  2. Click subscribe on the ad-free plan under the subscriptions tab (once it’s available).

If I choose not to subscribe, will I receive more ads than I do now?

Meta says that nothing will change about your current account if you choose to keep your account as is, meaning you don’t subscribe to the ad-free plan. In other words, you’ll see exactly the same amount of ads you’ve always seen.

How will this affect other social media platforms?

Paid subscriptions seem to be the trend among many social media platforms in the past couple of years. Snapchat hopped onto the trend early in the Summer of 2022 when they released Snapchat+, which allows premium users to pay $4/month to see where they rank on their friends’ best friends list, boost their stories, pin friends as their top best friends, and further customize their settings.

More notably, Twitter, famously bought by Elon Musk, who has since rebranded the platform to “X,” released three different tiers of subscriptions meant to improve a user’s experience. The tiers include Basic, Premium, and Premium+. X’s latest release, the Premium+ tier, allows users to pay $16/month for an ad-free experience and the ability to edit or undo their posts.


Other major apps, such as TikTok, have yet to announce any ad-free subscription plans, although it wouldn’t be shocking if they followed suit.

For Meta’s part, it claims to want its websites to remain free and ad-supported, but we’ll see how long that lasts, especially if its first two subscription offerings succeed.

This is the spin Facebook itself gives on the story: Facebook and Instagram to Offer Subscription for No Ads in Europe

What is also noteworthy is that this comes as YouTube is installing spyware onto your computer to figure out whether you are running an ad blocker – also something not receiving enough attention.

See also: Privacy advocate challenges YouTube’s ad blocking detection (which isn’t spyware)

and YouTube cares less for your privacy than its revenues

Time to switch to alternatives!

Data broker’s staggering sale of sensitive info exposed in unsealed FTC filing

[…]

The FTC has accused Kochava of violating the FTC Act by amassing and disclosing “a staggering amount of sensitive and identifying information about consumers,” alleging that Kochava’s database includes products seemingly capable of identifying nearly every person in the United States.

According to the FTC, Kochava’s customers, ostensibly advertisers, can access this data to trace individuals’ movements—including to sensitive locations like hospitals, temporary shelters, and places of worship, with a promised accuracy within “a few meters”—over a day, a week, a month, or a year. Kochava’s products can also provide a “360-degree perspective” on individuals, unveiling personally identifying information like their names, home addresses, phone numbers, as well as sensitive information like their race, gender, ethnicity, annual income, political affiliations, or religion, the FTC alleged.

Beyond that, the FTC alleged that Kochava also makes it easy for advertisers to target customers by categories that are “often based on specific sensitive and personal characteristics or attributes identified from its massive collection of data about individual consumers.” These “audience segments” allegedly allow advertisers to conduct invasive targeting by grouping people not just by common data points like age or gender, but by “places they have visited,” political associations, or even their current circumstances, like whether they’re expectant parents. Or advertisers can allegedly combine data points to target highly specific audience segments like “all the pregnant Muslim women in Kochava’s database,” the FTC alleged, or “parents with different ages of children.”

[…]

According to the FTC, Kochava obtains data “from a myriad of sources, including from mobile apps and other data brokers,” which together allegedly connects a web of data that “contains information about consumers’ usage of over 275,000 mobile apps.”

The FTC alleged that this usage data is also invasive, allowing Kochava customers to track not just what apps a customer uses, but how long they’ve used the apps, what they do in the apps, and how much money they spent in the apps.

[…]

Kochava “actively promotes its data as a means to evade consumers’ privacy choices,” the FTC alleged. Further, the FTC alleged that there are no real ways for consumers to opt out of Kochava’s data marketplace, because even resetting their mobile advertising IDs—the data point that’s allegedly most commonly used to identify users in its database—won’t stop Kochava customers from using its products to determine “other points to connect to and securely solve for identity.”

[…]

Kochava hoped the court would impose sanctions on the FTC because Kochava argued that many of the FTC’s allegations were “knowingly false.” But Winmill wrote that the bar for imposing sanctions is high, requiring that Kochava show that the FTC’s complaint was not just implausibly pled, but “clearly frivolous,” raised “without legal foundation,” or “brought for an improper purpose.”

In the end, Winmill denied the request for sanctions, partly because the court could not identify a “single” allegation in the FTC complaint flagged by Kochava as false that actually appeared “false or misleading,” the judge wrote.

Instead, it seemed like Kochava was attempting to mislead the court.

[…]

“The Court concludes that the FTC’s legal and factual allegations are not frivolous,” Winmill wrote, dismissing Kochava’s motion for sanctions. The judge concluded that Kochava’s claims that the FTC intended to harass and generate negative publicity about the data broker were ultimately “long on hyperbole and short on facts.”

Source: Data broker’s “staggering” sale of sensitive info exposed in unsealed FTC filing | Ars Technica

US Court rules automakers can record and save owner text messages and call logs

A federal judge on Tuesday refused to bring back a class action lawsuit alleging four auto manufacturers had violated Washington state’s privacy laws by using vehicles’ on-board infotainment systems to record and intercept customers’ private text messages and mobile phone call logs.

The Seattle-based appellate judge ruled that the practice does not meet the threshold for an illegal privacy violation under state law, handing a big win to automakers Honda, Toyota, Volkswagen and General Motors, which are defendants in five related class action suits focused on the issue. One of those cases, against Ford, had been dismissed on appeal previously.

The plaintiffs in the four live cases had appealed a prior judge’s dismissal. But the appellate judge ruled Tuesday that the interception and recording of mobile phone activity did not meet the Washington Privacy Act’s standard that a plaintiff must prove that “his or her business, his or her person, or his or her reputation” has been threatened.

In an example of the issues at stake, plaintiffs in one of the five cases filed suit against Honda in 2021, arguing that beginning in at least 2014 infotainment systems in the company’s vehicles began downloading and storing a copy of all text messages on smartphones when they were connected to the system.

An Annapolis, Maryland-based company, Berla Corporation, provides the technology to some car manufacturers but does not offer it to the general public, the lawsuit said. Once messages are downloaded, Berla’s software makes it impossible for vehicle owners to access their communications and call logs but does provide law enforcement with access, the lawsuit said.

Many car manufacturers are selling car owners’ data to advertisers as a revenue boosting tactic, according to earlier reporting by Recorded Future News. Automakers are exponentially increasing the number of sensors they place in their cars every year with little regulation of the practice.

Source: Court rules automakers can record and intercept owner text messages

WhatsApp will let you hide your IP address from whoever you call

A new feature in WhatsApp will let you hide your IP address from whoever you call using the app. Knowing someone’s IP address can reveal a lot of personal information such as their location and internet service provider, so having the option to hide it is a major privacy win. “This new feature provides an additional layer of privacy and security geared towards our most privacy-conscious users,” WhatsApp wrote in a blog post.

WhatsApp currently relays calls either through its own servers or by establishing a direct connection called peer-to-peer with whoever you are calling depending on network conditions. Peer-to-peer calls often provide better voice quality, but require both devices to know each other’s IP addresses.

Once you turn on the new feature, known simply as “Protect IP address in calls”, however, WhatsApp will always relay your calls through its own servers rather than establishing a peer-to-peer connection, even if it means a slight hit to sound quality. All calls will continue to remain end-to-end encrypted, even if they go through WhatsApp’s servers, the company said.

WhatsApp has been adding more privacy features over the last few months. In June, the company added a feature that let people automatically silence unknown callers. It also introduced a “Privacy Checkup” section to allow users to tune up a host of privacy settings from a single place in the app, and earlier this year, added a feature that lets people lock certain chats with a fingerprint or facial recognition.

Source: WhatsApp will let you hide your IP address from whoever you call

So this means that Meta / Facebook / WhatsApp will now know who you are calling once you turn this privacy feature on. To gain some privacy from the person you are calling, you sacrifice privacy to Meta.

In other news, it’s easy to find the IP address of someone you are whatsapping with

EU Commission’s nameless experts behind its “spy on all EU citizens” *cough* “child sexual abuse” law

The EU Ombudsman has found a case of maladministration in the European Commission’s refusal to provide the list of experts (a list whose existence it first denied) with whom it worked in drafting the regulation to detect and remove online child sexual abuse material.

Last December, the Irish Council for Civil Liberties (ICCL) filed complaints to the European Ombudsman against the European Commission for refusing to provide the list of external experts involved in drafting the regulation to detect and remove online child sexual abuse material (CSAM).

Consequently, the Ombudsman concluded that “the Commission’s failure to identify the list of experts as falling within the scope of the complainant’s public access request constitutes maladministration”.

The EU watchdog also slammed the Commission for not respecting the deadlines for handling access to document requests, delays that have become somewhat systematic.

The Commission told the Ombudsman inquiry team during a meeting that the requests by the ICCL “seemed to be requests to justify a political decision rather than requests for public access to a specific set of documents”.

The request was about getting access to the list of experts the Commission was in consultations with and who also participated in meetings with the EU Internet Forum, which took place in 2020, according to an impact assessment report dated 11 May 2022.

The main political groups of the EU Parliament reached an agreement on the draft law to prevent the dissemination of online child sexual abuse material (CSAM) on Tuesday (24 October).

The list of experts was of public interest because independent experts have stated on several occasions that detecting CSAM in private communications without violating encryption would be impossible.

The Commission, however, suggested otherwise in their previous texts, which has sparked controversy ever since the introduction of the file last year.

During the meetings, “academics, experts and companies were invited to share their perspectives on the matter as well as any documents that could be valuable for the discussion.”

Based on these discussions, and both oral and written inputs, an “outcome document” was produced, the Commission said.

According to a report about the meeting between the Commission and the Ombudsman, this “was the only document that was produced in relation to these workshops.”

The phantom list

While a list of participants does exist, it was not disclosed “for data protection and public security reasons, given the nature of the issues discussed”, the Commission said, according to the EU Ombudsman.

Besides security reasons, participants were also concerned about their public image, the Commission told the EU Ombudsman, adding that “disclosure could be exploited by malicious actors to circumvent detection mechanisms and moderation efforts by companies”.

Moreover, “revealing some of the strategies and tactics of companies, or specific technical approaches also carries a risk of informing offenders on ways to avoid detection”.

However, the existence of this list was at first denied by the Commission.

Kris Shrishak, senior fellow at the Irish Council for Civil Liberties, told Euractiv that the Commission had told him that no such list exists. However, later on, he was told by the EU Ombudsman that that was not correct since they found a list of experts.

The only reason the ICCL learned that there is a list is because of the Ombudsman, Shrishak emphasised.

Previously, the Commission said there were email exchanges about the meetings, which contained only the links to the online meetings.

“Following the meeting with the Ombudsman inquiry team, the Commission tried to retrieve these emails” but since they were more than two years old at the time, “they had already been deleted in line with the Commission’s retention policy” and were “not kept on file”.

Euractiv reached out to the European Commission for a comment but did not get a response by the time of publication.

Source: EU Commission’s nameless experts behind its child sexual abuse law – EURACTIV.com

This law is an absolute travesty – it invokes the poor children (how can we not protect them!) whilst in reality being a wholesale surveillance law pushed through by nameless faces and unelected officials.

See also: EU Tries to Implement Client-Side Scanning (Death to Encryption) by Personalised Targeting of EU Residents with Misleading Ads

They basically want to spy on all electronic signals. All of them. Without a judge.

Researchers posed as foreign actors, and data brokers sold them information on military servicemembers anyway – for pennies

[…]

Researchers at Duke University released a study on Monday tracking what measures data brokers have in place to prevent unidentified or potentially malign actors from buying personal data on members of the military. As it turns out, the answer is often few to none — even when the purchaser is actively posing as a foreign agent.

A 2021 Duke study by the same lead researcher revealed that data brokers advertised that they had access to — and were more than happy to sell — information on US military personnel. In this more recent study, researchers used wiped computers, VPNs, burner phones bought with cash and other means of identity obfuscation to go undercover. They scraped the websites of data brokers to see which were likely to have available data on servicemembers. Then they attempted to make those purchases, posing as two entities: datamarketresearch.org and dataanalytics.asia. With little or no vetting, several of the brokers transferred the requested data not only to the presumptively Chicago-based datamarketresearch, but also to the server of the .asia domain, which was located in Singapore. The records cost only 12 to 32 cents apiece.

The sensitive information included health records and financial information. Location data was also available, although the team at Duke decided not to purchase that — though it’s not clear if this was for financial or ethical reasons. “Access to this data could be used by foreign and malicious actors to target active-duty military personnel, veterans, and their families and acquaintances for profiling, blackmail, targeting with information campaigns, and more,” the report cautions. At an individual level, this could also include identity theft or fraud.

This gaping hole in our national security apparatus is due in large part to the absence of comprehensive federal regulations governing either individual data privacy, or much of the business practices engaged in by data brokers. Senators Elizabeth Warren, Bill Cassidy and Marco Rubio introduced the Protecting Military Service Members’ Data Act in 2022 to give power to the Federal Trade Commission to prevent data brokers from selling military personnel information to adversarial nations. They reintroduced the bill in March 2023 after it stalled out. Despite bipartisan support, it still hasn’t made it past the introduction phase.

Source: Researchers posed as foreign actors, and data brokers sold them information on military servicemembers anyway

YouTube cares less for your privacy than its revenues

YouTube wants its pound of flesh. Disable your ad blocker or pay for Premium, warns a new message being shown to an unsuspecting test audience, with the barely hidden subtext of “you freeloading scum.” Trouble is, its ad blocker detecting mechanism doesn’t exactly comply with EU law, say privacy activists. Ask for user permission or taste regulatory boot. All good clean fun.


Only it isn’t. It’s profoundly depressing. The battleground between ad tech and ad blockers has been around so long that in the internet’s time span it’s practically medieval. In 2010, Ars Technica started blocking ad blockers; in under a day, the ad blocker blocker was itself blocked by the ad blockers. The editor then wrote an impassioned plea saying that ad blockers were killing online journalism. As the editor ruefully noted, people weren’t using blockers because they didn’t care about the good sites; it was because so much else of the internet was filled with ad tech horrors.

Nothing much has changed. If your search hit ends up with an “ERROR: Ad blocker detected. Disable it to access this content” then it’s browser back button and next hit down, all day, every day. It’s like running an app that asks you to disable your firewall; that app is never run again. Please disable my ad blocker? Sure, if you stop pushing turds through my digital letterbox.

The reason YouTube has been dabbling with its own “Unblock Or Eff Off” strategy instead of bringing down the universal banhammer is that it knows how much it will upset the balance of the ecosystem. That it’s had to pry deep enough into viewers’ browsers to trigger privacy laws shows just how delicate that balance is. It’s unstable because it’s built on bad ideas.

In that ecosystem of advertisers, content consumers, ad networks, and content distributors, ad blockers aren’t the disease, they’re the symptom. Trying to neutralize a symptom alone leaves the disease thriving while the host just gets sicker. In this case, the disease isn’t cynical freeloading by users, it’s the basic dishonesty of online advertising. It promises things to advertisers that it cannot deliver, while blocking better ways of working. It promises revenue to content providers while keeping them teetering on the brink of unviability, while maximizing its own returns. Google has revenues in the hundreds of billions of dollars, while publishers struggle to survive, and users have to wear a metaphorical hazmat suit to stay sane. None of this is healthy.

Content providers have to be paid. We get that. Advertising is a valid way of doing that. We get that too. Advertisers need to reach audiences. Of course they do. But like this? YouTube needs its free, ad-supported model, or it would just force Premium on everyone, but forcing people to watch adverts will not force them to pony up for what’s being advertised.

The pre-internet days saw advertising directly support publishers who knew how to attract the right audiences who would respond well to the right adverts. Buy a computer magazine and it would be full of adverts for computer stuff – much of which you’d actually want to look at. The publisher didn’t demand you have to see ads for butter or cars or some dodgy crypto. That model has gone away, which is why we need ad blockers.

YouTube’s business model is a microcosm of the bigger ad tech world, where it basically needs to spam millions to generate enough results for its advertisers. It cannot stomach ad blockers, but it can’t neutralize them technically or legally. So it should treat them like the cognitive firewalls they are. If YouTube developed ways to control what and how adverts appeared back into the hands of its content providers and viewers, perhaps we’d tell our ad blockers to leave YouTube alone – punch that hole through the firewall for the service you trust. We’d get to keep blocking things that needed to be blocked, content makers could build their revenues by making better content, and advertisers would get a much better return on their ad spend.

Of course, this wouldn’t provide the revenues to YouTube or the ad tech business obtainable by being spammy counterfeits of responsible companies with a lock on the market. That a harmful business model makes a shipload of money does not make it good; in fact, quite the reverse.

So, to YouTube we say: you appear to be using a bad lock-in. Disable it, or pay the price.

Source: YouTube cares less for your privacy than its revenues • The Register

EU Tries to Implement Client-Side Scanning (Death to Encryption) by Personalised Targeting of EU Residents with Misleading Ads

The EU Commission has been pushing client-side scanning for well over a year. This new intrusion into private communications has been pitched as perhaps the only way to prevent the sharing of child sexual abuse material (CSAM).

Mandates proposed by the EU government would have forced communication services to engage in client-side scanning of content. This would apply to every communication or service provider. But it would only negatively affect providers incapable of snooping on private communications because their services are encrypted.

Encryption — especially end-to-end encryption — protects the privacy and security of users. The EU’s pitch said protecting the children mattered more than anything else, even if it meant sacrificing the privacy and security of millions of EU residents.

Encrypted services would have been unable to comply with the mandate without stripping the client-side end from their end-to-end encryption. So, while it may have been referred to with the legislative euphemism “chat control” by EU lawmakers, the reality of the situation was that this bill — if passed intact — basically would have outlawed E2EE.

Fortunately, there was a lot of pushback. Some of it came from service providers who informed the EU they would no longer offer their services in EU member countries if they were required to undermine the security they provided for their users.

The more unexpected resistance came from EU member countries who similarly saw the gaping security hole this law would create and wanted nothing to do with it. On top of that, the EU government’s own lawyers told the Commission passing this law would mean violating other laws passed by this same governing body.

This pushback was greeted by increasingly nonsensical assertions by the bill’s supporters. In op-eds and public statements, backers insisted everyone else was wrong and/or didn’t care enough about the well-being of children to subject every user of any communication service to additional government surveillance.

That’s what happened on the front end of this push to create a client-side scanning mandate. On the back end, however, the EU government was trying to dupe people into supporting their own surveillance with misleading ads that targeted people most likely to believe any sacrifice of their own was worth making when children were on the (proverbial) line.

That’s the unsettling news being delivered to us by Vas Panagiotopoulos for Wired. A security researcher based in Amsterdam took a long look at apparently misleading ads that began appearing on Twitter as the EU government amped up its push to outlaw encryption.

Danny Mekić was digging into the EU’s “chat control” law when he began seeing disturbing ads on Twitter. These ads featured young women being (apparently) menaced by sinister men, backed by a similarly dark background and soundtrack. The ads displayed some supposed “facts” about the sexual abuse of children and ended with the notice that the ads had been paid for by the EU Commission.

The ads also cited survey results that supposedly said most European citizens supported client-side scanning of content and communications, apparently willing to sacrifice their own privacy and security for the common good.

But Mekić dug deeper and discovered the cited survey wasn’t on the level.

Following closer inspection, he discovered that these findings appeared biased and otherwise flawed. The survey results were gathered by misleading the participants, he claims, which in turn may have misled the recipients of the ads; the conclusion that EU citizens were fine with greater surveillance couldn’t be drawn from the survey, and the findings clashed with those of independent polls.

This discovery prompted Mekić to dig even deeper. What Mekić found was that the ads were very tightly targeted — so tightly targeted, in fact, that they could not have been deployed in this manner without violating European laws aimed at preventing exactly this sort of targeting, i.e. targeting based on “sensitive data” like religious beliefs and political affiliations.

The ads were extremely targeted, meant to find people most likely to be swayed towards the EU Commission’s side, either because the targets never appeared to distrust their respective governments or because their governments had yet to tell the EU Commission to drop its anti-encryption proposal.

Mekić found that the ads were meant to be seen by select targets, such as top ministry officials, while they were concealed from people interested in Julian Assange, Brexit, EU corruption, Eurosceptic politicians (Marine Le Pen, Nigel Farage, Viktor Orban, Giorgia Meloni), the German right-wing populist party AfD, and “anti-Christians.”

Mekić then found out that the ads, which have garnered at least 4 million views, were only displayed in seven EU countries: the Netherlands, Sweden, Belgium, Finland, Slovenia, Portugal, and the Czech Republic.

A document leaked earlier this year exposed which EU members were in favor of client-side scanning and its attendant encryption backdoors, as well as those who thought the proposed mandate was completely untenable.

The countries targeted by the EU Commission ad campaign are, for the most part, supportive of/indifferent to broken encryption, client-side scanning, and expanded surveillance powers. Slovenia, along with Spain, Cyprus, Lithuania, Croatia, and Hungary, was firmly in favor of bringing an end to end-to-end encryption.

[…]

While we’re accustomed to politicians airing misleading ads during election runs, this is something different. This is the representative government of several nations deliberately targeting countries and residents it apparently thinks might be receptive to its skewed version of the facts, which comes in the form of the presentation of misleading survey results against a backdrop of heavily-implied menace. And that’s on top of seeming violations of privacy laws regarding targeted ads that this same government body created and ratified.

It’s a tacit admission EU proposal backers think they can’t win this thing on its merits. And they can’t. The EU Commission has finally ditched its anti-encryption mandates after months of backlash. For the moment, E2EE survives in Europe. But it’s definitely still under fire. The next exploitable tragedy will bring with it calls to reinstate this part of the “chat control” proposal. It will never go away because far too many governments believe their citizens are obligated to let these governments shoulder-surf whenever they deem it necessary. And about the only thing standing between citizens and that unceasing government desire is end-to-end encryption.

Source: EU Pitched Client-Side Scanning By Targeting Certain EU Residents With Misleading Ads | Techdirt

As soon as you read that legislation is ‘for the kids’, be very, very wary – it’s usually for something completely beyond that remit. And this kind of legislation is the installation of Big Brother on every single communications line you use.

Drugmakers Are Set To Pay 23andMe Millions To Access Your DNA – which is also your family’s DNA

GSK will pay 23andMe $20 million for access to the genetic-testing company’s vast trove of consumer DNA data, extending a five-year collaboration that’s allowed the drugmaker to mine genetic data as it researches new medications.

Under the new agreement, 23andMe will provide GSK with one year of access to anonymized DNA data from the approximately 80% of gene-testing customers who have agreed to share their information for research, 23andMe said in a statement Monday. The genetic-testing company will also provide data-analysis services to GSK.

23andMe is best known for its DNA-testing kits that give customers ancestry and health information. But the DNA it collects is also valuable, including for scientific research. With information from more than 14 million customers, the only data sets that rival the size of the 23andMe library belong to Ancestry.com and the Chinese government. The idea for drugmakers is to comb the data for hints about genetic pathways that might be at the root of disease, which could significantly speed up the long, slow process of drug development. GSK and 23andMe have already taken one potential medication to clinical trials: a cancer drug that works to block CD96, a protein that helps modulate the body’s immune responses. It entered that testing phase in four years, compared to an industry average of about seven years. Overall, the partnership between GSK and 23andMe has produced more than 50 new drug targets, according to the statement.

The new agreement changes some components of the collaboration. Any discoveries GSK makes with the 23andMe data will now be solely owned by the British pharmaceutical giant, while the genetic-testing company will be eligible for royalties on some projects. In the past, the two companies pursued new drug targets jointly. GSK’s new deal with 23andMe is also non-exclusive, leaving the genetic-testing company free to license its database to other drugmakers.

Source: Drugmakers Are Set To Pay 23andMe Millions To Access Consumer DNA – Slashdot

So – you paid for a DNA test and it turns out you didn’t think of the privacy aspect at all. Neither did you realise that you were also giving up your family’s DNA. Or that you can’t actually change your DNA. Well done. It’s being spread all over the place. And no, the data is not anonymous – DNA is the most personal information you can ever give up.

Apple’s MAC Address Privacy Feature Has Never Worked

Ever since Apple re-branded as the “Privacy” company several years back, it’s been rolling out features designed to show its commitment to protecting users. Yet while customers might feel safer using an iPhone, there’s already plenty of evidence that Apple’s branding efforts don’t always match the reality of its products. In fact, a lot of its privacy features don’t actually seem to work.

Case in point: new research shows that one of Apple’s proffered privacy tools—a feature that was supposed to anonymize mobile users’ connections to WiFi—is effectively “useless.” In 2020, Apple debuted a feature that, when switched on, was supposed to hide an iPhone user’s media access control—or MAC—address. When a device connects to a WiFi network, it must first send out its MAC address so the network can identify it; when the same MAC address pops up in network after network, it can be used by network observers to identify and track a specific mobile user’s movements.

Apple’s feature was supposed to provide randomized MAC addresses for users as a way of stopping this kind of tracking from happening. But, apparently, a bug persisted in the feature for years that made it effectively useless.
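For the curious, the sketch below illustrates the general technique such a feature relies on, not Apple’s own (unpublished) implementation: a per-network address is derived from a device-local secret and the network name, with the locally-administered bit set and the multicast bit cleared so the result is a valid, non-vendor-assigned MAC. The HMAC derivation, the function name, and the “deviceSecret” input are assumptions for illustration only.

```ts
// Hedged sketch of per-SSID MAC randomization (TypeScript on Node, illustration only).
// Real implementations differ; Apple's is not public.
import { createHmac } from "crypto";

function privateMacForNetwork(ssid: string, deviceSecret: string): string {
  // Derive 6 stable-but-unlinkable bytes per network from a device-local secret,
  // so the same network always sees the same randomized address while
  // different networks see unrelated ones.
  const bytes = createHmac("sha256", deviceSecret).update(ssid).digest().subarray(0, 6);
  bytes[0] = (bytes[0] | 0x02) & 0xfe; // set "locally administered", clear "multicast"
  return Array.from(bytes, (b) => b.toString(16).padStart(2, "0")).join(":");
}

// The same device presents different addresses to different networks:
console.log(privateMacForNetwork("CoffeeShopWiFi", "device-secret"));
console.log(privateMacForNetwork("AirportWiFi", "device-secret"));
```

Of course, a randomized address only helps if the real hardware MAC never leaks through some other channel; the discovery requests Mysk describes below did exactly that.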

According to a new report from Ars Technica, researchers recently tested the feature to see if it actually concealed their MAC addresses, only to find that it didn’t do that at all. Ars writes:

Despite promises that this never-changing address would be hidden and replaced with a private one that was unique to each SSID, Apple devices have continued to display the real one, which in turn got broadcast to every other connected device on the network.

One of the researchers behind the discovery of the vulnerability, Tommy Mysk, told Ars that, from the jump, “this feature was useless because of this bug,” and that, try as they might, he “couldn’t stop the devices from sending these discovery requests, even with a VPN. Even in the Lockdown Mode.”

What Apple’s justification is for advertising a feature that plainly does not work, I’m not sure. Gizmodo reached out to the company for comment and will update this story if they respond. A recent update, iOS 17.1, apparently patches the problem and ensures that the feature actually works.

Source: Apple’s MAC Address Privacy Feature Has Never Worked

Empowering Responsible and Compliant Practices: Bridging the Gap for US Citizens and Corporations with the New EU-US Data Privacy Framework

The Data Privacy Framework (DPF) presents new legal guidance to facilitate personal data sharing between US companies and their counterparts in the EU and the UK. This framework empowers individuals with greater control over their personal data and streamlines business operations by creating common rules around interoperable dataflows. Moreover, the DPF will help enable clear contract terms and business codes of conduct for corporations that collect, use, and transfer personal data across borders.

Any business that collects data related to people in the EU must comply with the EU’s General Data Protection Regulation (GDPR), which is the toughest privacy and security law across the globe. Thus, the DPF helps US corporations avoid potentially hefty fines and penalties by ensuring their data transfers align with GDPR regulations.

Data transfer procedures, which were historically time-consuming and riddled with legal complications, are now faster and more straightforward with the DPF, which allows for more transatlantic dataflows agreed on by US companies and their EU and UK counterparts. On July 10, 2023, the European Commission finalized an adequacy decision that assures the US offers data protection levels similar to the EU’s.

[…]

US companies can register with the DPF through the Department of Commerce DPF website. Companies that previously self-certified compliance with the EU-US Privacy Shield can transition to DPF by recertifying their adherence to DPF principles, including updating privacy policies to reflect any change in procedures and data subject rights that are crucial for this transition. Businesses should develop privacy policies that identify an independent recourse mechanism that can address data protection concerns. To qualify for the DPF the company must fall under the jurisdiction of either the Federal Trade Commission or the US Department of Transportation, though this reach may broaden in the future.

Source: Empowering Responsible and Compliant Practices: Bridging the Gap for US Citizens and Corporations with the New EU-US Data Privacy Framework | American Enterprise Institute – AEI

The whole self-certification thing seems as leaky as a sieve to me… And once data has gone into the US intelligence services you can assume it will go everywhere and there will be no stopping it from the EU side.

Equifax poked with paltry $13.4 million following 147m customer data breach in 2017

Credit bureau Equifax has been fined US$13.4 million by the Financial Conduct Authority (FCA), a UK financial watchdog, following its involvement in “one of the largest” data breaches ever.

This cyber security incident took place in 2017, when Equifax’s US-based parent company, Equifax Inc., suffered a data breach in which the personal data of up to 147.9 million customers was accessed by malicious actors. The FCA also revealed that, as this data was stored on company servers in the US, the hack also exposed the personal data of 13.8 million UK customers.

The data accessed during the hack included Equifax membership login details, customer names, dates of birth, partial credit card details and addresses.

According to the FCA, the cyber attack and subsequent data breach was “entirely preventable” and exposed UK customers to financial crime.
“There were known weaknesses in Equifax Inc’s data security systems and Equifax failed to take appropriate action in response to protect UK customer data,” the FCA explained.

The authority also noted that the UK arm of Equifax was not made aware that customer data had been accessed by malicious actors until six weeks after the cyber security incident was discovered by Equifax Inc.

The company was fined $60,727 by the British Information Commissioner’s Office (ICO) relating to the data breach in 2018.

On October 13th, Equifax stated that it had fully cooperated with the FCA during the investigation, which has been extensive. The FCA also said that the fine levelled at Equifax Inc had been reduced following the company’s agreement to cooperate with the watchdog and resolve the cyber attack.

Patricio Remon, president for Europe at Equifax, said that since the cyber attack against Equifax in 2017, the company has “invested over $1.5 billion in a security and technology transformation”. Remon also said that “few companies have invested more time and resources than Equifax to ensure that consumers’ information is protected”.

Source: Equifax fined $13.4 million following data breach

ICE, CBP, Secret Service All Illegally Used Smartphone Location Data

In a bombshell report, an oversight body for the Department of Homeland Security (DHS) found that Immigration and Customs Enforcement (ICE), Customs and Border Protection (CBP), and the Secret Service all broke the law while using location data harvested from ordinary apps installed on smartphones. In one instance, a CBP official also inappropriately used the technology to track the location of coworkers with no investigative purpose. For years U.S. government agencies have been buying access to location data through commercial vendors, a practice which critics say skirts the Fourth Amendment requirement of a warrant. During that time, the agencies have typically refused to publicly explain the legal basis on which they based their purchase and use of the data. Now, the report shows that three of the main customers of commercial location data broke the law while doing so, and didn’t have any supervisory review to ensure proper use of the technology. The report also recommends that ICE stop all use of such data until it obtains the necessary approvals, a request that ICE has refused.

The report, titled “CBP, ICE, and Secret Service Did Not Adhere to Privacy Policies or Develop Sufficient Policies Before Procuring and Using Commercial Telemetry Data,” is dated September 28, 2023, and comes from Joseph V. Cuffari, the Inspector General for DHS. The report was originally marked as “law enforcement sensitive,” but the Inspector General has now released it publicly.

Source: ICE, CBP, Secret Service All Illegally Used Smartphone Location Data – Slashdot

EPIC urges FTC to investigate Grindr’s data practices

On Wednesday, EPIC filed a complaint with the US government watchdog over Grindr’s “apparent failure to safeguard users’ sensitive personal data.” This includes both present and past users who have since deleted their accounts, according to the complaint. Despite promising in its privacy policy to delete personal info if customers remove their account, Grindr allegedly retained and disclosed some of this data to third parties.

Considering that people trust the dating app with a ton of very sensitive information — this includes their sexual preferences, self-reported HIV status, chat history, photos including nudes, and location information — “learning that Grindr breaks the promises it makes to users would likely affect a consumer’s decision regarding whether to use Grindr,” the complaint states [PDF].

Grindr, for its part, says privacy is of the uppermost importance to it, and that these “unfounded” claims stem from allegations made by a disgruntled ex-worker. So that’s all right then.

“Privacy is a top priority for Grindr and the LGBTQ+ community we serve, and we have adopted industry-leading privacy practices and tools to protect and empower our users,” a spokesperson told The Register.

“We are sorry that the former employee behind the unfounded allegations in today’s request is dissatisfied with his departure from the company; we wish him the best.”

The former employee in question is Grindr’s ex-chief privacy officer Ron De Jesus. In June, De Jesus filed a wrongful termination lawsuit [PDF] against his former bosses that also accused the dating app of violating privacy laws.

According to the lawsuit, De Jesus was “leading the charge to keep Grindr compliant with state, national, and international laws” after Norway’s data protection agency fined the dating app biz about $12 million in December 2021 and a Wall Street Journal article in May 2022 accused the application developer of selling users’ location data.

But despite De Jesus’ attempts, “Grindr placed profit over privacy and got rid of Mr De Jesus for his efforts and reports,” the lawsuit alleges.

EPIC’s complaint, which highlights De Jesus’ allegations, asks the FTC to look into potential violations of privacy law, including deceptive data retention and disclosure practices.

It also accuses Grindr of violating the Health Breach Notification Rule (HBNR). The dating app is subject to the HBNR because it asks users to self-report health data including HIV status, last-tested date, and vaccination status. By sharing these records with third parties and retaining health data after users deleted their accounts, Grindr allegedly breached the HBNR, EPIC says.

The privacy advocates at EPIC want the FTC to make Grindr comply with the laws and stop any “unlawful or impermissible” data retention practices. Additionally, the complaint calls on the federal agency to force Grindr to notify any users whose data was misused, and to impose fines against the dating app for any violations of the HBNR.

Source: EPIC urges FTC to investigate Grindr’s data practices • The Register

Singapore plans to scan your face instead of your passport

[…] “Singapore will be one of the first few countries in the world to introduce automated, passport-free immigration clearance,” said minister for communications and information Josephine Teo in a wrap-up speech for the bill. Teo did concede that Dubai had such clearance for select enrolled travelers, but there was no assurance of other countries planning similar actions.

[…]

Another consideration for why passports will likely remain relevant in Singapore airports is for checking in with airlines. Airlines check passports not just to confirm identity, but also visas and more. Airlines are often held responsible for stranded passengers so will likely be required to confirm travelers have the documentation required to enter their destination.

The Register asked Singapore Airlines to confirm if passports will still be required on the airline after the implementation of biometric clearance. They deferred to Changi’s operator, Changi Airport Group (CAG), which The Reg also contacted – and we will update if a relevant reply arises.

What travelers will see is an expansion of a program already taking form. Changi airport currently uses facial recognition software and automated clearance for some parts of immigration.

[…]

Passengers who pre-submit required declarations online can already get through Singapore’s current automated immigration lanes in 20 to 30 seconds once they arrive at the front of the queue. It’s one reason Changi has a reputation for being quick to navigate.

[…]

According to CAG, the airport handled 5.12 million passenger movements in June 2023 alone. This figure is expected to only increase as it currently stands at 88 percent of pre-COVID levels and the government sees such efficiency as critical to managing the impending growth.

But the reasoning for biometric clearance goes beyond a boom in travelers. With an aging population and shrinking workforce, Singapore’s Immigration & Checkpoints Authority (ICA) will have “to cope without a significant increase in manpower,” said Teo.

Additionally, security threats including pandemics and terrorism call for Singapore to “go upstream” on immigration measures, “such as the collection of advance passenger and crew information, and entry restrictions to be imposed on undesirable foreigners, even before they arrive at our shores,” added the minister.

This collection and sharing of biometric information is what enables the passport-free immigration process – passenger and crew information will need to be disclosed to the airport operator to use for bag management, access control, gate boarding, duty-free purchases, as well as tracing individuals within the airport for security purposes.

The shared biometrics will serve as a “single token of authentication” across all touch points.

Members of Singapore’s parliament have raised concerns about shifting to universal automated clearance, including data privacy, and managing technical glitches.

According to Teo, only Singaporean companies will be allowed ICA-related IT contracts, vendors will be given non-disclosure agreements, and employees of such firms must undergo security screening. Traveler data will be encrypted and transported through data exchange gateways.

As for who will protect the data, that role goes to CAG, with ICA auditing its compliance.

In case of disruptions that can’t be handled by an uninterruptible power supply, off-duty officers will be called in to go back to analog.

And even though the ministry is pushing universal coverage, there will be some exceptions, such as those who are unable to provide certain biometrics or are less digitally literate. Teo promised their clearance can be done manually by immigration officers.

Source: Singapore plans to scan your face instead of your passport • The Register

Data safety is a real issue here – how long will the data be kept, and for what other purposes will it be used?

Firefox now has private browser-based website translation – no cloud servers required

Web browsers have had tools that let you translate websites for years. But they typically rely on cloud-based translation services like Google Translate or Microsoft’s Bing Translator.

The latest version of Mozilla’s Firefox web browser does things differently. Firefox 118 brings support for Fullpage Translation, which can translate websites entirely in your browser. In other words, everything happens locally on your computer without any data sent to Microsoft, Google, or other companies.

Here’s how it works. Firefox will notice when you visit a website in a supported language that’s different from your default language, and a translate icon will show up in the address bar.

Tap that icon and you’ll see a pop-up window that asks what languages you’d like to translate from and to. If the browser doesn’t automatically detect the language of the website you’re visiting, you can set these manually.

Then click the “Translate” button, and a moment later the text on the page should be visible in your target language. If you’d prefer to go back to the original language, just tap the translate icon again and choose the option that says “show original.”

You can also tap the settings icon in the translation menu and choose to “always translate” or “never translate” a specific language so that you won’t have to manually invoke the translation every time you visit sites in that language.

Now for the bad news: Firefox Fullpage Translation only supports 9 languages so far:

  • Bulgarian
  • Dutch
  • English
  • French
  • German
  • Italian
  • Polish
  • Portuguese
  • Spanish

[…]

Source: Firefox 118 brings browser-based website translation (no cloud servers required… for a handful of supported languages) – Liliputing

Philips Hue / Signify Ecosystem: ‘Collapsing Into Stupidity’

The Philips Hue ecosystem of home automation devices is “collapsing into stupidity,” writes Rachel Kroll, veteran sysadmin and former production engineer at Facebook. “Unfortunately, the idiot C-suite phenomenon has happened here too, and they have been slowly walking down the road to full-on enshittification.” From her blog post: I figured something was up a few years ago when their iOS app would block entry until you pushed an upgrade to the hub box. That kind of behavior would never fly with any product team that gives a damn about their users — want to control something, so you start up the app? Forget it, we are making you placate us first! How is that user-focused, you ask? It isn’t.

Their latest round of stupidity pops up a new EULA and forces you to take it or, again, you can’t access your stuff. But that’s just more unenforceable garbage, so who cares, right? Well, it’s getting worse.

It seems they are planning on dropping an update which will force you to log in. Yep, no longer will your stuff Just Work across the local network. Now it will have yet another garbage “cloud” “integration” involved, and they certainly will find a way to make things suck even worse for you. If you have just the lights and smart outlets, Kroll recommends deleting the units from the Hue Hub and adding them to an IKEA Dirigera hub. “It’ll run them just fine, and will also export them to HomeKit so that much will keep working as well.” That said, it’s not a perfect solution. You will lose motion sensor data, the light level, the temperature of that room, and the ability to set custom behaviors with those buttons.

“Also, there’s no guarantee that IKEA won’t hop on the train to sketchville and start screwing over their users as well,” adds Kroll.

Source: Is the Philips Hue Ecosystem ‘Collapsing Into Stupidity’? – Slashdot

Philips Hue will force users to upload their data to the Hue cloud – changing its TOS after you bought the product precisely because it didn’t need an account

Today’s story is about Philips Hue by Signify. They will soon start forcing accounts on all users and upload user data to their cloud. For now, Signify says you’ll still be able to control your Hue lights locally as you’re currently used to, but we don’t know if this may change in the future. The privacy policy allows them to store the data and share it with partners.

[…]

When you open the Philips Hue app you will now be prompted with a new message: Starting soon, you’ll need to be signed in.

[…]

So today, you can choose not to share your information with Signify by not creating an account. But this choice will soon be taken away and all users will need to share their data with Philips Hue.

Confirming the news

I didn’t want to cry wolf, so I decided to verify the above statement with Signify. They sadly confirmed:

Twitter conversation with Philips Hue (source: Twitter)

The policy they are referring to is their privacy policy (April 2023 edition, download version).

[…]

When asked what drove this change, the answer is the usual: security. Well Signify, you know what keeps user data even more secure? Not uploading it all to your cloud.

[…]

As a user, we encourage you to reach out to Signify support and voice your concern.

NOTE: Their support form doesn’t work. You can visit their Facebook page, though.

Dear Signify, please reconsider your decision and do not move forward with it. You’ve reversed bad decisions before. People care about privacy and forcing accounts will hurt the brand in the long term. The pain caused by this is not worth the gain.

Source: Philips Hue will force users to upload their data to Hue cloud

No, Philips / Signify – I have used these devices for years without having to have an account or be connected to the internet. It’s one of the reasons I bought into Hue. Making us give up data to keep using something we already bought is a dangerous decision, considering the private and exploitable nature of that data, as well as greedy and rude.

T-Mobile US exposes some customer data, but don’t say breach

T-Mobile US has had another bad week on the infosec front – this time stemming from a system glitch that exposed customer account data, followed by allegations of another breach the carrier denied.

According to customers who complained of the issue on Reddit and X, the T-Mobile app was displaying other customers’ data instead of their own – including the strangers’ purchase history, credit card information, and address.

This being T-Mobile’s infamously leaky US operation, people immediately began leaping to the obvious conclusion: another cyber attack or breach.

“There was no cyber attack or breach at T-Mobile,” the telco assured us in an emailed statement. “This was a temporary system glitch related to a planned overnight technology update involving limited account information for fewer than 100 customers, which was quickly resolved.”

Note, as Reddit poster Jman100_JCMP did, T-Mobile means fewer than 100 customers had their data exposed – but far more appear to have been able to view those 100 customers’ data.

As for the breach, the appearance of exposed T-Mobile data was alleged by malware repository vx-underground’s X (Twitter) account. The Register understands T-Mobile examined the data and determined that independently owned T-Mobile dealer, Connectivity Source, was the source – resulting from a breach it suffered in April. We understand T-Mobile believes vx-underground misinterpreted a data dump.

Connectivity Source was indeed the subject of a breach in April, in which an unknown attacker made off with employee data including names and social security numbers – around 17,835 of them from across the US, where Connectivity appears to do business exclusively as a white-labelled T-Mobile US retailer.

Looks like the carrier really dodged the bullet on this one – there’s no way Connectivity Source employees could be mistaken for its own staff.

T-Mobile US has already experienced two prior breaches this year, but that hasn’t imperilled the biz much – its profits have soared recently and some accompanying sizable layoffs will probably keep things in the black for the foreseeable future.

Source: T-Mobile US exposes some customer data, but don’t say breach • The Register

Dutch privacy foundation SDBN sues Twitter for collecting and selling data via MoPub (Wordfeud, Duolingo, etc.) without notifying users

The Dutch Data Protection Foundation (SDBN) wants to enforce a mass claim for 11 million people through the courts against social media company X, the former Twitter. Between 2013 and 2021, that company owned the advertising platform MoPub, which, according to the privacy foundation, illegally traded in data from users of more than 30,000 free apps such as Wordfeud, Buienradar and Duolingo.

SDBN has been trying to reach an agreement with X since November last year, but according to the foundation, without success. That is why SDBN is now starting a lawsuit at the Rotterdam court. Central to this is MoPub’s handling of personal data such as religious beliefs, sexual orientation and health. In addition to compensation, SDBN wants this data to be destroyed.

The foundation also believes that users are entitled to profit contributions. A lot of money can be made by sharing personal data with thousands of companies, says SDBN chairman Anouk Ruhaak. Although she says it is difficult to find out exactly which companies had access to the data. “By holding X. Corp liable, we hope not only to obtain compensation for all victims, but also to put a stop to this type of practice,” said Ruhaak. “Unfortunately, these types of companies often only listen when it hurts financially.”

Source: De Ondernemer | Privacystichting SDBN wil via rechter massaclaim bij…

Join the claim here

Google Chrome’s Privacy Sandbox: any site can now query all your habits

[…]

Specifically, the web giant’s Privacy Sandbox APIs, a set of ad delivery and analysis technologies, now function in the latest version of the Chrome browser. Website developers can thus write code that calls those APIs to deliver and measure ads to visitors with compatible browsers.

That is to say, sites can ask Chrome directly what kinds of topics you’re interested in – topics automatically selected by Chrome from your browsing history – so that ads personalized to your activities can be served. This is supposed to be better than being tracked via third-party cookies, support for which is being phased out. There are other aspects to the sandbox that we’ll get to.
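As an illustration of what “asking Chrome directly” looks like from a site’s point of view, here is a minimal sketch of a page script calling the Topics API. The browsingTopics() method and the shape of its result follow Google’s published documentation for the API; the wrapper function and feature detection are illustrative assumptions, not code from the article.

```ts
// Minimal sketch: a site querying the Topics API in a Privacy Sandbox-enabled Chrome.
// Browsers without the API (e.g. Firefox, Safari) simply lack the method, and users
// can switch it off under Chrome's "Ad privacy" settings.
interface BrowsingTopic {
  topic: number;            // ID into Google's public topics taxonomy
  taxonomyVersion: string;  // which version of the taxonomy the ID refers to
  modelVersion: string;
  configVersion: string;
  version: string;
}

async function readAdTopics(): Promise<BrowsingTopic[]> {
  const doc = document as Document & {
    browsingTopics?: () => Promise<BrowsingTopic[]>;
  };
  if (typeof doc.browsingTopics !== "function") {
    return []; // no Privacy Sandbox in this browser, or the API is unavailable
  }
  try {
    // Returns a handful of topics derived by Chrome from local browsing history;
    // calling this also records the site as an "observer" of the returned topics.
    return await doc.browsingTopics();
  } catch {
    return []; // blocked by permissions policy or user settings
  }
}
```

An ad script would pass the returned topic IDs along with its ad requests, which is the sense in which any embedding site can now query your interests.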

While Chrome is the main vehicle for Privacy Sandbox code, Microsoft Edge, based on the open source Chromium project, has also shown signs of supporting the technology. Apple and Mozilla have rejected at least the Topics API for interest-based ads on privacy grounds.

[…]

“The Privacy Sandbox technologies will offer sites and apps alternative ways to show you personalized ads while keeping your personal information more private and minimizing how much data is collected about you.”

These APIs include:

  • Topics: Locally track browsing history to generate ads based on demonstrated user interests without third-party cookies or identifiers that can track across websites.
  • Protected Audience (FLEDGE): Serve ads for remarketing (e.g. you visited a shoe website so we’ll show you a shoe ad elsewhere) while mitigating third-party tracking across websites.
  • Attribution Reporting: Data to link ad clicks or ad views to conversion events (e.g. sales).
  • Private Aggregation: Generate aggregate data reports using data from Protected Audience and cross-site data from Shared Storage.
  • Shared Storage: Allow unlimited, cross-site storage write access with privacy-preserving read access. In other words, you graciously provide local storage via Chrome for ad-related data or anti-abuse code.
  • Fenced Frames: Securely embed content onto a page without sharing cross-site data. Or iframes without the security and privacy risks.

These technologies, Google and industry allies believe, will allow the super-corporation to drop support for third-party cookies in Chrome next year without seeing a drop in targeted advertising revenue.

[…]

“Privacy Sandbox removes the ability of website owners, agencies and marketers to target and measure their campaigns using their own combination of technologies in favor of a Google-provided solution,” James Rosewell, co-founder of MOW, told The Register at the time.

[…]

Controversially, in the US, where the lack of coherent privacy rules suits ad companies just fine, the popup merely informs the user that these APIs are now present and active in the browser, but requires visiting Chrome’s Settings page to actually manage them – you have to opt out, if you haven’t already. In the EU, as required by law, the notification is an invitation to opt in to interest-based ads via Topics.

Source: How Google Chrome’s Privacy Sandbox works and what it means • The Register

Google taken to court in NL for large scale privacy breaches

The Foundation for the Protection of Privacy Interests and the Consumers’ Association are taking the next step in their fight against Google. The tech company is being taken to court today for ‘large-scale privacy violations’.

The proceedings demand, among other things, that Google stop its constant surveillance and sharing of personal data through online advertising auctions and also pay damages to consumers. Since the announcement of this action on May 23, 2023, more than 82,000 Dutch people have already joined the mass claim.

According to the organizations, Google is acting in violation of Dutch and European privacy legislation. The tech giant collects users’ online behavior and location data on an immense scale through its services and products, without providing enough information or obtaining permission. Google then shares that data, including highly sensitive personal data about health, ethnicity and political preference, for example, with hundreds of parties via its online advertising platform.

Google is constantly monitoring everyone. Even via third-party cookies – which are invisible – Google continues to collect data through other people’s websites and apps, even when someone is not using its products or services. This enables Google to monitor almost the entire internet behavior of its users.

All these matters have been discussed with Google, to no avail.

The Foundation for the Protection of Privacy Interests represents the interests of users of Google’s products and services living in the Netherlands who have been harmed by privacy violations. The foundation is working together with the Consumers’ Association in the case against Google. Consumers’ Association Claimservice, a partnership between the Consumers’ Association and ConsumersClaim, processes the registrations of affiliated victims.

More than 82,000 consumers have already registered for the Google claim. They demand compensation of 750 euros per participant.

A lawsuit by the American government against Google also starts today in the US; ten weeks have been set aside for it. That case mainly revolves around the power of Google’s search engine.

Essentially, Google is accused of entering into exclusive agreements to guarantee the use of its search engine. These are agreements that prevent alternative search engines from being pre-installed, or Google’s search app from being removed.

Source: Google voor de rechter gedaagd wegens ‘grootschalige privacyschendingen’ – Emerce (NL)