The EU wants to criminalize AI-generated deepfakes and the non-consensual sending of intimate images

[…] the European Council and Parliament have agreed with the proposal to criminalize, among other things, different types of cyber-violence. The proposed rules will criminalize the non-consensual sharing of intimate images, including deepfakes made by AI tools, which could help deter revenge porn. Cyber-stalking, online harassment, misogynous hate speech and “cyber-flashing,” or the sending of unsolicited nudes, will also be recognized as criminal offenses.

The commission says that having a directive for the whole European Union that specifically addresses those particular acts will help victims in Member States that haven’t criminalized them yet. “This is an urgent issue to address, given the exponential spread and dramatic impact of violence online,” it wrote in its announcement.

[…]

In its reporting, Politico suggested that the recent spread of pornographic deepfake images using Taylor Swift’s face spurred EU officials to move forward with the proposal.

[…]

“The final law is also pending adoption in Council and European Parliament,” the EU Council said. According to Politico, if all goes well and the bill becomes law soon, EU states will have until 2027 to enforce the new rules.

Source: The EU wants to criminalize AI-generated porn images and deepfakes

The original article has a seriously misleading title, I guess for clickbait.

Hundreds of thousands of EU citizens ‘wrongly fined for driving in London Ulez’ in one of the EU’s largest privacy breaches

Hundreds of thousands of EU citizens were wrongly fined for driving in London’s Ulez clean air zone, according to European governments, in what has been described as “possibly one of the largest data breaches in EU history”.

The Guardian can reveal Transport for London (TfL) has been accused by five EU countries of illegally obtaining the names and addresses of their citizens in order to issue the fines, with more than 320,000 penalties, some totalling thousands of euros, sent out since 2021.

[…]

Since Brexit, the UK has been banned from automatic access to personal details of EU residents. Transport authorities in Belgium, Spain, Germany and the Netherlands have confirmed to the Guardian that driver data cannot be shared with the UK for enforcement of London’s ultra-low emission zone (Ulez), and claim registered keeper details were obtained illegally by agents acting for TfL’s contractor Euro Parking Collection.

In France, more than 100 drivers have launched a lawsuit claiming their details were obtained fraudulently, while Dutch lorry drivers are taking legal action against TfL over £6.5m of fines they claim were issued unlawfully.

According to the Belgian MP Michael Freilich, who has investigated the issue on behalf of his constituents, TfL is treating European drivers as a “cash cow” by using data obtained illegitimately to issue unjustifiable fines.

Many of the penalties have been issued to drivers who visited London in Ulez-compliant vehicles and were not aware they had to be registered with TfL’s collections agent Euro Parking at least 10 days before their visit.

Failure to register does not count as a contravention, according to Ulez rules, but some drivers have nonetheless received penalties of up to five-figure sums.

[…]

Some low-emission cars have been misclassed as heavy goods diesel vehicles and fined under the separate low-emission zone (Lez) scheme, which incurs penalties of up to £2,000 a day. Hundreds of drivers have complained that the fines arrived weeks after the early payment discount and appeals deadlines had passed.

One French driver was fined £25,000 for allegedly contravening Lez and Ulez rules, despite the fact his minibus was exempt.

[…]

EU countries say national laws allow the UK to access personal data only for criminal offences, not civil ones. Breaching Ulez rules is a civil offence, while more risky behaviour such as speeding or driving under the influence of drink or drugs can be a criminal offence. This raises the question of whether Euro Parking can legally carry out its contract with TfL.

Euro Parking was awarded a five-year contract by TfL in 2020 to recover debts from foreign drivers who had breached congestion or emission zone rules.

The company, which is paid according to its performance, is estimated to have earned between £5m and £10m. It has the option to renew for a further five years.

The firm is owned by the US transport technology group Verra Mobility, which is listed on the Nasdaq stock exchange and headed by the former Bank of America Merrill Lynch executive David Roberts. The company’s net revenue was $205m (£161m) in the second quarter of 2023.

In October, the Belgian government ordered a criminal investigation after a court bailiff was accused of illegally passing the details of 20,000 drivers to Euro Parking for Ulez enforcement. The bailiff was suspended in 2022 and TfL initially claimed that no Belgian data had been shared with Euro Parking since then. However, a freedom of information request by the Guardian found that more than 17,400 fines had been issued to Belgians in the intervening 19 months.

[…]

Campaigners accuse Euro Parking of circumventing data protection rules by using EU-based agents to request driver data without disclosing that it is for UK enforcement.

Last year, an investigation by the Dutch vehicle licensing authority RDW found that the personal details of 55,000 citizens had been obtained via a national contact point (NCP) in Italy. “The NCP informed us that the authorised users have used the data in an unlawful way and stopped their access,” a spokesperson said.

The German transport authority KBA claimed that an Italian NCP was used to obtain information from its database. “Euro Parking obtained the data through unlawful use of an EU directive to facilitate the cross-border exchange of information about traffic offences that endanger road safety,” a KBA spokesperson said. “The directive does not include breaches of environmental rules.”

Spain’s transport department told the Guardian that UK authorities were not allowed access to driver details for Ulez enforcement. Euro Parking has sent more than 25,600 fines to Spanish drivers since 2021.

In France, 102 drivers have launched a lawsuit claiming that their details were fraudulently obtained.

[…]

Source: Hundreds of thousands of EU citizens ‘wrongly fined for driving in London Ulez’ | TfL | The Guardian

I guess Brexit has panned out economically much worse than we thought

iPhone Apps Secretly Harvest Data When They Send You Notifications, Researchers Find

iPhone apps including Facebook, LinkedIn, TikTok, and X/Twitter are skirting Apple’s privacy rules to collect user data through notifications, according to tests by security researchers at Mysk Inc., an app development company. Users sometimes close apps to stop them from collecting data in the background, but this technique gets around that protection. The data is unnecessary for processing notifications, the researchers said, and seems related to analytics, advertising, and tracking users across different apps and devices.

It’s par for the course that apps would find opportunities to sneak in more data collection, but “we were surprised to learn that this practice is widely used,” said Tommy Mysk, who conducted the tests along with Talal Haj Bakry. “Who would have known that an innocuous action as simple as dismissing a notification would trigger sending a lot of unique device information to remote servers? It is worrying when you think about the fact that developers can do that on-demand.”

These particular apps aren’t unusual bad actors. According to the researchers, it’s a widespread problem plaguing the iPhone ecosystem.

This isn’t the first time Mysk’s tests have uncovered data problems at Apple, which has spent untold millions convincing the world that “what happens on your iPhone, stays on your iPhone.” In October 2023, Mysk found that a lauded iPhone feature meant to protect details about your WiFi address isn’t as private as the company promises. In 2022, Apple was hit with over a dozen class action lawsuits after Gizmodo reported on Mysk’s finding that Apple collects data about its users even after they flip the switch on an iPhone privacy setting that promises to “disable the sharing of device analytics altogether.”

The data looks like information that’s used for “fingerprinting,” a technique companies use to identify you based on several seemingly innocuous details about your device. Fingerprinting circumvents privacy protections to track people and send them targeted ads.

[…]

For example, the tests showed that when you interact with a notification from Facebook, the app collects IP addresses, the number of milliseconds since your phone was restarted, the amount of free memory space on your phone, and a host of other details. Combining data like these is enough to identify a person with a high level of accuracy. The other apps in the test collected similar information. LinkedIn, for example, uses notifications to gather which timezone you’re in, your display brightness, and what mobile carrier you’re using, as well as a host of other information that seems specifically related to advertising campaigns, Mysk said.
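To make the fingerprinting mechanics concrete, here is a minimal sketch – not the apps’ actual code – of how a handful of such signals could be combined into a stable identifier. The field names and values are illustrative assumptions, not data from the Mysk tests:

```python
import hashlib
import json

def device_fingerprint(signals: dict) -> str:
    """Combine seemingly innocuous device signals into one stable ID."""
    # Serialize deterministically so the same device always hashes the same.
    canonical = json.dumps(signals, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Illustrative signals of the kind the researchers observed being sent.
print(device_fingerprint({
    "ip_address": "203.0.113.7",
    "uptime_bucket_hours": 25,      # derived from ms-since-reboot
    "free_disk_mb": 12288,
    "timezone": "Europe/Amsterdam",
    "display_brightness": 0.62,
    "carrier": "ExampleCell",
})[:16])
```

Individually each value is unremarkable; hashed together they can single out one device with high probability, which is why dismissing a single notification can be enough to re-identify you.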

[…]

Apps can collect this kind of data about you when they’re open, but swiping an app closed is supposed to cut off the flow of data and stop the app from running altogether. Notifications, however, seem to provide a backdoor.

Apple provides special software to help your apps send notifications. For some notifications, the app might need to play a sound or download text, images, or other information. If the app is closed, the iPhone operating system lets the app wake up temporarily to contact company servers, send you the notification, and perform any other necessary business. The data harvesting Mysk spotted happened during this brief window.

[…]

Source: iPhone Apps Secretly Harvest Data When They Send You Notifications, Researchers Find

France fines Amazon $35 million over intrusive employee surveillance

France’s data privacy watchdog organization, the CNIL, has fined a logistics subsidiary of Amazon €32 million, or $35 million in US dollars, over the company’s use of an “overly intrusive” employee surveillance system. The CNIL says that the system employed by Amazon France Logistique “measured work interruptions with such accuracy, potentially requiring employees to justify every break or interruption.”

Of course, this system was forced on the company’s warehouse workers, as they seem to always get the short end of the Amazon stick. The CNIL says the surveillance software tracked the inactivity of employees via a mandatory barcode scanner that’s used to process orders. The system tracks idle time as gaps between barcode scans, calling out employees for periods of downtime as short as one minute. The French regulator ruled that this level of granularity was illegal, citing Europe’s General Data Protection Regulation (GDPR) as the legal basis.
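As an illustration only, here is a rough sketch, on invented timestamps, of the kind of gap computation such a system performs; the one-minute figure comes from the article:

```python
from datetime import datetime, timedelta

# Invented timestamps of successive barcode scans by one worker.
scans = [
    datetime(2024, 1, 23, 9, 0, 5),
    datetime(2024, 1, 23, 9, 0, 40),
    datetime(2024, 1, 23, 9, 2, 20),  # 100-second gap -> flagged
    datetime(2024, 1, 23, 9, 2, 45),
]

THRESHOLD = timedelta(minutes=1)  # the granularity the CNIL objected to

# Any gap between consecutive scans above the threshold becomes a
# recorded "interruption" the employee may be asked to justify.
interruptions = [(earlier, later - earlier)
                 for earlier, later in zip(scans, scans[1:])
                 if later - earlier > THRESHOLD]

for start, gap in interruptions:
    print(f"idle from {start:%H:%M:%S} for {gap.total_seconds():.0f}s")
```

The privacy problem is not the subtraction itself but the resolution: logging every sub-minute pause produces a minute-by-minute activity profile of each worker.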

Notably, this isn’t being classified as a labor case, but rather a data processing case regarding excessive monitoring. “As implemented, the processing is considered to be excessively intrusive,” the CNIL wrote, noting that Amazon uses this data to assess employee performance on a weekly basis. The organization also noted that Amazon held onto this data for all employees and temporary workers.

[…]

Source: France fines Amazon $35 million over ‘intrusive’ employee surveillance

Dutch phones can be easily tracked online: ‘Extreme security risk’

[Image: a map of the Netherlands with cell phone towers]

BNR received more than 80 gigabytes of location data from data traders: the coordinates of millions of telephones, often registered dozens of times a day.

The gigantic mountain of data also includes the movements of people in security-sensitive roles. A senior army officer could be followed as he drove from his home in the Randstad to various military locations around the country. A destination he visited often was the Frederikazerne, headquarters of the Military Intelligence and Security Service (MIVD). The officer confirmed the authenticity of the data to BNR by telephone.

[…]

The data also reveals the home address of someone who often visits the Penitentiary in Vught, where terrorists and serious criminals are imprisoned. A spokesperson for the Judicial Institutions Agency (DJI) confirmed that the person, who according to the Land Registry lives at this address, had actually brought a mobile phone onto the premises with permission and stated that the matter was being investigated.

These are just examples; the list of potential targets is long. Up to 1,200 phones in the dataset visited the Zoetermeer office that houses the National Police, the National Public Prosecutor’s Office and Europol. Up to 70 telephones were registered at the King’s residential palace, Huis ten Bosch. At Volkel Air Base, a storage site for nuclear weapons, up to 370 telephones were counted. The National Police’s management says it is aware of the problem and is ‘looking internally to see what measures are appropriate to combat this’.

‘National security implications’

BNR had two experts inspect the dataset. “This is an extreme security risk, with possible implications for national security,” says Ralph Moonen, technical director of Secura. “It’s really shocking that this can happen like this,” says Sjoerd van der Meulen, cybersecurity specialist at DataExpert.

The technology used to track mobile phones is designed for use by advertisers, but is suitable for other purposes, says Paul Pols, former technical advisor to the Assessment Committee for the Use of Powers, which supervises the intelligence services. According to Pols, it is known that the MIVD and AIVD also purchase access to this type of data on the data market under the heading ‘open sources’. “What is striking about this case is that you can easily access large amounts of data from Dutch citizens,” said the cybersecurity expert.

For sale via an online marketplace in Berlin

That access was achieved through an online marketplace based in Berlin. On this platform, Datarade.ai, hundreds of companies offer personal data for sale. In addition to location data, medical information and credit scores are also available.

Following a tip from a data subject, BNR responded to an advertisement offering location data of Dutch users. A sales employee of the platform then put BNR in touch with two medium-sized providers: Datastream Group from Florida in the US and Factori.ai from Singapore – both companies have fewer than 50 employees, according to their LinkedIn pages.

Datastream and Factori offer similar services: a subscription to the location data of mobile phones in the Netherlands is available for prices starting from $2,000 per month. Those who pay more can receive fresh data every 24 hours via the cloud, possibly even from all over the world.

[…]

Upon request, BNR was sent a full month of historical data from Dutch telephones. The data was nominally anonymized – it contained no telephone numbers – but each phone can still be recognized by a unique number combination: the ‘mobile advertising ID’ used by Apple and Google to show individual users relevant advertisements within the limits of European privacy legislation.

Possibly four million Dutch victims of tracking

The precise origin of the data traded online is unclear. According to the providers, it comes from apps whose users have given permission to use location data – fitness or navigation apps, for example, that sell the data on. This is how the data ultimately ends up at Factori and Datastream. By combining data from multiple sources, gigantic files are created.

[…]

It is not difficult to recognize the owners of individual phones in the data. By linking sleeping places to public registers, such as the Land Registry, and workplaces to LinkedIn profiles, BNR was able to identify, in addition to the army officer, a project manager from Alphen aan den Rijn and an amateur football referee. The discovery that they had been digitally stalked for at least a month drew shocked reactions: ‘Bizarre’, and ‘I immediately turned off location sharing on my phone’.
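A minimal sketch of the re-identification step BNR describes: for one advertising ID, take the most frequent night-time position as the likely sleeping place, then match that location against a public register. All identifiers and coordinates below are invented:

```python
from collections import Counter

# Invented records: (advertising_id, hour_of_day, rounded_lat, rounded_lon)
pings = [
    ("ad-id-1", 2,  52.377, 4.902),
    ("ad-id-1", 3,  52.377, 4.902),
    ("ad-id-1", 14, 52.090, 5.121),   # daytime ping: likely a workplace
    ("ad-id-1", 23, 52.377, 4.902),
]

def likely_home(records, night_start=22, night_end=6):
    """Most frequent night-time location of a device: a 'sleeping place'."""
    at_night = [(lat, lon) for _, hour, lat, lon in records
                if hour >= night_start or hour < night_end]
    return Counter(at_night).most_common(1)[0][0] if at_night else None

# -> (52.377, 4.902): an address to look up in the Land Registry
print(likely_home(pings))
```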

Trade is prohibited, but the government does not act

Datarade, the Berlin data marketplace, informed BNR in an email that traders on their platform are ‘fully liable’ for the data they offer. Illegal practices can be reported using an online form. The spokesperson for the German company leaves open the question of whether measures are being taken against the sale of location data.

[…]

Source (Google Translate): Dutch phones can be secretly tracked online: ‘Extreme security risk’ | BNR News Radio

Source (Dutch original): Nederlandse telefoons online stiekem te volgen: ‘Extreem veiligheidsrisico’

Drivers would prefer to buy a low-tech car than one that shares their data

According to a survey of 2,000 Americans conducted by Kaspersky in November and published this week, 72 percent of drivers are uncomfortable with automakers sharing their data with advertisers, insurance companies, subscription services, and other third-party outfits. Specifically, 37.3 percent of those polled are “very uncomfortable” with this data sharing, and 34.5 percent are “somewhat uncomfortable.”

However, only 28 percent of the total respondents say they have any idea what kind of data their car is collecting. Spoiler alert: It’s potentially all the data. An earlier Mozilla Foundation investigation, which assessed the privacy policies and practices of 25 automakers, gave every single one a failing grade.

In Moz’s September Privacy Not Included report, the org warned that car manufacturers may be collecting and selling not just location history, driving habits, and in-car browser histories. Some connected cars may also track drivers’ sexual activity, immigration status, race, facial expressions, weight, health, and even genetic information, if that information becomes available.

Back to the Kaspersky survey: 87 percent said automakers should be required to delete their data upon request. Depending on where you live, and thus the privacy law you’re under, the manufacturers may be obligated to do so.

Oddly, while motorists are worried about their cars sharing their data with third parties, they don’t seem that concerned about their vehicles snooping on them in the first place.

Less than half (41.8 percent) of respondents said they are worried that their vehicle’s sensors, infotainment system, cameras, microphones, and other connected apps and services might be collecting their personal data. And 80 percent of respondents pair their phone with their car anyway, allowing data and details of activities to be exchanged between apps and the vehicle, and potentially its manufacturer.

This echoes another survey published this week that found many drivers are willing to trade their personal data and privacy for driver personalization — things like seat, mirror, and entertainment preferences (43 percent) — and better insurance rates (67 percent).

That survey also polled 2,000 American drivers and found that while most (68 percent) don’t mind automakers collecting their personal data, only five percent believe this surveillance should be unrestricted, and 63 percent said it should be on an opt-in basis.

Perhaps it’s time for vehicle makers to take note.

Source: Surveyed drivers prefer low-tech cars over data-sharing ones • The Register

Also, we want buttons back too please.

Google agrees to settle $5 billion lawsuit accusing it of tracking Incognito users

In 2020, Google was hit with a lawsuit that accused it of tracking Chrome users’ activities even when they were using Incognito mode. Now, after a failed attempt to get it dismissed, the company has agreed to settle the complaint that originally sought $5 billion in damages. According to Reuters and The Washington Post, neither side has made the details of the settlement public, but they’ve already agreed to the terms that they’re presenting to the court for approval in February.

When the plaintiffs filed the lawsuit, they said Google used tools like its Analytics product, apps and browser plug-ins to monitor users. They argued that by tracking someone in Incognito mode, the company falsely led people to believe they could control the information they were willing to share with it. At the time, a Google spokesperson said that while Incognito mode doesn’t save a user’s activity on their device, websites could still collect their information during the session.

The lawsuit’s plaintiffs presented internal emails that allegedly showed conversations between Google execs proving that the company monitored Incognito browser usage to sell ads and track web traffic. Their complaint accused Google of violating federal wire-tapping and California privacy laws and sought up to $5,000 in damages per affected user. They claimed that millions of people who’d been using Incognito since 2016 had likely been affected, which explains the massive damages they were seeking from the company. Google has likely agreed to settle for an amount lower than $5 billion, but it has yet to reveal details about the agreement and has yet to get back to Engadget with an official statement.

Source: Google agrees to settle $5 billion lawsuit accusing it of tracking Incognito users

Verizon Once Again Busted Handing Out Sensitive Wireless Subscriber Information To Any Nitwit Who Asks For It – because no US enforcement of any kind

Half a decade ago we documented how the U.S. wireless industry was caught over-collecting sensitive user location and vast troves of behavioral data, then selling access to that data to pretty much anybody with a couple of nickels to rub together. It resulted in no end of abuse by everybody from stalkers to law enforcement — and even people pretending to be law enforcement.

While the FCC purportedly moved to fine wireless companies for this behavior, the agency still hasn’t followed through, despite the obvious ramifications of this kind of behavior in a post-Roe, authoritarian era.

Half a decade later, it’s still a very obvious problem. The folks over at 404 Media have documented the case of a stalker who managed to game Verizon in order to obtain sensitive data about his target, including her address, location data, and call logs.

Her stalker posed as a police officer (badly) and, as usual, Verizon did virtually nothing to verify his identity:

“Glauner’s alleged scheme was not sophisticated in the slightest: he used a ProtonMail account, not a government email, to make the request, and used the name of a police officer that didn’t actually work for the police department he impersonated, according to court records. Despite those red flags, Verizon still provided the sensitive data to Glauner.”

In this case, the stalker found it relatively trivial to take advantage of Verizon Security Assistance and Court Order Compliance Team (or VSAT CCT), which verifies law enforcement requests for data. You’d think that after a decade of very ugly scandals on this front Verizon would have more meaningful safeguards in place, but you’d apparently be wrong.

Keep in mind: the FCC tried to impose some fairly basic privacy rules for broadband and wireless in 2016, but the telecom industry, in perfect lockstep with Republicans, killed those efforts before they could take effect, claiming they’d be too harmful for the super competitive and innovative (read: not competitive or innovative at all) U.S. broadband industry.

[…]

Source: Verizon Once Again Busted Handing Out Sensitive Wireless Subscriber Information To Any Nitwit Who Asks For It | Techdirt

UK Police to be able to run AI face recognition searches on all driving licence holders

The police will be able to run facial recognition searches on a database containing images of Britain’s 50 million driving licence holders under a law change being quietly introduced by the government.

Should the police wish to put a name to an image collected on CCTV, or shared on social media, the legislation would provide them with the powers to search driving licence records for a match.

The move, contained in a single clause in a new criminal justice bill, could put every driver in the country in a permanent police lineup, according to privacy campaigners.

[…]

The intention to allow the police or the National Crime Agency (NCA) to exploit the UK’s driving licence records is not explicitly referenced in the bill or in its explanatory notes, raising criticism from leading academics that the government is “sneaking it under the radar”.

Once the criminal justice bill is enacted, the home secretary, James Cleverly, must establish “driver information regulations” to enable the searches, but he will need only to consult police bodies, according to the bill.

Critics claim facial recognition technology poses a threat to the rights of individuals to privacy, freedom of expression, non-discrimination and freedom of assembly and association.

Police are increasingly using live facial recognition, which compares a live camera feed of faces against a database of known identities, at major public events such as protests.
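The matching step itself is simple to sketch. Below is an illustrative comparison – not any force’s actual system – of a face embedding from a camera frame against a watchlist using cosine similarity; the embeddings are random stand-ins, and the threshold choice is exactly what drives the false-match rates critics point to:

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_match(probe, watchlist: dict, threshold: float = 0.6):
    """Return (name, score) for the closest watchlist face, or None.

    `probe` is assumed to be an embedding produced from a live camera
    frame by a face recognition model (the model itself is not shown).
    """
    name, emb = max(watchlist.items(), key=lambda kv: cosine(probe, kv[1]))
    score = cosine(probe, emb)
    return (name, score) if score >= threshold else None

rng = np.random.default_rng(0)
watchlist = {"person_a": rng.normal(size=128),
             "person_b": rng.normal(size=128)}
print(best_match(rng.normal(size=128), watchlist))  # random probe: None
```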

Prof Peter Fussey, a former independent reviewer of the Met’s use of facial recognition, said there was insufficient oversight of the use of facial recognition systems, with ministers worryingly silent over studies that showed the technology was prone to falsely identifying black and Asian faces.

[…]

The EU had considered making images on its member states’ driving licence records available on the Prüm crime fighting database. The proposal was dropped earlier this year as it was said to represent a disproportionate breach of privacy.

[…]

Carole McCartney, a professor of law and criminal justice at the University of Leicester, said the lack of consultation over the change in law raised questions over the legitimacy of the new powers.

She said: “This is another slide down the ‘slippery slope’ of allowing police access to whatever data they so choose – with little or no safeguards. Where is the public debate? How is this legitimate if the public don’t accept the use of the DVLA and passport databases in this way?”

The government scrapped the role of the commissioner for the retention and use of biometric material and the office of surveillance camera commissioner this summer, leaving ministers without an independent watchdog to scrutinise such legislative changes.

[…]

In 2020, the court of appeal ruled that South Wales police’s use of facial recognition technology had breached privacy rights, data protection laws and equality laws, given the risk the technology could have a race or gender bias.

The force has continued to use the technology. Live facial recognition is to be deployed this year to match people attending Christmas markets against a watchlist.

Katy Watts, a lawyer at the civil rights advocacy group Liberty, said: “This is a shortcut to widespread surveillance by the state and we should all be worried by it.”

Source: Police to be able to run face recognition searches on 50m driving licence holders | Facial recognition | The Guardian

Internet Architecture Board hits out at US, EU, UK client-side scanning (spying on everything on your phone and pc all the time) plans – to save (heard it before?) kids

[…]

Apple brought widespread attention to this so-called client-side scanning in August 2021 when it announced plans to examine photos on iPhones and iPads before they were synced to iCloud, as a safeguard against the distribution of child sexual abuse material (CSAM). Under that plan, if someone’s files were deemed to be CSAM, the user could lose their iCloud account and be reported to the cops.

As the name suggests, client-side scanning involves software on a phone or some other device automatically analyzing files for unlawful photos and other content, and then performing some action – such as flagging or removing the documents or reporting them to the authorities. At issue, primarily, is the loss of privacy from the identification process – how will that work with strong encryption, and do the files need to be shared with an outside service? Then there’s the reporting process – how accurate is it, is there any human intervention, and what happens if your gadget wrongly fingers you to the cops?
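Mechanically, designs like Apple’s abandoned one compute a perceptual hash of each image on the device and test it against a database of hashes of known illegal material. The toy average-hash below is a stand-in for a real model-based hash such as Apple’s NeuralHash, and the blocklist entry is invented:

```python
def average_hash(pixels: list) -> str:
    """Toy perceptual hash: one bit per pixel, set if the pixel is brighter
    than the image's mean, so visually similar images hash identically."""
    mean = sum(pixels) / len(pixels)
    return "".join("1" if p > mean else "0" for p in pixels)

# Database of hashes of known illegal images (invented placeholder).
BLOCKLIST = {average_hash([10, 200, 30, 180])}

def scan_before_upload(pixels: list) -> bool:
    """The 'client side' of client-side scanning: this check runs on the
    user's own device before the file ever leaves it."""
    return average_hash(pixels) in BLOCKLIST

print(scan_before_upload([10, 200, 30, 180]))   # True  -> would be flagged
print(scan_before_upload([120, 120, 120, 90]))  # False -> passes
```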

The iGiant’s plan was pilloried by advocacy organizations and by customers on technical and privacy grounds. Ultimately Apple abandoned the effort and went ahead with offering iCloud encryption – a level of privacy that prompted political pushback at other tech titans.

Proposals for client-side scanning … mandate unrestricted access to private content and therefore undermine end-to-end encryption and bear the risk to become a widespread facilitator of surveillance and censorship

Client-side scanning has since reappeared, this time on legislative agendas. And the IAB – a research committee for the Internet Engineering Task Force (IETF), a crucial group of techies who help keep the ’net glued together – thinks that’s a bad idea.

“A secure, resilient, and interoperable internet benefits the public interest and supports human rights to privacy and freedom of opinion and expression,” the IAB declared in a statement just before the weekend.

[…]

Specifically, the IAB cites Europe’s planned “Regulation laying down rules to prevent and combat child sexual abuse” (2022/0155(COD)), the UK Online Safety Act of 2023, and the US Earn-It Act, all of which contemplate regulatory regimes that have the potential to require the decryption of encrypted content in support of mandated surveillance.

The administrative body acknowledges the social harm done through the distribution of illegal content on the internet and the need to protect internet users. But it contends indiscriminate surveillance is not the answer.

The UK has already passed its Online Safety Act legislation, which authorizes telecom watchdog Ofcom to demand decryption of communications on grounds of child safety – though government officials have admitted that’s not technically feasible at the moment.

Europe is under fire for concealing the identities of those consulted on client-side scanning, and the US appears to be heading down a similar path.

For the IAB and IETF, client-side scanning initiatives echo other problematic technology proposals – including wiretaps, cryptographic backdoors, and pervasive monitoring.

“The IAB opposes technologies that foster surveillance as they weaken the user’s expectations of private communication which decreases the trust in the internet as the core communication platform of today’s society,” the organization wrote. “Mandatory client-side scanning creates a tool that is straightforward to abuse as a widespread facilitator of surveillance and censorship.”

[…]

Source: Internet Architecture Board hits out at client-side scanning • The Register

As soon as they take away privacy to save kids, you know they will expand the remit, as governments have always done. The fact is that mass surveillance is not particularly effective, even with AI, except in making people feel watched and thus altering their behaviour. This feeling of always being spied upon is much, much worse for whole generations of children than the tiny number of sexual predators that may actually be caught.

Google Will Stop Telling Law Enforcement Which Users Were Near a Crime, start saving location data on the mobile device instead of its servers. But not really, though. And why?

So most of the breathless reporting on Google’s “Updates to Location History and new controls coming soon to Maps” reads a bit like the excerpt below. However, Google itself, in “Manage your Location History,” says that if you have Location History on, it will also save your location to its servers. There is no mention of encryption.

Alphabet Inc.’s Google is changing its Maps tool so that the company no longer has access to users’ individual location histories, cutting off its ability to respond to law enforcement warrants that ask for data on everyone who was in the vicinity of a crime.

Google is changing its Location History feature on Google Maps, according to a blog post this week. The feature, which Google says is off by default, helps users remember where they’ve been. The company said Thursday that for users who have it enabled, location data will soon be saved directly on users’ devices, blocking Google from being able to see it, and, by extension, blocking law enforcement from being able to demand that information from Google.
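For context, a geofence warrant is essentially a query over a server-side location store: return every device seen inside a bounding box during a time window. A toy version, on invented data, looks like the sketch below; once histories live only on handsets, there is no server-side store for such a query to run against:

```python
from datetime import datetime

# Invented server-side store: (device_id, timestamp, lat, lon)
store = [
    ("device-A", datetime(2023, 11, 2, 14, 5), 37.7749, -122.4194),
    ("device-B", datetime(2023, 11, 2, 14, 9), 37.7751, -122.4189),
    ("device-C", datetime(2023, 11, 3, 9, 0),  37.8044, -122.2712),
]

def geofence(store, t0, t1, lat_lo, lat_hi, lon_lo, lon_hi):
    """Every device near the scene during the window: what a warrant demands."""
    return {dev for dev, ts, lat, lon in store
            if t0 <= ts <= t1
            and lat_lo <= lat <= lat_hi and lon_lo <= lon <= lon_hi}

print(geofence(store,
               datetime(2023, 11, 2, 14, 0), datetime(2023, 11, 2, 15, 0),
               37.77, 37.78, -122.42, -122.41))  # {'device-A', 'device-B'}
```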

“Your location information is personal,” said Marlo McGriff, director of product for Google Maps, in the blog post. “We’re committed to keeping it safe, private and in your control.”

The change comes three months after a Bloomberg Businessweek investigation that found police across the US were increasingly using warrants to obtain location and search data from Google, even for nonviolent cases, and even for people who had nothing to do with the crime.

“It’s well past time,” said Jennifer Lynch, the general counsel at the Electronic Frontier Foundation, a San Francisco-based nonprofit that defends digital civil liberties. “We’ve been calling on Google to make these changes for years, and I think it’s fantastic for Google users, because it means that they can take advantage of features like location history without having to fear that the police will get access to all of that data.”

Google said it would roll out the changes gradually through the next year on its own Android and Apple Inc.’s iOS mobile operating systems, and that users will receive a notification when the update comes to their account. The company won’t be able to respond to new geofence warrants once the update is complete, including for people who choose to save encrypted backups of their location data to the cloud.

“It’s a good win for privacy rights and sets an example,” said Jake Laperruque, deputy director of the security and surveillance project at the Center for Democracy & Technology. The move validates what litigators defending the privacy of location data have long argued in court: that just because a company might hold data as part of its business operations, that doesn’t mean users have agreed the company has a right to share it with a third party.

Lynch, the EFF lawyer, said that while Google deserves credit for the move, it’s long been the only tech company that the EFF and other civil-liberties groups have seen responding to geofence warrants. “It’s great that Google is doing this, but at the same time, nobody else has been storing and collecting data in the same way as Google,” she said. Apple, which also has an app for Maps, has said it’s technically unable to supply the sort of location data police want.

There’s still another kind of warrant that privacy advocates are concerned about: so-called reverse keyword search warrants, where police can ask a technology company to provide data on the people who have searched for a given term. “Search queries can be extremely sensitive, even if you’re just searching for an address,” Lynch said.

Source: Google Will Stop Telling Law Enforcement Which Users Were Near a Crime

The question is – why now? The market for location data is estimated at around $12 billion (source: There’s a Murky Multibillion-Dollar Market for Your Phone’s Location Data). If you look even a little, you see the government asking for this data all the time, and the fines issued for breaching location data privacy seem tiny compared to the money made by selling it.

Google will also be renaming Location History to Timeline – and will still be saving your location to its servers (see the heading “When Location History is on” below):

Manage your Location History

In the coming months, the Location History setting name will change to Timeline. If Location History is turned on for your account, you may find Timeline in your app and account settings.

Location History is a Google Account setting that creates Timeline, a personal map that helps you remember:

  • Places you go
  • Routes to destinations
  • Trips you take

It can also give you personalized experiences across Google based on where you go.

When Location History is on, even when Google apps aren’t in use, your precise device location is regularly saved to:

  • Your devices
  • Google servers

To make Google experiences helpful for everyone, we may use your data to:

  • Show information based on anonymized location data, such as:
    • Popular times
    • Environmental insights
  • Detect and prevent fraud and abuse.
  • Improve and develop Google services, such as ads products.
  • Help businesses determine if people visit their stores because of an ad, if you have Web & App Activity turned on.
    • We share only anonymous estimates, not personal data, with businesses.
    • This activity can include info about your location from your device’s general area and IP address.

Learn more about how Google uses location data.

Things to know about Location History:

  • Location History is off by default. We can only use it if you turn Location History on.
  • You can turn off Location History at any time in your Google Account’s Activity controls.
  • You can review and manage your Location History. You can:
    • Review places you’ve been in Google Maps Timeline.
    • Edit or delete your Location History anytime.

Important: Some of these steps work only on Android 8.0 and up. Learn how to check your Android version.

Turn Location History on or off

You can turn off Location History for your account at any time. If you use a work or school account, your administrator needs to make this setting available for you. If they do, you’ll be able to use Location History as any other user.

  1. Go to the “Location History” section of your Google Account.
  2. Choose whether your account or your devices can report Location History to Google.
    • Your account and all devices: At the top, turn Location History on or off.
    • Only a certain device: Under “This device” or “Devices on this account,” turn the device on or off.

When Location History is on

Google can estimate your location with:

  • Signals like Wi-Fi and mobile networks
  • GPS
  • Sensor information

Your device location may also periodically be used in the background. When Location History is on, even when Google apps aren’t in use, your device’s precise location is regularly saved to:

  • Your devices
  • Google servers

When you’re signed in with your Google Account, it saves the Location History of each device with the setting “Devices on this account” turned on. You can find this setting in the Location History settings on your Google Account.

You can choose which devices provide their location data to Location History. Your settings don’t change for other location services on your device.

When Location History is off

Your device doesn’t save its location to your Location History.

  • You may have previous Location History data in your account. You can manually delete it anytime.
  • Your settings don’t change for other location services on your device.
  • If settings like Web and App Activity are on but you turn off Location History or delete location data from Location History, your Google Account may still save location data as part of your use of other Google sites, apps, and services. This activity can include info about your location from your device’s general area and IP address.

Delete Location History

You can manage and delete your Location History information with Google Maps Timeline. You can choose to delete all of your history, or only parts of it.

Important: When you delete Location History information from Timeline, you won’t be able to see it again.

Automatically delete your Location History

You can choose to automatically delete Location History that’s older than 3 months, 18 months, or 36 months.

What happens after you delete some or all Location History

If you delete some or all of your Location History, personalized experiences across Google may degrade or be lost. For example, you may lose:

  • Recommendations based on places you visit
  • Real-time information about when best to leave for home or work to beat traffic

Important: If you have other settings like Web & App Activity turned on and you pause Location History or delete location data from Location History, you may still have location data saved in your Google Account as part of your use of other Google sites, apps, and services. For example, location data may be saved as part of activity on Search and Maps when your Web & App Activity setting is on, and included in your photos depending on your camera app settings. Web & App Activity can include info about your location from your device’s general area and IP address.

Learn about use & diagnostics for Location History

After you turn on Location History, your device may send diagnostic information to Google about what works or doesn’t work for Location History. Google processes any information it collects under Google’s privacy policy.

Learn more about other location settings

Source: Manage your Location History

US Law enforcement can obtain prescription records from pharmacy giants without a warrant

America’s eight largest pharmacy providers have shared customers’ prescription records with law enforcement when faced with subpoena requests, The Washington Post reported Tuesday. The news arrives amid patients’ growing privacy concerns in the wake of the Supreme Court’s 2022 overturning of Roe v. Wade.

The new look into the legal workarounds was first detailed in a letter sent by Sen. Ron Wyden (D-OR) and Reps. Pramila Jayapal (D-WA) and Sara Jacobs (D-CA) on December 11 to the secretary of the Department of Health and Human Services.

Pharmacies can hand over detailed, potentially compromising information due to legal fine print. Health Insurance Portability and Accountability Act (HIPAA) regulations restrict patient data sharing between “covered entities” like doctor offices, hospitals, and other medical facilities—but these guidelines are looser for pharmacies. And while search warrants require a judge’s approval to serve, subpoenas do not.

[…]

Given each company’s national network, patient records are often shared interstate between any pharmacy location. This could become legally fraught for medical history access within states that already have—or are working to enact—restrictive medical access laws. In an essay written for The Yale Law Journal last year, cited by WaPo, University of Connecticut associate law professor Carly Zubrzycki argued, “In the context of abortion—and other controversial forms of healthcare, like gender-affirming treatments—this means that cutting-edge legislative protections for medical records fall short.”

[…]

Source: Law enforcement can obtain prescription records from pharmacy giants without a warrant | Popular Science

Proposed US surveillance regime makes anyone with a modem a Big Brother spy. The choice is between full-on spying and full-on spying.

Under rules being considered, any telecom service provider or business with custodial access to telecom equipment – a hotel IT technician, an employee at a cafe with Wi-Fi, or a contractor responsible for installing a home broadband router – could be compelled to enable electronic surveillance. And this would apply not only to those involved in data transit and data storage.

This week, the US House of Representatives is expected to conduct a floor vote on two bills that reauthorize Section 702 of the Foreign Intelligence Surveillance Act (FISA), which is set to expire in 2024.

Section 702, as The Register noted last week, permits US authorities to intercept the electronic communications of people outside the US for foreign intelligence purposes – without a warrant – even if that communication involves US citizens and permanent residents.

As the Electronic Frontier Foundation argues, Section 702 has allowed the FBI to conduct invasive, warrantless searches of protesters, political donors, journalists, and even members of Congress.

More than a few people would therefore be perfectly happy if the law lapsed – on the other hand, law enforcement agencies insist they need Section 702 to safeguard national security.

The pending vote is expected to be conducted under “Queen-of-the-Hill Rules,” which in this instance might also be described as “Thunderdome” – two bills enter, one bill leaves, with the survivor advancing to the US Senate for consideration. The prospect that neither would be approved and Section 702 would lapse appears … unlikely.

The two bills are: HR 6570, the Protect Liberty and End Warrantless Surveillance Act; and HR 6611, the FISA Reform and Reauthorization Act of 2023 (FRRA).

The former reauthorizes Section 702, but with strong civil liberties and privacy provisions. The civil rights community has lined up to support it.

As for the latter, Elizabeth Goitein, co-director of the Liberty and National Security Program at legal think tank the Brennan Center for Justice, explained that the FRRA changes the definition of electronic communication service provider (ECSP) in a way that expands the range of businesses required to share data with the US.

“Going forward, it would not just be entities that have direct access to communications, like email and phone service providers, that could be required to turn over communications,” argues a paper prepared by the Brennan Center. “Any business that has access to ‘equipment’ on which communications are stored and transmitted would be fair game.”

According to Goitein, the bill’s sponsors have denied the language is intended to be interpreted so broadly.

A highly redacted FISA Court of Review opinion [PDF], released a few months ago, showed that the government has already pushed the bounds of the definition.

The court document discussed a petition to compel an unidentified entity to conduct surveillance. The petition was denied because the entity did not satisfy the definition of “electronic communication service provider,” and was instead deemed to be a provider of a product or service. That definition may change, it seems.

Goitein is not alone in her concern about the ECSP definition. She noted that a FISA Court amicus – the law firm ZwillGen – has taken the unusual step of speaking out against the expanded definition of an ECSP.

In an assessment published last week, ZwillGen attorneys Marc Zwillinger and Steve Lane raised concerns about the FRRA covering a broad set of businesses and their employees.

“By including any ‘service provider’ – rather than any ‘other communication service provider’ – that has access not just to communications, but also to the ‘equipment that is being or may be used to transmit or store … communications,’ the expanded definition would appear to cover datacenters, colocation providers, business landlords, shared workspaces, or even hotels where guests connect to the internet,” they explained. They added that the addition of the term “custodian” to the service provider definition makes it apply to any third party providing equipment, storage – or even cleaning services.

The Brennan Center paper also raised other concerns – like the exemption for members of Congress from such surveillance. The FRRA bill requires the FBI to get permission from a member of Congress when it wants to conduct a query of their communications. No such courtesy is afforded to the people these members of Congress represent.

Goitein urged Americans to contact their representative and ask for a “no” vote on the FRRA and a “yes” on HR 6570, the Protect Liberty and End Warrantless Surveillance Act.

Source: Proposed US surveillance regime would enlist more businesses • The Register

Bad genes: 23andMe leak highlights a possible future of genetic discrimination

23andMe is a terrific concept. In essence, the company takes a sample of your DNA and tells you about your genetic makeup. For some of us, this is the only way to learn about our heritage. Spotty records, diaspora, mistaken family lore and slavery can make tracing one’s roots incredibly difficult by traditional methods.

What 23andMe does is wonderful because your DNA is fixed. Your genes tell a story that supersedes any rumors that you come from a particular country or are descended from so-and-so.

[…]

You can replace your Social Security number, albeit with some hassle, if it is ever compromised. You can cancel your credit card with the click of a button if it is stolen. But your DNA cannot be returned for a new set — you just have what you are given. If bad actors steal or sell your genetic information, there is nothing you can do about it.

This is why 23andMe’s Oct. 6 data leak, although it reads like science fiction, is not an omen of some dark future. It is, rather, an emblem of our dangerous present.

23andMe has a very simple interface with some interesting features. “DNA Relatives” matches you with other members to whom you are related. This could be an effective, thoroughly modern way to connect with long-lost family, or to learn more about your origins.

But the Oct. 6 leak perverted this feature into something alarming. By gaining access to individual accounts through weak and recycled passwords, hackers were able to create an extensive list of people with Ashkenazi heritage. This list was then posted on forums with the names, sex and likely heritage of each member under the title “Ashkenazi DNA Data of Celebrities.”

First and foremost, collecting lists of people based on their ethnic backgrounds is a personal violation with tremendously insidious undertones. If you saw yourself and your extended family on such a list, you would not take it lightly.

[…]

I find it troubling because, in 2018, Time reported that 23andMe had sold a $300 million stake in its business to GlaxoSmithKline, allowing the pharmaceutical giant to use users’ genetic data to develop new drugs. So because you wanted to know if your grandmother was telling the truth about your roots, you spat into a cup and paid 23andMe to give your DNA to a drug company to do with it as they please.

Although 23andMe is in the crosshairs of this particular leak, there are many companies in murky waters. Last year, Consumer Reports found that 23andMe and its competitors had decent privacy policies where DNA was involved, but that these businesses “over-collect personal information about you and overshare some of your data with third parties…CR’s privacy experts say it’s unclear why collecting and then sharing much of this data is necessary to provide you the services they offer.”

[…]

As it stands, your DNA can be weaponized against you by law enforcement, insurance companies, and big pharma. But this will not be limited to you. Your DNA belongs to your whole family.

Pretend that you are going up against one other candidate for a senior role at a giant corporation. If one of these genealogy companies determines that you are at an outsized risk for a debilitating disease like Parkinson’s and your rival is not, do you think that this corporation won’t take that into account?

[…]

Insurance companies are not in the business of losing money either. If they gain access to something like that on your record, you can trust that they will use it to blackball you or jack up your rates.

In short, the world risks becoming like that of the film Gattaca, where the genetic elite enjoy access while those deemed genetically inferior are marginalized.

The train has left the station for a lot of these issues. The list of people from the 23andMe leak is already out there; that genie cannot be put back in the bottle. If your DNA is on a server for one of these companies, there is a chance that it has already been used as a reference or to help pharmaceutical companies.

[…]

There are things you can do now to avoid further damage. The next time a company asks for something like your phone number or SSN, press them as to why they need it. Make it inconvenient for them to mine you for your Personally Identifiable Information (PII). Your PII has concrete value to these places, and they count on people to be passive, to hand it over without any fuss.

[…]

The time to start worrying about this problem was 20 years ago, but we can still effect positive change today. This 23andMe leak is only the beginning; we must do everything possible to protect our identities and DNA while they still belong to us.

Source: Bad genes: 23andMe leak highlights a possible future of genetic discrimination | The Hill

Scientific American has been warning about this since at least 2013. What have we done? Nothing:

If there’s a gene for hubris, the 23andMe crew has certainly got it. Last Friday the U.S. Food and Drug Administration (FDA) ordered the genetic-testing company immediately to stop selling its flagship product, its $99 “Personal Genome Service” kit. In response, the company cooed that its “relationship with the FDA is extremely important to us” and continued hawking its wares as if nothing had happened. Although the agency is right to sound a warning about 23andMe, it’s doing so for the wrong reasons.

Since late 2007, 23andMe has been known for offering cut-rate genetic testing. Spit in a vial, send it in, and the company will look at thousands of regions in your DNA that are known to vary from human to human—and which are responsible for some of our traits

[…]

Everything seemed rosy until, in what a veteran Forbes reporter calls “the single dumbest regulatory strategy [he had] seen in 13 years of covering the Food and Drug Administration,” 23andMe changed its strategy. It apparently blew through its FDA deadlines, effectively annulling the clearance process, and abruptly cut off contact with the agency in May. Adding insult to injury the company started an aggressive advertising campaign (“Know more about your health!”)

[…]

But as the FDA frets about the accuracy of 23andMe’s tests, it is missing their true function, and consequently the agency has no clue about the real dangers they pose. The Personal Genome Service isn’t primarily intended to be a medical device. It is a mechanism meant to be a front end for a massive information-gathering operation against an unwitting public.

Sound paranoid? Consider the case of Google. (One of the founders of 23andMe, Anne Wojcicki, is presently married to Sergey Brin, a co-founder of Google.) When it first launched, Google billed itself as a faithful servant of the consumer, a company devoted only to building the best tool to help us satisfy our cravings for information on the web. And Google’s search engine did just that. But as we now know, the fundamental purpose of the company wasn’t to help us search, but to hoard information. Every search query entered into its computers is stored indefinitely. Joined with information gleaned from cookies that Google plants in our browsers, along with personally identifiable data that dribbles from our computer hardware and from our networks, and with the amazing volumes of information that we always seem willing to share with perfect strangers—even corporate ones—that data store has become Google’s real asset.

[…]

23andMe reserves the right to use your personal information—including your genome—to inform you about events and to try to sell you products and services. There is a much more lucrative market waiting in the wings, too. One could easily imagine how insurance companies and pharmaceutical firms might be interested in getting their hands on your genetic information, the better to sell you products (or deny them to you).

[…]

Even though 23andMe currently asks permission to use your genetic information for scientific research, the company has explicitly stated that its database-sifting scientific work “does not constitute research on human subjects,” meaning that it is not subject to the rules and regulations that are supposed to protect experimental subjects’ privacy and welfare.

Those of us who have not volunteered to be a part of the grand experiment have even less protection. Even if 23andMe keeps your genome confidential against hackers, corporate takeovers, and the temptations of filthy lucre forever and ever, there is plenty of evidence that there is no such thing as an “anonymous” genome anymore. It is possible to use the internet to identify the owner of a snippet of genetic information and it is getting easier day by day.

This becomes a particularly acute problem once you realize that every one of your relatives who spits in a 23andMe vial is giving the company a not-inconsiderable bit of your own genetic information along with their own. If you have several close relatives who are already in 23andMe’s database, the company already essentially has all that it needs to know about you.

[…]

Source: 23andMe Is Terrifying, but Not for the Reasons the FDA Thinks

Governments, Apple, Google spying on users through push notifications – notifications all travel through Apple and Google servers (unencrypted?)!

In a letter to the Department of Justice, Senator Ron Wyden said foreign officials were demanding the data from Alphabet’s (GOOGL.O) Google and Apple (AAPL.O). Although details were sparse, the letter lays out yet another path by which governments can track smartphones.

Apps of all kinds rely on push notifications to alert smartphone users to incoming messages, breaking news, and other updates. These are the audible “dings” or visual indicators users get when they receive an email or their sports team wins a game. What users often do not realize is that almost all such notifications travel over Google and Apple’s servers.

That gives the two companies unique insight into the traffic flowing from those apps to their users, and in turn puts them “in a unique position to facilitate government surveillance of how users are using particular apps,” Wyden said. He asked the Department of Justice to “repeal or modify any policies” that hindered public discussions of push notification spying.

In a statement, Apple said that Wyden’s letter gave them the opening they needed to share more details with the public about how governments monitored push notifications.

“In this case, the federal government prohibited us from sharing any information,” the company said in a statement. “Now that this method has become public we are updating our transparency reporting to detail these kinds of requests.”

Google said that it shared Wyden’s “commitment to keeping users informed about these requests.”

The Department of Justice did not return messages seeking comment on the push notification surveillance or whether it had prevented Apple or Google from talking about it.

Wyden’s letter cited a “tip” as the source of the information about the surveillance. His staff did not elaborate on the tip, but a source familiar with the matter confirmed that both foreign and U.S. government agencies have been asking Apple and Google for metadata related to push notifications to, for example, help tie anonymous users of messaging apps to specific Apple or Google accounts.

The source declined to identify the foreign governments involved in making the requests but described them as democracies allied to the United States.

The source said they did not know how long such information had been gathered in that way.

Most users give push notifications little thought, but they have occasionally attracted attention from technologists because of the difficulty of deploying them without sending data to Google or Apple.

Earlier this year French developer David Libeau said users and developers were often unaware of how their apps emitted data to the U.S. tech giants via push notifications, calling them “a privacy nightmare.”

Source: Governments spying on Apple, Google users through push notifications – US senator | Reuters

Alternative browsers about to die? Firefox may soon be delisted from the US govt support matrix :'(

A somewhat obscure guideline for developers of U.S. government websites may be about to accelerate the long, sad decline of Mozilla’s Firefox browser. There already are plenty of large entities, both public and private, whose websites lack proper support for Firefox; and that will get only worse in the near future, because the ’fox’s auburn paws are perilously close to the lip of the proverbial slippery slope.

The U.S. Web Design System (USWDS) provides a comprehensive set of standards which guide those who build the U.S. government’s many websites. Its documentation for developers borrows a “2% rule” from its British counterpart:

. . . we officially support any browser above 2% usage as observed by analytics.usa.gov.

At this writing, that analytics page shows the following browser traffic for the previous ninety days:

Browser             Share
Chrome              49%
Safari              34.8%
Edge                8.4%
Firefox             2.2%
Safari (in-app)     1.9%
Samsung Internet    1.6%
Android Webview     1%
Other               1%
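
To see how that 2% rule bites, here is a trivial sketch in Python with those shares hard-coded (the dictionary is my own illustration, not anything USWDS publishes):

```python
# Which browsers clear the USWDS "2% rule", using the analytics.usa.gov
# shares quoted in the table above (hard-coded here for illustration).
shares = {
    "Chrome": 49.0, "Safari": 34.8, "Edge": 8.4, "Firefox": 2.2,
    "Safari (in-app)": 1.9, "Samsung Internet": 1.6,
    "Android Webview": 1.0, "Other": 1.0,
}
supported = [name for name, pct in shares.items() if pct > 2.0]
print(supported)  # ['Chrome', 'Safari', 'Edge', 'Firefox']
```

Firefox clears the bar by barely 0.2 percentage points; a small further slide would drop it off the supported list entirely.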

I am personally unaware of any serious reason to believe that Firefox’s numbers will improve soon. Indeed, for the web as a whole, they’ve been declining consistently for years, as this chart shows:

[Chart: Chrome vs. Firefox vs. Safari browser share, January 2009 through November 2023. Image: StatCounter.]

Firefox peaked at 31.82% in November, 2009 — and then began its long slide in almost direct proportion to the rise of Chrome. The latter shot from 1.37% use in January, 2009, to its own peak of 66.34% in September, 2020, since falling back to a “measly” 62.85% in the very latest data.

While these numbers reflect worldwide trends, the U.S.-specific picture isn’t really better. In fact, because the iPhone is so popular in the U.S. — which is obvious from what you see on that aforementioned government analytics page — Safari pulls large numbers that also hurt Firefox.

[…]

Firefox is quickly losing “web space,” thanks to a perfect storm that’s been kicked up by the dominance of Chrome, the popularity of mobile devices that run Safari by default, and many corporate and government IT shops’ insistence that their users rely on only Microsoft’s Chromium-based Edge browser while toiling away each day.

With such a continuing free-fall, Firefox is inevitably nearing the point where USWDS will remove it, like Internet Explorer before it, from the list of supported browsers.

[…]

Source: Firefox on the brink? The Big Three may effectively be down to a Big Two, and right quick.

Competition is important, especially in the world of browsers, which are our window onto most of the internet. Allowing one browser to rule them all leads to some very strange and nasty stuff. Not only does the dominant browser stop following W3C standards (as IE didn’t and Chrome doesn’t), it starts taking extreme liberties with your privacy (a “privacy sandbox” that allows any site to query all your habits!), picks on certain websites and even edits what you see, sends your passwords and other personal data to third-party sites, shares your motion data, refuses to delete private data on you, etc etc etc

Firefox is a very good browser with some awesome addons – and it is not beholden to the Google, Microsoft or Apple overlords. And it’s the only private option offering you a real choice outside of Chromium’s reach.

Automakers’ data privacy practices “are unacceptable,” says US senator

US Senator Edward Markey (D-Mass.) is one of the more technologically engaged of our elected lawmakers. And like many technologically engaged Ars Technica readers, he does not like what he sees in terms of automakers’ approach to data privacy. On Friday, Sen. Markey wrote to 14 car companies with a variety of questions about data privacy policies, urging them to do better.

As Ars reported in September, the Mozilla Foundation published a scathing report on the subject of data privacy and automakers. The problems were widespread—most automakers collect too much personal data and are too eager to sell or share it with third parties, the foundation found.

Markey noted the Mozilla Foundation report in his letters, which were sent to BMW, Ford, General Motors, Honda, Hyundai, Kia, Mazda, Mercedes-Benz, Nissan, Stellantis, Subaru, Tesla, Toyota, and Volkswagen. The senator is concerned about the large amounts of data that modern cars can collect, including the troubling potential to use biometric data (like the rate a driver blinks and breathes, as well as their pulse) to infer mood or mental health.

Sen. Markey is also worried about automakers’ use of Bluetooth, which he said has expanded “their surveillance to include information that has nothing to do with a vehicle’s operation, such as data from smartphones that are wirelessly connected to the vehicle.”

“These practices are unacceptable,” Markey wrote. “Although certain data collection and sharing practices may have real benefits, consumers should not be subject to a massive data collection apparatus, with any disclosures hidden in pages-long privacy policies filled with legalese. Cars should not—and cannot—become yet another venue where privacy takes a backseat.”

The 14 automakers have until December 21 to answer the following questions:

  • Does your company collect user data from its vehicles, including but not limited to the actions, behaviors, or personal information of any owner or user?
    • If so, please describe how your company uses data about owners and users collected from its vehicles. Please distinguish between data collected from users of your vehicles and data collected from those who sign up for additional services.
    • Please identify every source of data collection in your new model vehicles, including each type of sensor, interface, or point of collection from the individual and the purpose of that data collection.
    • Does your company collect more information than is needed to operate the vehicle and the services to which the individual consents?
    • Does your company collect information from passengers or people outside the vehicle? If so, what information and for what purposes?
    • Does your company sell, transfer, share, or otherwise derive commercial benefit from data collected from its vehicles to third parties? If so, how much did third parties pay your company in 2022 for that data?
    • Once your company collects this user data, does it perform any categorization or standardization procedures to group the data and make it readily accessible for third-party use?
    • Does your company use this user data, or data on the user acquired from other sources, to create user profiles of any sort?
    • How does your company store and transmit different types of data collected on the vehicle? Do your company’s vehicles include a cellular connection or Wi-Fi capabilities for transmitting data from the vehicle?
  • Does your company provide notice to vehicle owners or users of its data practices?
  • Does your company provide owners or users an opportunity to exercise consent with respect to data collection in its vehicles?
    • If so, please describe the process by which a user is able to exercise consent with respect to such data collection. If not, why not?
    • If users are provided with an opportunity to exercise consent to your company’s services, what percentage of users do so?
    • Do users lose any vehicle functionality by opting out of or refusing to opt in to data collection? If so, does the user lose access only to features that strictly require such data collection, or does your company disable features that could otherwise operate without that data collection?
  • Can all users, regardless of where they reside, request the deletion of their data? If so, please describe the process through which a user may delete their data. If not, why not?
  • Does your company take steps to anonymize user data when it is used for its own purposes, shared with service providers, or shared with non-service provider third parties? If so, please describe your company’s process for anonymizing user data, including any contractual restrictions on re-identification that your company imposes.
  • Does your company have any privacy standards or contractual restrictions for the third-party software it integrates into its vehicles, such as infotainment apps or operating systems? If so, please provide them. If not, why not?
  • Please describe your company’s security practices, data minimization procedures, and standards in the storage of user data.
    • Has your company suffered a leak, breach, or hack within the last ten years in which user data was compromised?
    • If so, please detail the event(s), including the nature of your company’s system that was exploited, the type and volume of data affected, and whether and how your company notified its impacted users.
    • Is all the personal data stored on your company’s vehicles encrypted? If not, what personal data is left open and unprotected? What steps can consumers take to limit this open storage of their personal information on their cars?
  • Has your company ever provided to law enforcement personal information collected by a vehicle?
    • If so, please identify the number and types of requests that law enforcement agencies have submitted and the number of times your company has complied with those requests.
    • Does your company provide that information only in response to a subpoena, warrant, or court order? If not, why not?
  • Does your company notify the vehicle owner when it complies with a request?

Source: Automakers’ data privacy practices “are unacceptable,” says US senator | Ars Technica

The UK tries, once again, to age-gate pornography, keep a list of porn watchers

UK telecoms regulator Ofcom has laid out how porn sites could verify users’ ages under the newly passed Online Safety Act. Although the law gives sites the choice of how they keep out underage users, the regulator is publishing a list of measures they’ll be able to use to comply. These include having a bank or mobile network confirm that a user is at least 18 years old (with that user’s consent) or asking a user to supply valid details for a credit card that’s only available to people who are 18 and older. The regulator is consulting on these guidelines starting today and hopes to finalize its official guidance in roughly a year’s time.

The measures have the potential to be contentious and come a little over four years after the UK government scrapped its last attempt to mandate age verification for pornography. Critics raised numerous privacy and technical concerns with the previous approach, and the plans were eventually shelved with the hope that the Online Safety Act (then emerging as the Online Harms White Paper) would offer a better way forward. Now we’re going to see if that’s true, or if the British government was just kicking the can down the road.

[…]

Ofcom lists six age verification methods in today’s draft guidelines. As well as turning to banks, mobile networks, and credit cards, other suggested measures include asking users to upload photo ID like a driver’s license or passport, or for sites to use “facial age estimation” technology to analyze a person’s face to determine that they’ve turned 18. Simply asking a site visitor to declare that they’re an adult won’t be considered strict enough.

Once the duties come into force, pornography sites will be able to choose from Ofcom’s approaches or implement their own age verification measures so long as they’re deemed to hit the “highly effective” bar demanded by the Online Safety Act. The regulator will work with larger sites directly and keep tabs on smaller sites by listening to complaints, monitoring media coverage, and working with frontline services. Noncompliance with the Online Safety Act can be punished with fines of up to £18 million (around $22.7 million) or 10 percent of global revenue (whichever is higher).

[…]

“It is very concerning that Ofcom is solely relying upon data protection laws and the ICO to ensure that privacy will be protected,” ORG program manager Abigail Burke said in a statement. “The Data Protection and Digital Information Bill, which is progressing through parliament, will seriously weaken our current data protection laws, which are in any case insufficient for a scheme this intrusive.”

“Age verification technologies for pornography risk sensitive personal data being breached, collected, shared, or sold. The potential consequences of data being leaked are catastrophic and could include blackmail, fraud, relationship damage, and the outing of people’s sexual preferences in very vulnerable circumstances,” Burke said, and called for Ofcom to set out clearer standards for protecting user data.

There’s also the risk that any age verification implemented will end up being bypassed by anyone with access to a VPN.

[…]

Source: The UK tries, once again, to age-gate pornography – The Verge

1. Age verification doesn’t work

2. Age verification doesn’t work

3. Age verification doesn’t work

4. Really, having to register as a porn watcher and then have your name in a leaky database?!

FBI Director Admits Agency Rarely Has Probable Cause When It Performs Backdoor Searches Of NSA Collections

After years of continuous, unrepentant abuse of surveillance powers, the FBI is facing the real possibility of seeing Section 702 curtailed, if not scuttled entirely.

Section 702 allows the NSA to gather foreign communications in bulk. The FBI benefits from this collection by being allowed to perform “backdoor” searches of NSA collections to obtain communications originating from US citizens and residents.

There are rules to follow, of course. But the FBI has shown little interest in adhering to these rules, just as much as the NSA has shown little interest in curtailing the amount of US persons’ communications “incidentally” collected by its dragnet.

[…]

Somehow, the FBI director managed to blurt out what everyone was already thinking: that the FBI needs this backdoor access because it almost never has the probable cause to support the search warrant normally needed to access the content of US persons’ communications.

“A warrant requirement would amount to a de facto ban, because query applications either would not meet the legal standard to win court approval; or because, when the standard could be met, it would be so only after the expenditure of scarce resources, the submission and review of a lengthy legal filing, and the passage of significant time — which, in the world of rapidly evolving threats, the government often does not have,” Wray said.

Holy shit. He just flat-out admitted it: a majority of FBI searches of US persons’ communications via Section 702 are unsupported by probable cause.

[…]

Unfortunately, both the FBI and the current administration are united in their desire to keep this executive authority intact. Both Wray and the Biden administration call the warrant requirement a “red line.” So, even if the House decides it needs to go (for mostly political reasons) and/or Wyden’s reform bill lands on the President’s desk, odds are the FBI will get its wish: warrantless access to domestic communications for the foreseeable future.

Source: FBI Director Admits Agency Rarely Has Probable Cause When It Performs Backdoor Searches Of NSA Collections | Techdirt

US government pays AT&T to let cops search phone records without warrant

A senator has alleged that American law enforcement agencies snoop on US citizens and residents, seemingly without regard for the privacy provisions of the Fourth Amendment, under a secret program called the Hemisphere Project that allows police to conduct searches of trillions of phone records.

According to Senator Ron Wyden (D-OR), these searches “usually” happen without warrants. And after more than a decade of keeping people — lawmakers included — in the dark about Hemisphere, Wyden wants the Justice Department to reveal information about what he called a “long-running dragnet surveillance program.”

“I have serious concerns about the legality of this surveillance program, and the materials provided by the DoJ contain troubling information that would justifiably outrage many Americans and other members of Congress,” Wyden wrote in a letter [PDF] to US Attorney General Merrick Garland.

Under Hemisphere, the White House Office of National Drug Control Policy (ONDCP) pays telco AT&T to provide all federal, state, local, and tribal law enforcement agencies with the ability to request searches of trillions of domestic phone records dating back to at least 1987, plus the four billion call records added every day.

[…]

Hemisphere first came to light in a 2013 New York Times report that alleged the “scale and longevity of the data storage appears to be unmatched by other government programs, including the NSA’s gathering of phone call logs under the Patriot Act.”

It’s not classified, but that doesn’t mean the Feds want you to see it

Privacy advocates including the Electronic Frontier Foundation have filed Freedom of Information Act and state-level public records lawsuits to learn more about the secret snooping program.

Few have made a dent: it appears that the Feds are doing everything they can to keep Hemisphere secret.

Although the program and its documents are not classified, the Justice Department has marked them as “Law Enforcement Sensitive,” meaning their disclosure could hurt ongoing investigations. This designation also prevents the documents from being publicly released.

Senator Wyden wants the designation removed.

Additionally, Hemisphere is not subject to a federal Privacy Impact Assessment due to its funding structure, it’s claimed. The White House doesn’t directly pay AT&T – instead the ONDCP provides a grant to the Houston High Intensity Drug Trafficking Area, which is a partnership between federal, state, and local law enforcement agencies. And this partnership, in turn, pays AT&T to operate this surveillance scheme.

[…]

Source: US government pays AT&T to let cops search phone records • The Register

The Oura Ring Is a $300 Sleep Tracker That Suddenly Needs a Subscription

[…] Now in its third iteration, the Oura Ring tracks and analyzes a host of metrics, including your heart-rate variability (HRV), blood oxygen rate, body temperature, and sleep duration. It uses this data to give you three daily scores, tallying the quality of your sleep, activity, and “readiness.” It can also determine your chronotype (your body’s natural preferences for sleep or wakefulness), give insight into hormonal factors that can affect your sleep, and (theoretically) alert you when you’re getting sick.

I wore the Oura Ring for six months; it gave me tons of data about myself and helped me pinpoint areas in my sleep and health that I could improve. It’s also more comfortable and discreet to wear than most wristband wearable trackers.

However, the ring costs about $300 or more, depending on the style and finish, and Oura’s app now requires a roughly $72 yearly subscription to access most of the data and reports.

(Oura recently announced that the cost of the ring is eligible for reimbursement through a flexible spending account [FSA] or health spending account [HSA]. The subscription is not.)

If you just want to track your sleep cycles and get tips, a free (or modestly priced) sleep-tracking app may do the trick.

[…]

Source: The Oura Ring Is a $300 Sleep Tracker That Provides Tons of Data. But Is It Worth It? | Reviews by Wirecutter

So what do you get with the membership?

  • In-depth sleep analysis, every morning
  • Personalized health insights, 24/7
  • Live & accurate heart rate monitoring
  • Body temperature readings for early illness detection and period prediction (in beta)
  • Workout Heart Rate Tracking
  • SpO2 Monitoring
  • Rest Mode
  • Bedtime Guidance
  • Track More Movement
  • Restorative Time
  • Trends Over Time
  • Tags
  • Insights from Audio Sessions

And what if you want to continue for free?

Non-paying members have access to 3 simple daily scores: Sleep, Readiness, and Activity, as well as our interactive and educational Explore content.

Source: More power to you with Oura Membership.

This is a pretty stunning turn of events:

one, because it was supposed to be the privacy-friendly option: so what data are they sending to central servers, and why? (That’s the only way they can justify a subscription.) And

two, why is data that doesn’t need to be sent to the servers not being shown in the free version of the app?!

For the price of the ring this is a pretty shameless money grab.

The EU Commission’s Alleged CSAM Regulation ‘Experts’, who give it free rein to spy on everyone, can’t be found. OK then.

Everyone who wants client-side scanning to be a thing insists it’s a good idea with no potential downsides. The only hangup, they insist, is tech companies’ unwillingness to implement it. And by “implement,” I mean — in far too many cases — introducing deliberate (and exploitable!) weaknesses in end-to-end encryption.

End-to-end encryption only works if both ends are encrypted. Taking the encryption off one side to engage in content scanning makes it half of what it was. And if you get in the business of scanning users’ content for supposed child sexual abuse material (CSAM), governments may start asking you to “scan” for other stuff… like infringing content, terrorist stuff, people talking about crimes, stuff that contradicts the government’s narratives, things political rivals are saying. The list goes on and on.

Multiple experts have pointed out how the anti-CSAM efforts preferred by the EU would not only not work, but also subject millions of innocent people to the whims of malicious hackers and malicious governments. Governments also made these same points, finally forcing the EU Commission to back down on its attempt to undermine encryption, if not (practically) outlaw it entirely.

The Commission has always claimed its anti-encryption, pro-client-side scanning stance is backed by sound advice given to it by the experts it has consulted. But when asked who was consulted, the EU Commission has refused to answer the question. This is from the Irish Council of Civil Liberties (ICCL), which asked the Commission a simple question, but — like the Superintendent Chalmers referenced in the headline — was summarily rejected.

In response to a request for documents pertaining to the decision-making behind the proposed CSAM regulation, the European Commission failed to disclose a list of companies who were consulted about the technical feasibility of detecting CSAM without undermining encryption. This list “clearly fell within the scope” of the Irish Council for Civil Liberties’ request. 

If you’re not familiar with the reference, we’ll get you up to speed.

22 Short Films About Springfield is an episode of “The Simpsons” that originally aired in 1996. One particular “film” has become an internet meme legend: the one dealing with Principal Seymour Skinner’s attempt to impress his boss (Superintendent Chalmers) with a home-cooked meal.

One thing leads to another (and by one thing to another, I mean a fire in the kitchen as Skinner attempts to portray fast-food burgers as “steamed hams” and not the “steamed clams” promised earlier). That culminates in this spectacular cover-up by Principal Skinner when the superintendent asks about the extremely apparent fire occurring in the kitchen:

Principal Skinner: Oh well, that was wonderful. A good time was had by all. I’m pooped.

Chalmers: Yes. I should be– Good Lord! What is happening in there?

Principal Skinner: Aurora borealis.

Chalmers: Uh- Aurora borealis. At this time of year, at this time of day, in this part of the country, localized entirely within your kitchen?

Principal Skinner: Yes.

Chalmers [meekly]: May I see it?

Principal Skinner: No.

That is what happened here. Everyone opposing the EU Commission’s CSAM (i.e., “chat control”) efforts trotted out their experts, making it clearly apparent who was saying what and what their relevant expertise was. The EU insisted it had its own battery of experts. The ICCL said: “May we see them?”

The EU Commission: No.

Not good enough, said the ICCL. But that’s what a rights advocate would be expected to say. What’s less expected is the EU Commission’s ombudsman declaring the ICCL had the right to see this particularly specific aurora borealis.

After the Commission acknowledged to the EU Ombudsman that it, in fact, had such a list, but failed to disclose its existence to Dr Kris Shrishak, the Ombudsman held the Commission’s behaviour constituted “maladministration”.  

The Ombudsman held: “[t]he Commission did not identify the list of experts as falling within the scope of the complainant’s request. This means that the complainant did not have the opportunity to challenge (the reasons for) the institution’s refusal to disclose the document. This constitutes maladministration.” 

As the report further notes, the only existing documentation of this supposed consultation with experts has been reduced to a single self-serving document issued by the EU Commission. Any objections or interjections were added/subtracted as preferred by the EU Commission before presenting a “final” version that served its preferences. Any supporting documentation, including comments from participating stakeholders, were sent to the digital shredder.

As concerns the EUIF meetings, the Commission representatives explained that three online technical workshops took place in 2020. During the first workshop, academics, experts and companies were invited to share their perspectives on the matter as well as any documents that could be valuable for the discussion. After this workshop, a first draft of the ‘outcome document’ was produced, which summarises the input given orally by the participants and references a number of relevant documents. This first draft was shared with the participants via an online file sharing service and some participants provided written comments. Other participants commented orally on the first draft during the second workshop. Those contributions were then added to the final version of the ‘outcome document’ that was presented during the third and final workshop for the participants’ endorsement. This ‘outcome document’ is the only document that was produced in relation to the substance of these workshops. It was subsequently shared with the EUIF. One year later, it was used as supporting information to the impact assessment report.

In other words, the EU took what it liked and included it. The rest of it disappeared from the permanent record, supposedly because the EU Commission routinely purges any email communications more than two years old. This is obviously ridiculous in this context, considering this particular piece of legislation has been under discussion for far longer than that.

But, in the end, the EU Commission wins because it’s the larger bureaucracy. The ombudsman refused to issue a recommendation. Instead, it instructs the Commission to treat the ICCL’s request as “new” and perform another search for documents. “Swiftly.” Great, as far as that goes. But it doesn’t go far. The ombudsman also says it believes the EU Commission when it says only its version of the EUIF report survived the periodic document cull.

In the end, all that survives is this: the EU consulted with affected entities. It asked them to comment on the proposal. It folded those comments into its presentation. It likely presented only comments that supported its efforts. Dissenting opinions were auto-culled by EU Commission email protocols. It never sought further input, despite having passed the two-year mark without having converted the proposal into law. All that’s left, the ombudsman says, is likely a one-sided version of the Commission’s proposal. And if the ICCL doesn’t like it, well… it will have to find some other way to argue with the “experts” the Commission either ignored or auto-deleted. The government wins, even without winning arguments. Go figure.

Source: Steamed Hams, Except It’s The EU Commission’s Alleged CSAM Regulation ‘Experts’ | Techdirt

Decoupling for IT Security (=privacy)

Whether we like it or not, we all use the cloud to communicate and to store and process our data. We use dozens of cloud services, sometimes indirectly and unwittingly. We do so because the cloud brings real benefits to individuals and organizations alike. We can access our data across multiple devices, communicate with anyone from anywhere, and command a remote data center’s worth of power from a handheld device.

But using the cloud means our security and privacy now depend on cloud providers. Remember: the cloud is just another way of saying “someone else’s computer.” Cloud providers are single points of failure and prime targets for hackers to scoop up everything from proprietary corporate communications to our personal photo albums and financial documents.

The risks we face from the cloud today are not an accident. For Google to show you your work emails, it has to store many copies across many servers. Even if they’re stored in encrypted form, Google must decrypt them to display your inbox on a webpage. When Zoom coordinates a call, its servers receive and then retransmit the video and audio of all the participants, learning who’s talking and what’s said. For Apple to analyze and share your photo album, it must be able to access your photos.

Hacks of cloud services happen so often that it’s hard to keep up. Breaches can be so large as to affect nearly every person in the country, as in the Equifax breach of 2017, or a large fraction of the Fortune 500 and the U.S. government, as in the SolarWinds breach of 2019-20.

It’s not just attackers we have to worry about. Some companies use their access—benefiting from weak laws, complex software, and lax oversight—to mine and sell our data.

[…]

The less someone knows, the less they can put you and your data at risk. In security this is called Least Privilege. The decoupling principle applies that idea to cloud services by making sure systems know as little as possible while doing their jobs. It states that we gain security and privacy by separating private data that today is unnecessarily concentrated.

To unpack that a bit, consider the three primary modes for working with our data as we use cloud services: data in motion, data at rest, and data in use. We should decouple them all.

Our data is in motion as we exchange traffic with cloud services such as videoconferencing servers, remote file-storage systems, and other content-delivery networks. Our data at rest, while sometimes on individual devices, is usually stored or backed up in the cloud, governed by cloud provider services and policies. And many services use the cloud to do extensive processing on our data, sometimes without our consent or knowledge. Most services involve more than one of these modes.

[…]

Cryptographer David Chaum first applied the decoupling approach in security protocols for anonymity and digital cash in the 1980s, long before the advent of online banking or cryptocurrencies. Chaum asked: how can a bank or a network service provider provide a service to its users without spying on them while doing so?

Chaum’s ideas included sending Internet traffic through multiple servers run by different organizations and divvying up the data so that a breach of any one node reveals minimal information about users or usage. Although these ideas have been influential, they have found only niche uses, such as in the popular Tor browser.

Trust, but Don’t Identify

The decoupling principle can protect the privacy of data in motion, such as financial transactions and Web browsing patterns that currently are wide open to vendors, banks, websites, and Internet Service Providers (ISPs).


  1. Barath orders Bruce’s audiobook from Audible.
  2. His bank does not know what he is buying, but it guarantees the payment.
  3. A third party decrypts the order details but does not know who placed the order.
  4. Audible delivers the audiobook and receives the payment.

DECOUPLED E-COMMERCE: By inserting an independent verifier between the bank and the seller and by blinding the buyer’s identity from the verifier, the seller and the verifier cannot identify the buyer, and the bank cannot identify the product purchased. But all parties can trust that the signed payment is valid.


  1. Bruce’s browser sends a doubly encrypted request for the IP address of sigcomm.org.
  2. A third-party proxy server decrypts one layer and passes on the request, replacing Bruce’s identity with an anonymous ID.
  3. An Oblivious DNS server decrypts the request, looks up the IP address, and sends it back in an encrypted reply.
  4. The proxy server forwards the encrypted reply to Bruce’s browser.
  5. Bruce’s browser decrypts the response to obtain the IP address of sigcomm.org.

DECOUPLED WEB BROWSING: ISPs can track which websites their users visit because requests to the Domain Name System (DNS), which converts domain names to IP addresses, are unencrypted. A new protocol called Oblivious DNS can protect users’ browsing requests from third parties. Each name-resolution request is encrypted twice and then sent to an intermediary (a “proxy”) that strips out the user’s IP address and decrypts the outer layer before passing the request to a domain name server, which then decrypts the actual request. Neither the ISP nor any other computer along the way can see what name is being queried. The Oblivious resolver has the key needed to decrypt the request but no information about who placed it. The resolver encrypts its reply so that only the user can read it.
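
The layering trick at the heart of this is easy to demonstrate. Here is a toy Python illustration of the double encryption (my own sketch, using Fernet from the `cryptography` package as a stand-in cipher; the standardized variant, Oblivious DoH in RFC 9230, uses HPKE and proper key negotiation, none of which is shown here):

```python
# Toy Oblivious DNS layering: the proxy learns who is asking but not
# what; the resolver learns what is asked but not who is asking.
# Assumes keys were already shared out of band; a real deployment
# establishes them cryptographically.
from cryptography.fernet import Fernet

proxy_key = Fernet.generate_key()     # shared between client and proxy
resolver_key = Fernet.generate_key()  # shared between client and resolver

# Client: encrypt the query for the resolver, then wrap that for the proxy.
inner = Fernet(resolver_key).encrypt(b"A? sigcomm.org")
outer = Fernet(proxy_key).encrypt(inner)

# Proxy: peels one layer and forwards; it never sees the query itself.
forwarded = Fernet(proxy_key).decrypt(outer)

# Resolver: recovers the query, with no idea which client sent it.
assert Fernet(resolver_key).decrypt(forwarded) == b"A? sigcomm.org"
```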

Similar methods have been extended beyond DNS to multiparty-relay protocols that protect the privacy of all Web browsing through free services such as Tor and subscription services such as INVISV Relay and Apple’s iCloud Private Relay.

[…]

Meetings that were once held in a private conference room are now happening in the cloud, and third parties like Zoom see it all: who, what, when, where. There’s no reason a videoconferencing company has to learn such sensitive information about every organization it provides services to. But that’s the way it works today, and we’ve all become used to it.

There are multiple threats to the security of that Zoom call. A Zoom employee could go rogue and snoop on calls. Zoom could spy on calls of other companies or harvest and sell user data to data brokers. It could use your personal data to train its AI models. And even if Zoom and all its employees are completely trustworthy, the risk of Zoom getting breached is omnipresent. Whatever Zoom can do with your data in motion, a hacker can do to that same data in a breach. Decoupling data in motion could address those threats.

[…]

Most storage and database providers started encrypting data on disk years ago, but that’s not enough to ensure security. In most cases, the data is decrypted every time it is read from disk. A hacker or malicious insider silently snooping at the cloud provider could thus intercept your data despite it having been encrypted.

Cloud-storage companies have at various times harvested user data for AI training or to sell targeted ads. Some hoard it and offer paid access back to us or just sell it wholesale to data brokers. Even the best corporate stewards of our data are getting into the advertising game, and the decade-old feudal model of security—where a single company provides users with hardware, software, and a variety of local and cloud services—is breaking down.

Decoupling can help us retain the benefits of cloud storage while keeping our data secure. As with data in motion, the risks begin with access the provider has to raw data (or that hackers gain in a breach). End-to-end encryption, with the end user holding the keys, ensures that the cloud provider can’t independently decrypt data from disk.
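
What "the end user holding the keys" means in practice can be sketched in a few lines. Here is a simplified Python illustration (my own example, not any vendor’s scheme; the scrypt cost parameters are arbitrary): the key is derived from a passphrase on the device, and only ciphertext ever reaches the provider.

```python
# Client-side encryption sketch: the provider stores only a salt plus
# ciphertext, and cannot decrypt anything without the passphrase.
import base64, hashlib, os
from cryptography.fernet import Fernet

def key_from_passphrase(passphrase: str, salt: bytes) -> bytes:
    # scrypt is in the standard library; these cost parameters are illustrative.
    raw = hashlib.scrypt(passphrase.encode(), salt=salt,
                         n=2**14, r=8, p=1, dklen=32)
    return base64.urlsafe_b64encode(raw)

salt = os.urandom(16)  # safe to store next to the ciphertext
vault = Fernet(key_from_passphrase("correct horse battery staple", salt))

blob = vault.encrypt(b"contents of quarterly-report.xlsx")
# upload(salt, blob)  <- hypothetical call; the provider sees only random bytes

assert vault.decrypt(blob) == b"contents of quarterly-report.xlsx"
```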

[…]

Modern protocols for decoupled data storage, like Tim Berners-Lee’s Solid, provide this sort of security. Solid is a protocol for distributed personal data stores, called pods. By giving users control over both where their pod is located and who has access to the data within it—at a fine-grained level—Solid ensures that data is under user control even if the hosting provider or app developer goes rogue or has a breach. In this model, users and organizations can manage their own risk as they see fit, sharing only the data necessary for each particular use.

[…]

the last few years have seen the advent of general-purpose, hardware-enabled secure computation. This is powered by special functionality on processors known as trusted execution environments (TEEs) or secure enclaves. TEEs decouple who runs the chip (a cloud provider, such as Microsoft Azure) from who secures the chip (a processor vendor, such as Intel) and from who controls the data being used in the computation (the customer or user). A TEE can keep the cloud provider from seeing what is being computed. The results of a computation are sent via a secure tunnel out of the enclave or encrypted and stored. A TEE can also generate a signed attestation that it actually ran the code that the customer wanted to run.

With TEEs in the cloud, the final piece of the decoupling puzzle drops into place. An organization can keep and share its data securely at rest, move it securely in motion, and decrypt and analyze it in a TEE such that the cloud provider doesn’t have access. Once the computation is done, the results can be reencrypted and shipped off to storage. CPU-based TEEs are now widely available among cloud providers, and soon GPU-based TEEs—useful for AI applications—will be common as well.
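
To make that flow concrete, here is a deliberately toy simulation of the attest-then-compute loop. Nothing below is a real enclave API; actual TEEs (Intel SGX, AMD SEV-SNP) produce hardware-signed quotes verified against vendor certificate chains, for which the keyed hash here merely stands in:

```python
# Toy TEE flow: the customer first verifies *what code* the enclave is
# running, then shares a key with it, so the cloud host only ever
# handles ciphertext.
import hashlib, hmac
from cryptography.fernet import Fernet

VENDOR_KEY = b"stand-in for the chip vendor's signing key"
TRUSTED_CODE = hashlib.sha256(b"credit_score_v1 binary").digest()

def enclave_quote(measurement: bytes) -> bytes:
    # A real quote is an asymmetric signature made with fused hardware
    # keys; an HMAC keeps this toy self-contained.
    return hmac.new(VENDOR_KEY, measurement, hashlib.sha256).digest()

def customer_accepts(quote: bytes) -> bool:
    expected = hmac.new(VENDOR_KEY, TRUSTED_CODE, hashlib.sha256).digest()
    return hmac.compare_digest(quote, expected)

assert customer_accepts(enclave_quote(TRUSTED_CODE))

# Only after attestation does the customer establish a session key with
# the enclave itself, so results come back encrypted end to end.
session = Fernet(Fernet.generate_key())
encrypted_result = session.encrypt(b"score=712")  # re-encrypted inside the TEE
print(session.decrypt(encrypted_result))          # b'score=712'
```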

[…]

Decoupling also allows us to look at security more holistically. For example, we can dispense with the distinction between security and privacy. Historically, privacy meant freedom from observation, usually for an individual person. Security, on the other hand, was about keeping an organization’s data safe and preventing an adversary from doing bad things to its resources or infrastructure.

There are still rare instances where security and privacy differ, but organizations and individuals are now using the same cloud services and facing similar threats. Security and privacy have converged, and we can usefully think about them together as we apply decoupling.

[…]

Decoupling isn’t a panacea. There will always be new, clever side-channel attacks. And most decoupling solutions assume a degree of noncollusion between independent companies or organizations. But that noncollusion is already an implicit assumption today: we trust that Google and Advanced Micro Devices will not conspire to break the security of the TEEs they deploy, for example, because the reputational harm from being found out would hurt their businesses. The primary risk, real but also often overstated, is if a government secretly compels companies to introduce backdoors into their systems. In an age of international cloud services, this would be hard to conceal and would cause irreparable harm.

[…]

Imagine that individuals and organizations held their credit data in cloud-hosted repositories that enable fine-grained encryption and access control. Applying for a loan could then take advantage of all three modes of decoupling. First, the user could employ Solid or a similar technology to grant access to Equifax and a bank only for the specific loan application. Second, the communications to and from secure enclaves in the cloud could be decoupled and secured to conceal who is requesting the credit analysis and the identity of the loan applicant. Third, computations by a credit-analysis algorithm could run in a TEE. The user could use an external auditor to confirm that only that specific algorithm was run. The credit-scoring algorithm might be proprietary, and that’s fine: in this approach, Equifax doesn’t need to reveal it to the user, just as the user doesn’t need to give Equifax access to unencrypted data outside of a TEE.

Building this is easier said than done, of course. But it’s practical today, using widely available technologies. The barriers are more economic than technical.

[…]

One of the challenges of trying to regulate tech is that industry incumbents push for tech-only approaches that simply whitewash bad practices. For example, when Facebook rolls out “privacy-enhancing” advertising, but still collects every move you make, has control of all the data you put on its platform, and is embedded in nearly every website you visit, that privacy technology does little to protect you. We need to think beyond minor, superficial fixes.

Decoupling might seem strange at first, but it’s built on familiar ideas. Computing’s main tricks are abstraction and indirection. Abstraction involves hiding the messy details of something inside a nice clean package: when you use Gmail, you don’t have to think about the hundreds of thousands of Google servers that have stored or processed your data. Indirection involves creating a new intermediary between two existing things, such as when Uber wedged its app between passengers and drivers.

The cloud as we know it today is born of three decades of increasing abstraction and indirection. Communications, storage, and compute infrastructure for a typical company were once run on a server in a closet. Next, companies no longer had to maintain a server closet, but could rent a spot in a dedicated colocation facility. After that, colocation facilities decided to rent out their own servers to companies. Then, with virtualization software, companies could get the illusion of having a server while actually just running a virtual machine on a server they rented somewhere. Finally, with serverless computing and most types of software as a service, we no longer know or care where or how software runs in the cloud, just that it does what we need it to do.

[…]

We’re now at a turning point where we can add further abstraction and indirection to improve security, turning the tables on the cloud providers and taking back control as organizations and individuals while still benefiting from what they do.

The needed protocols and infrastructure exist, and there are services that can do all of this already, without sacrificing the performance, quality, and usability of conventional cloud services.

But we cannot just rely on industry to take care of this. Self-regulation is a time-honored stall tactic: a piecemeal or superficial tech-only approach would likely undermine the will of the public and regulators to take action. We need a belt-and-suspenders strategy, with government policy that mandates decoupling-based best practices, a tech sector that implements this architecture, and public awareness of both the need for and the benefits of this better way forward.

Source: Essays: Decoupling for Security – Schneier on Security

European digital identity: Council and Parliament reach a provisional agreement on eID

[…]

Under the new law, member states will offer citizens and businesses digital wallets that will be able to link their national digital identities with proof of other personal attributes (e.g., driving licence, diplomas, bank account). Citizens will be able to prove their identity and share electronic documents from their digital wallets with a click of a button on their mobile phone.

The new European digital identity wallets will enable all Europeans to access online services with their national digital identification, which will be recognised throughout Europe, without having to use private identification methods or unnecessarily sharing personal data. User control ensures that only information that needs to be shared will be shared.

Concluding the initial provisional agreement

Since the initial provisional agreement on some of the main elements of the legislative proposal at the end of June this year, a thorough series of technical meetings followed in order to complete a text that allowed the finalisation of the file in full. Some relevant aspects agreed by the co-legislators today are:

  • the e-signatures: the wallet will be free to use for natural persons by default, but member states may provide for measures to ensure that the free-of-charge use is limited to non-professional purposes
  • the wallet’s business model: the issuance, use and revocation will be free of charge for all natural persons
  • the validation of electronic attestation of attributes: member states shall provide free-of-charge validation mechanisms only to verify the authenticity and validity of the wallet and of the relying parties’ identity
  • the code for the wallets: the application software components will be open source, but member states are granted necessary leeway so that, for justified reasons, specific components other than those installed on user devices may not be disclosed
  • consistency between the wallet as an eID means and the underpinning scheme under which it is issued has been ensured

Finally, the revised law clarifies the scope of the qualified web authentication certificates (QWACs), which ensures that users can verify who is behind a website, while preserving the current well-established industry security rules and standards.

Next steps

Technical work will continue to complete the legal text in accordance with the provisional agreement. When finalised, the text will be submitted to the member states’ representatives (Coreper) for endorsement. Subject to a legal/linguistic review, the revised regulation will then need to be formally adopted by the Parliament and the Council before it can be published in the EU’s Official Journal and enter into force.

[…]

Source: European digital identity: Council and Parliament reach a provisional agreement on eID – Consilium

What does that ad-free vs ad-supported Facebook / Instagram warning mean, and why is it there?

In the EU, Meta has given you a warning saying that you need to choose between an expensive ad-free version or continuing with targeted adverts. Strangely, considering Meta makes its profits by selling your information, you don’t get the option to be paid a cut of the profits they gain by selling your information. Even more strangely, not many people are covering it. Below is a pretty good writeup of the situation, but what is not clear is whether, by agreeing to the free version, things continue as they are, or whether you are signing up for additional invasions of your privacy, such as having your information sent to servers in the USA.

Even though it’s a seriously and strangely underreported phenomenon, people are leaving Meta for fear (justly or unjustly) of further intrusions into their privacy by the slurping behemoth.

Why is Meta launching an ad-free plan for Instagram and Facebook?

After receiving major backlash from the European Union in January 2023, resulting in a €377 million fine for the tech giant, Meta has since adapted its applications to suit EU regulations. These major adaptations have all led to the recent launch of its ad-free subscription service.

This most recent announcement comes to keep in line with the European Union’s Digital Markets Act legislation. The legislation requires companies to give users the option to consent before being tracked for advertising purposes, something Meta previously wasn’t doing.

As a way of complying with this rule while also sustaining its ad-supported business model, Meta is now releasing an ad-free subscription service for users who don’t want targeted ads showing up on their Instagram and Facebook feeds while also putting some more cash in the company’s pocket.

How much will the ad-free plan cost on Instagram and Facebook?

The price depends on where you purchase the subscription. If you purchase the ad-free plan from Meta for your desktop, then the plan will cost €9.99/month. If you purchase on your Android or iOS device, the plan will cost €12.99/month. Presumably, this is because Apple and Google charge fees, and Meta is passing those fees along to the user instead of taking a hit on its profit.

If I buy the plan on desktop, will the subscription carry over to my phone?

Yes! It’s confusing at first, but no matter where you sign up for your subscription, it will automatically link to all your Meta accounts, allowing you to view ad-free content on every device. Essentially, if you have access to a desktop and are interested in signing up for the ad-free plan, you’re better off signing up there, as you’ll save some money.

When will the ad-free plan be available to Instagram and Facebook users?

The subscription will be available for users in November 2023. Meta didn’t announce a specific date.

“In November, we will be offering people who use Facebook or Instagram and reside in these regions the choice to continue using these personalised services for free with ads, or subscribe to stop seeing ads.”

Can I still use Instagram and Facebook without subscribing to Meta’s ad-free plan?

Meta’s statement said that it believes “in an ad-supported internet, which gives people access to personalized products and services regardless of their economic status.” Staying true to its beliefs, Meta will still allow users to use its services for free with ads.

However, it’s important to note that Meta mentioned in its statement, “Beginning March 1, 2024, an additional fee of €6/month on the web and €8/month on iOS and Android will apply for each additional account listed in a user’s Account Center.” So, for now, the subscription will cover accounts on all platforms, but the cost will rise in the future for users with more than one account.

Which countries will get the new ad-free subscription option?

The below countries can access Meta’s new subscription:

Austria, Belgium, Bulgaria, Croatia, Republic of Cyprus, Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Iceland, Ireland, Italy, Latvia, Liechtenstein, Lithuania, Luxembourg, Malta, Norway, Netherlands, Poland, Portugal, Romania, Slovakia, Slovenia, Spain, Switzerland and Sweden.

Will Meta launch this ad-free plan outside the EU and Switzerland?

It’s unknown at the moment whether Meta plans to expand this service into any other regions. Currently, the only regions able to subscribe to an ad-free plan are those listed above, but if it’s successful in those countries, it’s possible that Meta could roll it out in other regions.

What’s the difference between Meta Verified and this ad-free plan?

Launched in early 2023, Meta Verified allows Facebook and Instagram users to pay for a blue tick mark next to their name. Yes, the same tick mark most celebrities with major followings typically have. This subscription service was launched as a way for users to protect their accounts and promote their businesses. Meta Verified costs $14.99/month (€14/month). It gives users the blue tick mark and provides extra account support and protection from impersonators.

While Meta Verified offers several unique account privacy features for users, it doesn’t offer an ad-free subscription. Currently, those subscribed to Meta Verified must also pay for an ad-free account if they live in one of the supported countries.

How can I sign up for Meta’s ad-free plan for Instagram and Facebook?

Users can sign up for the ad-free subscription via their Facebook or Instagram accounts. Here’s what you need to sign up:

  1. Go to account settings on Facebook or Instagram.
  2. Click subscribe on the ad-free plan under the subscriptions tab (once it’s available).

If I choose not to subscribe, will I receive more ads than I do now?

Meta says that nothing will change about your current account if you choose to keep your account as is, meaning you don’t subscribe to the ad-free plan. In other words, you’ll see exactly the same amount of ads you’ve always seen.

How will this affect other social media platforms?

Paid subscriptions seem to be the trend among many social media platforms in the past couple of years. Snapchat hopped onto the trend early, in the summer of 2022, when it released Snapchat+, which allows premium users to pay $4/month to see where they rank on their friends’ best friends list, boost their stories, pin friends as their top best friends, and further customize their settings.

More notably, Twitter – famously bought by Elon Musk, who has since rebranded the platform to “X” – released three different tiers of subscriptions meant to improve a user’s experience: Basic, Premium, and Premium+. The latest of these, Premium+, allows users to pay $16/month for an ad-free experience and the ability to edit or undo their posts.

Other major apps, such as TikTok, have yet to announce any ad-free subscription plans, although it wouldn’t be shocking if they followed suit.

For Meta’s part, it claims to want its websites to remain a free ad-based revenue domain, but we’ll see how long that lasts, especially if its first two subscription offerings succeed.

This is the spin Facebook itself gives on the story: Facebook and Instagram to Offer Subscription for No Ads in Europe

What else is noteworthy is that this comes as YouTube is installing spyware onto your computer to figure out if you are running an adblocker – also something not receiving enough attention.

See also: Privacy advocate challenges YouTube’s ad blocking detection (which isn’t spyware)

and YouTube cares less for your privacy than its revenues

Time to switch to alternatives!