General Motors Quits Sharing Driving Behavior With Data Brokers – Now sells it directly to insurance companies?

General Motors said Friday that it had stopped sharing details about how people drove its cars with two data brokers that created risk profiles for the insurance industry.

The decision followed a New York Times report this month that G.M. had, for years, been sharing data about drivers’ mileage, braking, acceleration and speed with the insurance industry. The drivers were enrolled — some unknowingly, they said — in OnStar Smart Driver, a feature in G.M.’s internet-connected cars that collected data about how the car had been driven and promised feedback and digital badges for good driving.

Some drivers said their insurance rates had increased as a result of the captured data, which G.M. shared with two brokers, LexisNexis Risk Solutions and Verisk. The firms then sold the data to insurance companies.

Since Wednesday, “OnStar Smart Driver customer data is no longer being shared with LexisNexis or Verisk,” a G.M. spokeswoman, Malorie Lucich, said in an emailed statement. “Customer trust is a priority for us, and we are actively evaluating our privacy processes and policies.”

Romeo Chicco, a Florida man whose insurance rates nearly doubled after his Cadillac collected his driving data, filed a complaint seeking class-action status against G.M., OnStar and LexisNexis this month.

An internal document, reviewed by The Times, showed that as of 2022, more than eight million vehicles were included in Smart Driver. An employee familiar with the program said the company’s annual revenue from Smart Driver was in the low millions of dollars.

Source: General Motors Quits Sharing Driving Behavior With Data Brokers – The New York Times

No mention of who it is now selling the data to.

VPN Demand Surges 234.8% After Adult Site Restriction on Texas-Based Users

VPN demand in Texas skyrocketed by 234.8% on March 15, 2024, after state authorities enacted a law requiring adult sites to verify users’ ages before granting them access to the websites’ content.

Texas’ age verification law was passed in June 2023 and was set to take effect that September. A day before it took effect, however, a US district judge temporarily blocked enforcement after the Free Speech Coalition (FSC) filed a lawsuit arguing the policy was unconstitutional under the First Amendment.

On March 14, 2024, the US Court of Appeals for the 5th Circuit decreed that Texas could proceed with the law’s enactment.

In protest, Pornhub, the most visited adult site in the US, blocked IP addresses from Texas — making it the eighth state to be cut off after enacting similar restrictions on adult sites.

[…]

Following the law’s enactment, users in Texas seem to be scrambling for means to access the affected adult sites. vpnMentor’s research team analyzed user demand data and found a 234.8% increase in VPN demand in the state.

(The source article includes a graph of VPN demand in Texas from March 1 to March 16.)

Past VPN Demand Surges from Adult Site Restrictions

Pornhub has previously blocked IP addresses from Louisiana, Mississippi, Arkansas, Utah, Virginia, North Carolina, and Montana — all of which have enforced age-verification laws that the adult site deemed unjust.

In May 2023, Pornhub’s banning of Utah-based users caused a 967% spike in VPN demand in the state. That same year, the passing of adult-site-related age restriction laws in Louisiana and Mississippi led to a 200% and 72% surge in VPN interest, respectively.

Source: VPN Demand Surges Post Adult Site Restriction on Texas-Based Users

Pornhub disables website in Texas after AG sues for not verifying users’ ages

Pornhub has disabled its site in Texas to object to a state law that requires the company to verify the age of users to prevent minors from accessing the site.

Texas residents who visit the site are met with a message from the company that criticizes the state’s elected officials who are requiring them to track the age of users.

The company said the newly passed law impinges on “the rights of adults to access protected speech” and fails to pass strict scrutiny by “employing the least effective and yet also most restrictive means of accomplishing Texas’s stated purpose of allegedly protecting minors.”

Pornhub said safety and compliance are “at the forefront” of the company’s mission, but having users provide identification every time they want to access the site is “not an effective solution for protecting users online.” The adult content website argues the restrictions instead will put minors and users’ privacy at risk.

[…]

The announcement from Pornhub follows the news that Texas Attorney General Ken Paxton (R) was suing Aylo, the pornography giant that owns Pornhub, for not following the newly enacted age verification law.

Paxton’s lawsuit seeks up to $1.6 million in penalties from Aylo, covering the period from mid-September of last year to the date of the filing, plus an additional $10,000 for each day since the filing.

[…]

Paxton released a statement on March 8 calling the ruling an “important victory.” The court ruled that the age verification requirement does not violate the First Amendment, he said, framing it as a win in his fight against Pornhub and other pornography companies.

The state Legislature passed the age verification law last year, requiring companies that distribute sexual material that could be harmful to minors to confirm that users of the platform are older than 18. The law asks users to provide government-issued identification or public or private data to verify they are of age to access the site.

 

Source: Pornhub disables website in Texas after AG sues for not verifying users’ ages | The Hill

Age verification is not only easily bypassed, it is also extremely sensitive due to the nature of the documents you need to upload to the verification agency. Big centralised databases get hacked all the time, and this one would be a massive target. It would also leave the people in it open to blackmail, as they would be linked to a porn site – which for some reason Americans find problematic.

Automakers Are Sharing Consumers’ Driving Behavior With Insurance Companies


Kenn Dahl says he has always been a careful driver. The owner of a software company near Seattle, he drives a leased Chevrolet Bolt. He’s never been responsible for an accident. So Mr. Dahl, 65, was surprised in 2022 when the cost of his car insurance jumped by 21 percent. Quotes from other insurance companies were also high. One insurance agent told him his LexisNexis report was a factor.

LexisNexis is a New York-based global data broker with a “Risk Solutions” division that caters to the auto insurance industry and has traditionally kept tabs on car accidents and tickets. Upon Mr. Dahl’s request, LexisNexis sent him a 258-page “consumer disclosure report,” which it must provide per the Fair Credit Reporting Act.

What it contained stunned him: more than 130 pages detailing each time he or his wife had driven the Bolt over the previous six months. It included the dates of 640 trips, their start and end times, the distance driven and an accounting of any speeding, hard braking or sharp accelerations. The only thing it didn’t have was where they had driven the car. On a Thursday morning in June, for example, the car had been driven 7.33 miles in 18 minutes; there had been two rapid accelerations and two incidents of hard braking.

According to the report, the trip details had been provided by General Motors — the manufacturer of the Chevy Bolt. LexisNexis analyzed that driving data to create a risk score “for insurers to use as one factor of many to create more personalized insurance coverage,” according to a LexisNexis spokesman, Dean Carney. Eight insurance companies had requested information about Mr. Dahl from LexisNexis over the previous month.

“It felt like a betrayal,” Mr. Dahl said. “They’re taking information that I didn’t realize was going to be shared and screwing with our insurance.”

In recent years, insurance companies have offered incentives to people who install dongles in their cars or download smartphone apps that monitor their driving, including how much they drive, how fast they take corners, how hard they hit the brakes and whether they speed. But “drivers are historically reluctant to participate in these programs,” as Ford Motor put it in a patent application (PDF) that describes what is happening instead: Car companies are collecting information directly from internet-connected vehicles for use by the insurance industry.

Sometimes this is happening with a driver’s awareness and consent. Car companies have established relationships with insurance companies, so that if drivers want to sign up for what’s called usage-based insurance — where rates are set based on monitoring of their driving habits — it’s easy to collect that data wirelessly from their cars.

But in other instances, something much sneakier has happened. Modern cars are internet-enabled, allowing access to services like navigation, roadside assistance and car apps that drivers can connect to their vehicles to locate them or unlock them remotely. In recent years, automakers, including G.M., Honda, Kia and Hyundai, have started offering optional features in their connected-car apps that rate people’s driving. Some drivers may not realize that, if they turn on these features, the car companies then give information about how they drive to data brokers like LexisNexis.

Automakers and data brokers that have partnered to collect detailed driving data from millions of Americans say they have drivers’ permission to do so. But the existence of these partnerships is nearly invisible to drivers, whose consent is obtained in fine print and murky privacy policies that few read. Especially troubling is that some drivers with vehicles made by G.M. say they were tracked even when they did not turn on the feature — called OnStar Smart Driver — and that their insurance rates went up as a result.
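The kind of trip log in Mr. Dahl’s report can be reduced to a single “risk score” with very little code. Here is a purely hypothetical sketch: the field names, weights and scoring formula are invented for illustration, since LexisNexis and Verisk do not publish their actual models.

```python
# Hypothetical sketch: reducing trip telemetry like the entries in the
# LexisNexis report (distance, hard braking, rapid acceleration) to one
# driving "risk score". All weights and names are invented.
from dataclasses import dataclass

@dataclass
class Trip:
    miles: float
    hard_brakes: int
    rapid_accels: int
    speeding_events: int

def risk_score(trips: list[Trip]) -> float:
    """Return a 0-100 score; higher means riskier driving."""
    total_miles = sum(t.miles for t in trips)
    if total_miles == 0:
        return 0.0
    # Weighted count of flagged events, normalized per 100 miles driven.
    events = sum(
        2.0 * t.hard_brakes + 1.5 * t.rapid_accels + 3.0 * t.speeding_events
        for t in trips
    )
    per_100_miles = events / total_miles * 100
    return round(min(100.0, per_100_miles * 10), 1)

# The June trip from the report: 7.33 miles, two rapid accelerations,
# two incidents of hard braking, no recorded speeding.
trips = [Trip(miles=7.33, hard_brakes=2, rapid_accels=2, speeding_events=0)]
print(risk_score(trips))
```

The point of the sketch is not the formula but the asymmetry: the driver sees none of this math, yet eight insurers can query the resulting number.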

European Commission broke data protection law with Microsoft Office 365 – duh

The European Commission has been reprimanded for infringing data protection regulations when using Microsoft 365.

The rebuke came from the European Data Protection Supervisor (EDPS) and is the culmination of an investigation that kicked off in May 2021, following the Schrems II judgement.

According to the EDPS, the EC infringed several data protection regulations, including rules around transferring personal data outside the EU / European Economic Area (EEA).

According to the organization, “In particular, the Commission has failed to provide appropriate safeguards to ensure that personal data transferred outside the EU/EEA are afforded an essentially equivalent level of protection as guaranteed in the EU/EEA.

“Furthermore, in its contract with Microsoft, the Commission did not sufficiently specify what types of personal data are to be collected and for which explicit and specified purposes when using Microsoft 365.”

While the concerns are more about EU institutions and transparency, they should also serve as notice to any company doing business in the EU / EEA to take a very close look at how it has configured Microsoft 365 regarding the EU Data Protection Regulations.

[…]

Source: European Commission broke data protection law with Microsoft • The Register

Who knew? An American company running an American cloud product on American servers, and the EU was putting its data on it. Who would have thought that might end up in America?!

Biden executive order aims to stop a few countries from buying Americans’ personal data – a watered down EU GDPR

[…]

President Joe Biden will issue an executive order that aims to limit the mass-sale of Americans’ personal data to “countries of concern,” including Russia and China. The order specifically targets the bulk sale of geolocation, genomic, financial, biometric, health and other personally identifying information.

During a briefing with reporters, a senior administration official said that the sale of such data to these countries poses a national security risk. “Our current policies and laws leave open access to vast amounts of American sensitive personal data,” the official said. “Buying data through data brokers is currently legal in the United States, and that reflects a gap in our national security toolkit that we are working to fill with this program.”

Researchers and privacy advocates have long warned about the national security risks posed by the largely unregulated multibillion-dollar data broker industry. Last fall, researchers at Duke University reported that they were able to easily buy troves of personal and health data about US military personnel while posing as foreign agents.

Biden’s executive order attempts to address such scenarios. It bars data brokers and other companies from selling large troves of Americans’ personal information to countries or entities in Russia, China, Iran, North Korea, Cuba and Venezuela either directly or indirectly.

[…]

As the White House points out, there are currently few regulations for the multibillion-dollar data broker industry. The order will do nothing to slow the bulk sale of Americans’ data to countries or companies not deemed to be a security risk. “President Biden continues to urge Congress to do its part and pass comprehensive bipartisan privacy legislation, especially to protect the safety of our children,” a White House statement says.

Source: Biden executive order aims to stop Russia and China from buying Americans’ personal data

Too little, not enough, way way way too late.

Investigators seek push notification metadata in 130 cases – this is scarier than you think

More than 130 petitions seeking access to push notification metadata have been filed in US courts, according to a Washington Post investigation – a finding that underscores the lack of privacy protection available to users of mobile devices.

The poor state of mobile device privacy has provided US state and federal investigators with valuable information in criminal investigations involving suspected terrorism, child sexual abuse, drugs, and fraud – even when suspects have tried to hide their communications using encrypted messaging.

But it also means that prosecutors in states that outlaw abortion could demand such information to geolocate women at reproductive healthcare facilities. Foreign governments may also demand push notification metadata from Apple, Google, third-party push services, or app developers for their own criminal investigations or political persecutions. Concern has already surfaced that they may have done so for several years.

In December 2023, US senator Ron Wyden (D-OR) sent a letter to the Justice Department about a tip received by his office in 2022 indicating that foreign government agencies were demanding smartphone push notification records from Google and Apple.

[…]

Apple and Google operate push notification services that relay communication from third-party servers to specific applications on iOS and Android phones. App developers can encrypt these messages when they’re stored (in transit they’re protected by TLS) but the associated metadata – the app receiving the notification, the time stamp, and network details – is not encrypted.
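To see why the metadata leaks even when developers encrypt the payload, consider this illustrative sketch. The field names and the toy XOR “encryption” are invented for demonstration only; real systems use APNs/FCM-specific formats and proper cryptography. The relay must see routing metadata to deliver the message, so that part can never be opaque to it.

```python
# Illustrative sketch: an app can encrypt the push payload end to end,
# but the relay (the Apple/Google push service) still needs routing
# metadata in the clear to deliver it. XOR is a toy stand-in for real
# encryption, purely for illustration.
import json
import time

def app_encrypt(plaintext: str, key: int = 0x5A) -> str:
    # Toy stand-in for real end-to-end encryption of the body.
    return bytes(b ^ key for b in plaintext.encode()).hex()

def build_push(device_token: str, app_id: str, message: str) -> dict:
    return {
        # Visible to the push service (and thus subpoenable): which app,
        # which device, and when.
        "device_token": device_token,
        "app_id": app_id,
        "timestamp": int(time.time()),
        # Opaque to the push service: only the app can decrypt this.
        "payload": app_encrypt(message),
    }

push = build_push("a1b2c3", "com.example.messenger", "meet at 6pm")
# The relay never sees the plaintext...
assert "meet at 6pm" not in json.dumps(push)
# ...but the routing metadata is in the clear.
print(push["app_id"], push["device_token"])
```

This is exactly the data investigators are requesting: not the message body, but who received a notification from which app, and when.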

[…]

Push notification metadata is extremely valuable to marketing organizations, to app distributors like Apple and Google, and also to government organizations and law enforcement agencies.

“In 2022, one of the largest push notification companies in the world, Pushwoosh, was found to secretly be a Russian company that deceived both the CDC and US Army into installing their technology into specific government apps,” said Edwards.

“These types of scandals are the tip of the iceberg for how push notifications can be abused, and why countless serious organizations focus on them as a source of intelligence,” he explained.

“If you sign up for push notifications, and travel around to unique locations, as the messages hit your device, specific details about your device, IP address, and location are shared with app stores like Apple and Google,” Edwards added. “And the push notification companies who support these services typically have additional details about users, including email addresses and user IDs.”

Edwards continued that other identifiers may further deprive people of privacy, noting that advertising identifiers can be connected to push notification identifiers. He pointed to Pushwoosh as an example of a firm that built its push notification ID using the iOS advertising ID.

“The simplest way to think about push notifications,” he said, is “they are just like little pre-scheduled messages from marketing vendors, sent via mobile apps. The data that is required to ‘turn on any push notification service’ is quite invasive and can unexpectedly reveal/track your location/store your movement with a third-party marketing company or one of the app stores, which is merely a court order or subpoena away from potentially exposing those personal details.”

Source: Investigators seek push notification metadata in 130 cases • The Register

Also see: Governments, Apple, Google spying on users through push notifications – they all go through Apple and Google servers (unencrypted?)!

Scammers Are Now Scanning Faces To Defeat Age Verification Biometric Security Measures

For quite some time now we’ve been pointing out the many harms of age verification technologies, and how they’re a disaster for privacy. In particular, we’ve noted that if you have someone collecting biometric information on people, that data itself becomes a massive risk since it will be targeted.

And, remember, a year and a half ago, the Age Verification Providers Association posted a comment right here on Techdirt saying not to worry about the privacy risks, as all they wanted to do was scan everyone’s face to visit a website (perhaps making you turn to the left or right to prove “liveness”).

Anyway, now a report has come out that some Chinese hackers have been tricking people into having their faces scanned, so that the hackers can then use the resulting scan to access accounts.

Attesting to this, cybersecurity company Group-IB has discovered the first banking trojan that steals people’s faces. Unsuspecting users are tricked into giving up personal IDs and phone numbers and are prompted to perform face scans. These images are then swapped out with AI-generated deepfakes that can easily bypass security checkpoints.

The method — developed by a China-based hacking family — is believed to have been used in Vietnam earlier this month, when attackers lured a victim into a malicious app, tricked them into face scanning, then withdrew the equivalent of $40,000 from their bank account.

Cool cool, nothing could possibly go wrong in now requiring more and more people to normalize the idea of scanning your face to access a website. Nothing at all.

And no, this isn’t about age verification, but still, the normalization of facial scanning is a problem, as it’s such an obvious target for scammers and hackers.

Source: As Predicted: Scammers Are Now Scanning Faces To Defeat Biometric Security Measures | Techdirt

Meta will start collecting much more “anonymized” data about Quest headset usage

Meta will soon begin “collecting anonymized data” from users of its Quest headsets, a move that could see the company aggregating information about hand, body, and eye tracking; camera information; “information about your physical environment”; and information about “the virtual reality events you attend.”

In an email sent to Quest users Monday, Meta notes that it currently collects “the data required for your Meta Quest to work properly.” Starting with the next software update, though, the company will begin collecting and aggregating “anonymized data about… device usage” from Quest users. That anonymized data will be used “for things like building better experiences and improving Meta Quest products for everyone,” the company writes.

A linked help page on data sharing clarifies that Meta can collect anonymized versions of any of the usage data included in the “Supplemental Meta Platforms Technologies Privacy Policy,” which was last updated in October. That document lists a host of personal information that Meta can collect from your headset, including:

  • “Your audio data, when your microphone preferences are enabled, to animate your avatar’s lip and face movement”
  • “Certain data” about hand, body, and eye tracking, “such as tracking quality and the amount of time it takes to detect your hands and body”
  • Fitness-related information such as the “number of calories you burned, how long you’ve been physically active, [and] your fitness goals and achievements”
  • “Information about your physical environment and its dimensions” such as “the size of walls, surfaces, and objects in your room and the distances between them and your headset”
  • “Voice interactions” used when making audio commands or dictations, including audio recordings and transcripts that might include “any background sound that happens when you use those services” (these recordings and transcriptions are deleted “immediately” in most cases, Meta writes)
  • Information about “your activity in virtual reality,” including “the virtual reality events you attend”

The anonymized data is used in part to “analyz[e] device performance and reliability” to “improve the hardware and software that powers your experiences with Meta VR Products.”


Meta’s help page also lists a small subset of “additional data” that headset users can opt out of sharing with Meta. But there’s no indication that Quest users can opt out of the new anonymized data collection policies entirely.

These policies only seem to apply to users who make use of a Meta account to access their Quest headsets, and those users are also subject to Meta’s wider data-collection policies. Those who use a legacy Oculus account are subject to a separate privacy policy that describes a similar but more limited set of data-collection practices.

Not a new concern

Meta is clear that the data it collects “is anonymized so it does not identify you.” But here at Ars, we’ve long covered situations where data that was supposed to be “anonymous” was linked back to personally identifiable information about the people who generated it. The FTC is currently pursuing a case against Kochava, a data broker that links de-anonymized geolocation data to a “staggering amount of sensitive and identifying information,” according to the regulator.
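A toy example of how such linkage re-identification works: join an “anonymized” trace against one known fact about a person, and every other record under the same pseudonym becomes attributable to them. All records below are fabricated for illustration.

```python
# Minimal sketch of linkage re-identification: an "anonymized" trace
# joined against a small amount of outside knowledge is often enough to
# name the person. All records here are fabricated.

# Broker data: no names, just a random-looking ID plus time and place.
anon_pings = [
    {"id": "u-7f3a", "day": "2024-03-04", "place": "clinic_parking"},
    {"id": "u-7f3a", "day": "2024-03-04", "place": "office_tower"},
    {"id": "u-2c91", "day": "2024-03-04", "place": "stadium"},
]

# Outside knowledge: one fact an observer already has about a target.
known = {"name": "Alice", "day": "2024-03-04", "place": "clinic_parking"}

# Link on the quasi-identifiers (day, place) to recover the pseudonym;
# then every other ping under that ID is attributed to the target.
matched_ids = {
    p["id"] for p in anon_pings
    if (p["day"], p["place"]) == (known["day"], known["place"])
}
trace = [p for p in anon_pings if p["id"] in matched_ids]
print(known["name"], "->", matched_ids, "->", [p["place"] for p in trace])
```

Removing the name column removes nothing if the remaining columns are distinctive enough to serve as a fingerprint — which, for movement and usage data, they usually are.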

Concerns about VR headset data collection date back to when Meta’s virtual reality division was still named Oculus. Shortly after the launch of the Oculus Rift in 2016, Senator Al Franken (D-Minn.) sent an open letter to the company seeking information on “the extent to which Oculus may be collecting Americans’ personal information, including sensitive location data, and sharing that information with third parties.”

In 2020, the company then called Facebook faced controversy for requiring Oculus users to migrate to a Facebook account to continue using their headsets. That led to a temporary pause of Oculus headset sales in Germany before Meta finally offered the option to decouple its VR accounts from its social media accounts in 2022.

Source: Meta will start collecting “anonymized” data about Quest headset usage | Ars Technica

Canadian college M&M vending machines secretly scanning faces – revealed by error message

[…]

The scandal started when a student using the alias SquidKid47 posted an image on Reddit showing a campus vending machine error message, “Invenda.Vending.FacialRecognitionApp.exe,” displayed after the machine failed to launch a facial recognition application that nobody expected to be part of the process of using a vending machine.

Reddit post shows error message displayed on a University of Waterloo vending machine (cropped and lightly edited for clarity).

“Hey, so why do the stupid M&M machines have facial recognition?” SquidKid47 pondered.

The Reddit post sparked an investigation from a fourth-year student named River Stanley, who was writing for a university publication called MathNEWS.

Stanley sounded the alarm after consulting Invenda sales brochures that promised “the machines are capable of sending estimated ages and genders” of every person who used the machines, without ever requesting consent.

This frustrated Stanley, who discovered that Canada’s privacy commissioner had years ago investigated a shopping mall operator called Cadillac Fairview after discovering some of the malls’ informational kiosks were secretly “using facial recognition software on unsuspecting patrons.”

Only because of that official investigation did Canadians learn that “over 5 million nonconsenting Canadians” were scanned into Cadillac Fairview’s database, Stanley reported. Where Cadillac Fairview was ultimately forced to delete the entire database, Stanley wrote that consequences for collecting similarly sensitive facial recognition data without consent for Invenda clients like Mars remain unclear.

Stanley’s report ended with a call for students to demand that the university “bar facial recognition vending machines from campus.”

A University of Waterloo spokesperson, Rebecca Elming, eventually responded, confirming to CTV News that the school had asked to disable the vending machine software until the machines could be removed.

[…]

Source: Vending machine error reveals secret face image database of college students | Ars Technica

European human rights court says backdooring encrypted comms is against human rights


The European Court of Human Rights (ECHR) has ruled that laws requiring crippled encryption and extensive data retention violate the European Convention on Human Rights – a decision that may derail European data surveillance legislation known as Chat Control.

The Court issued a decision on Tuesday stating that “the contested legislation providing for the retention of all internet communications of all users, the security services’ direct access to the data stored without adequate safeguards against abuse and the requirement to decrypt encrypted communications, as applied to end-to-end encrypted communications, cannot be regarded as necessary in a democratic society.”

The “contested legislation” mentioned above refers to a legal challenge that started in 2017 after a demand from Russia’s Federal Security Service (FSB) that messaging service Telegram provide technical information to assist the decryption of a user’s communication. The plaintiff, Anton Valeryevich Podchasov, challenged the order in Russia but his claim was dismissed.

In 2019, Podchasov brought the matter to the ECHR. Russia joined the Council of Europe – an international human rights organization – in 1996 and was a member until it withdrew in March 2022 following its illegal invasion of Ukraine. Because the 2019 case predates Russia’s withdrawal, the ECHR continued to consider the matter.

The Court concluded that the Russian law requiring Telegram “to decrypt end-to-end encrypted communications risks amounting to a requirement that providers of such services weaken the encryption mechanism for all users.” As such, the Court considers that requirement disproportionate to legitimate law enforcement goals.

While the ECHR decision is unlikely to have any effect within Russia, it matters to countries in Europe that are contemplating similar decryption laws – such as Chat Control and the UK government’s Online Safety Act.

Chat Control is shorthand for European data surveillance legislation that would require internet service providers to scan digital communications for illegal content – specifically child sexual abuse material and potentially terrorism-related information. Doing so would necessarily entail weakening the encryption that keeps communication private.
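The conflict can be shown in a few lines: to match messages against a blocklist, the client has to inspect the plaintext before encryption, so the scanner sits inside the supposedly private channel. This is a purely illustrative sketch using exact hash matching; real proposals involve perceptual hashing and classifiers, but the structural problem is the same.

```python
# Sketch of why "chat control" style client-side scanning conflicts
# with end-to-end encryption: matching against a blocklist requires
# inspecting the plaintext *before* it is encrypted. Hashes and
# messages are invented for illustration.
import hashlib

BLOCKLIST = {hashlib.sha256(b"known-illegal-sample").hexdigest()}

def client_side_scan(plaintext: bytes) -> bool:
    """Return True if the message matches the blocklist (and would be
    reported) before it is ever encrypted and sent."""
    return hashlib.sha256(plaintext).hexdigest() in BLOCKLIST

msg = b"hello, this is private"
flagged = client_side_scan(msg)  # the scanner sees the plaintext here
# ... only after scanning would the client encrypt and send the message.
print(flagged)
```

Whoever controls `BLOCKLIST` controls what gets reported, and the list can be extended without the user ever seeing its contents — which is exactly the disproportionality the ECHR ruling points at.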

Efforts to develop workable rules have been underway for several years and continue to this day, despite widespread condemnation from academics, privacy-oriented orgs, and civil society groups.

Patrick Breyer, a member of the European parliament for the Pirate Party, hailed the ruling for demonstrating that Chat Control is incompatible with EU law.

“With this outstanding landmark judgment, the ‘client-side scanning’ surveillance on all smartphones proposed by the EU Commission in its chat control bill is clearly illegal,” said Breyer.

“It would destroy the protection of everyone instead of investigating suspects. EU governments will now have no choice but to remove the destruction of secure encryption from their position on this proposal – as well as the indiscriminate surveillance of private communications of the entire population!”

Source: European human rights court says no to weakened encryption • The Register

23andMe Thinks ‘Mining’ Your DNA Data Is Its Last Hope

23andMe is in a death spiral. Almost everyone who wants a DNA test already bought one, a nightmare data breach ruined the company’s reputation, and 23andMe’s stock is so close to worthless it might get kicked off the Nasdaq. CEO Anne Wojcicki is on a crisis tour, promising investors the company isn’t going out of business because she has a new plan: 23andMe is going to double down on mining your DNA data and selling it to pharmaceutical companies.

“We now have the ability to mine the dataset for ourselves, as well as to partner with other groups,” Wojcicki said in an interview with Wired. “It’s a real resource that we could apply to a number of different organizations for their own drug discovery.”

That’s been part of the plan since day one, but now it looks like it’s going to happen on a much larger scale. 23andMe has always coerced its customers into giving the company consent to share their DNA for “research,” a friendlier way of saying “giving it to pharmaceutical companies.” The company enjoyed an exclusive partnership with pharmaceutical giant GlaxoSmithKline, but apparently the drug maker already sucked the value out of your DNA, and that deal is running out. Now, 23andMe is looking for new companies who want to take a look at your genes.

[…]

The most exciting opportunity for “improvements” is that 23andMe and the pharmaceutical industry get to develop new drugs. There’s a tinge of irony here. Any discoveries that 23andMe makes come from studying DNA samples that you paid the company to collect.

[…]

The problem with 23andMe’s consumer-facing business is the company sells a product you only need once in a lifetime. Worse, the appeal of a DNA test for most people is the novelty of ancestry results, but if your brother already paid for a test, you already know the answers.

[…]

It’s spent years trying to brand itself as a healthcare service, and not just a $79 permission slip to tell people you’re Irish. In fact, the company thinks you should buy yourself a recurring annual subscription to something called 23andMe+ Total Health. It only costs $1,188 a year.

[…]

The secret is you just can’t learn a ton about your health from genetic screenings, aside from tests for specific diseases that doctors rarely order unless you have a family history.

[…]

What do you get with these subscriptions? It’s kind of vague. Depending on the package, they include a service that “helps you understand how genetics and lifestyle can impact your likelihood of developing certain conditions,” testing for rare genetic conditions, enhanced ancestry features, and more. Essentially, they’ll run genetic tests that you may not need. Then, they may or may not recommend that you talk to a doctor, because they can’t offer you actual medical care.

You could also skip the middleman and start with a normal conversation with your doctor, who will order genetic tests if you need them and bill your insurance company

[…]

If 23andMe survives, the first step is going to be deals that give more companies access to your genetics than ever before. But if 23andMe goes out of business, it’ll get purchased or sold off for parts, which means other companies will get a look at your data anyway.

Source: 23andMe Admits ‘Mining’ Your DNA Data Is Its Last Hope

What this piece misses is the danger of who the data is sold to – or of it being leaked (which it was). Insurance companies may refuse to insure you. Your DNA may be faked. Your unique and unchangeable identity, and that of your family, has been stolen.

The EU wants to criminalize AI-generated deepfakes and the non-consensual sending of intimate images

[…] the European Council and Parliament have agreed with the proposal to criminalize, among other things, different types of cyber-violence. The proposed rules will criminalize the non-consensual sharing of intimate images, including deepfakes made by AI tools, which could help deter revenge porn. Cyber-stalking, online harassment, misogynous hate speech and “cyber-flashing,” or the sending of unsolicited nudes, will also be recognized as criminal offenses.

The commission says that having a directive for the whole European Union that specifically addresses those particular acts will help victims in Member States that haven’t criminalized them yet. “This is an urgent issue to address, given the exponential spread and dramatic impact of violence online,” it wrote in its announcement.

[…]

In its reporting, Politico suggested that the recent spread of pornographic deepfake images using Taylor Swift’s face prompted EU officials to move forward with the proposal.

[…]

“The final law is also pending adoption in Council and European Parliament,” the EU Council said. According to Politico, if all goes well and the bill becomes a law soon, EU states will have until 2027 to enforce the new rules.

Source: The EU wants to criminalize AI-generated porn images and deepfakes

The original article has a seriously misleading title, I guess for clickbait.

Hundreds of thousands of EU citizens ‘wrongly fined for driving in London Ulez’ in one of the EU’s largest privacy breaches

Hundreds of thousands of EU citizens were wrongly fined for driving in London’s Ulez clean air zone, according to European governments, in what has been described as “possibly one of the largest data breaches in EU history”.

The Guardian can reveal Transport for London (TfL) has been accused by five EU countries of illegally obtaining the names and addresses of their citizens in order to issue the fines, with more than 320,000 penalties, some totalling thousands of euros, sent out since 2021.

[…]

Since Brexit, the UK has been banned from automatic access to personal details of EU residents. Transport authorities in Belgium, Spain, Germany and the Netherlands have confirmed to the Guardian that driver data cannot be shared with the UK for enforcement of London’s ultra-low emission zone (Ulez), and claim registered keeper details were obtained illegally by agents acting for TfL’s contractor Euro Parking Collection.

In France, more than 100 drivers have launched a lawsuit claiming their details were obtained fraudulently, while Dutch lorry drivers are taking legal action against TfL over £6.5m of fines they claim were issued unlawfully.

According to the Belgian MP Michael Freilich, who has investigated the issue on behalf of his constituents, TfL is treating European drivers as a “cash cow” by using data obtained illegitimately to issue unjustifiable fines.

Many of the penalties have been issued to drivers who visited London in Ulez-compliant vehicles and were not aware they had to be registered with TfL’s collections agent Euro Parking at least 10 days before their visit.

Failure to register does not count as a contravention, according to Ulez rules, but some drivers have nonetheless received penalties of up to five-figure sums.

[…]

Some low-emission cars have been misclassed as heavy goods diesel vehicles and fined under the separate low-emission zone (Lez) scheme, which incurs penalties of up to £2,000 a day. Hundreds of drivers have complained that the fines arrived weeks after the early payment discount and appeals deadlines had passed.

One French driver was fined £25,000 for allegedly contravening Lez and Ulez rules, despite the fact his minibus was exempt.

[…]

EU countries say national laws allow the UK to access personal data only for criminal offences, not civil ones. Breaching Ulez rules is a civil offence, while more risky behaviour such as speeding or driving under the influence of drink or drugs can be a criminal offence. This raises the question of whether Euro Parking can legally carry out its contract with TfL.

Euro Parking was awarded a five-year contract by TfL in 2020 to recover debts from foreign drivers who had breached congestion or emission zone rules.

The company, which is paid according to its performance, is estimated to have earned between £5m and £10m. It has the option to renew for a further five years.

The firm is owned by the US transport technology group Verra Mobility, which is listed on the Nasdaq stock exchange and headed by the former Bank of America Merrill Lynch executive David Roberts. The company’s net revenue was $205m (£161m) in the second quarter of 2023.

In October, the Belgian government ordered a criminal investigation after a court bailiff was accused of illegally passing the details of 20,000 drivers to Euro Parking for Ulez enforcement. The bailiff was suspended in 2022 and TfL initially claimed that no Belgian data had been shared with Euro Parking since then. However, a freedom of information request by the Guardian found that more than 17,400 fines had been issued to Belgians in the intervening 19 months.

[…]

Campaigners accuse Euro Parking of circumventing data protection rules by using EU-based agents to request driver data without disclosing that it is for UK enforcement.

Last year, an investigation by the Dutch vehicle licensing authority RDW found that the personal details of 55,000 citizens had been obtained via an NCP (national contact point) in Italy. “The NCP informed us that the authorised users have used the data in an unlawful way and stopped their access,” a spokesperson said.

The German transport authority KBA claimed that an Italian NCP was used to obtain information from its database. “Euro Parking obtained the data through unlawful use of an EU directive to facilitate the cross-border exchange of information about traffic offences that endanger road safety,” a KBA spokesperson said. “The directive does not include breaches of environmental rules.”

Spain’s transport department told the Guardian that UK authorities were not allowed access to driver details for Ulez enforcement. Euro Parking has sent more than 25,600 fines to Spanish drivers since 2021.

In France, 102 drivers have launched a lawsuit claiming that their details were fraudulently obtained

[…]

Source: Hundreds of thousands of EU citizens ‘wrongly fined for driving in London Ulez’ | TfL | The Guardian

I guess Brexit has panned out economically much worse than we thought

iPhone Apps Secretly Harvest Data When They Send You Notifications, Researchers Find

iPhone apps including Facebook, LinkedIn, TikTok, and X/Twitter are skirting Apple’s privacy rules to collect user data through notifications, according to tests by security researchers at Mysk Inc., an app development company. Users sometimes close apps to stop them from collecting data in the background, but this technique gets around that protection. The data is unnecessary for processing notifications, the researchers said, and seems related to analytics, advertising, and tracking users across different apps and devices.

It’s par for the course that apps would find opportunities to sneak in more data collection, but “we were surprised to learn that this practice is widely used,” said Tommy Mysk, who conducted the tests along with Talal Haj Bakry. “Who would have known that an innocuous action as simple as dismissing a notification would trigger sending a lot of unique device information to remote servers? It is worrying when you think about the fact that developers can do that on-demand.”

These particular apps aren’t unusual bad actors. According to the researchers, it’s a widespread problem plaguing the iPhone ecosystem.

This isn’t the first time Mysk’s tests have uncovered data problems at Apple, which has spent untold millions convincing the world that “what happens on your iPhone, stays on your iPhone.” In October 2023, Mysk found that a lauded iPhone feature meant to protect details about your WiFi address isn’t as private as the company promises. In 2022, Apple was hit with over a dozen class action lawsuits after Gizmodo reported on Mysk’s finding that Apple collects data about its users even after they flip the switch on an iPhone privacy setting that promises to “disable the sharing of device analytics altogether.”

The data looks like information that’s used for “fingerprinting,” a technique companies use to identify you based on several seemingly innocuous details about your device. Fingerprinting circumvents privacy protections to track people and send them targeted ads

[…]

For example, the tests showed that when you interact with a notification from Facebook, the app collects IP addresses, the number of milliseconds since your phone was restarted, the amount of free memory space on your phone, and a host of other details. Combining data like these is enough to identify a person with a high level of accuracy. The other apps in the test collected similar information. LinkedIn, for example, uses notifications to gather which timezone you’re in, your display brightness, and what mobile carrier you’re using, as well as a host of other information that seems specifically related to advertising campaigns, Mysk said.
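To see why combining such low-value signals matters, here is a minimal, hypothetical sketch (not any of these apps’ actual code; the attribute names are invented) of how a handful of innocuous device details can be hashed into a stable identifier:

```python
import hashlib
import json

def fingerprint(attrs: dict) -> str:
    """Combine seemingly innocuous device attributes into a stable ID.

    The same device reporting the same attributes always produces the
    same hash, so the ID can be recognized across apps and servers.
    """
    canonical = json.dumps(attrs, sort_keys=True)  # stable ordering
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

device = {
    "timezone": "Europe/Amsterdam",
    "carrier": "ExampleCell",
    "free_disk_mb": 23481,
    "uptime_bucket_ms": 100_000,  # bucketed "ms since reboot"
    "brightness": 0.7,
}
print(fingerprint(device))
```

Change any one attribute and the ID changes; keep them all the same and the ID is reproducible, which is exactly what makes this useful for tracking without a cookie or explicit identifier.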

[…]

Apps can collect this kind of data about you when they’re open, but swiping an app closed is supposed to cut off the flow of data and stop an app from running whatsoever. However, it seems notifications provide a backdoor.

Apple provides special software to help your apps send notifications. For some notifications, the app might need to play a sound or download text, images, or other information. If the app is closed, the iPhone operating system lets the app wake up temporarily to contact company servers, send you the notification, and perform any other necessary business. The data harvesting Mysk spotted happened during this brief window.

[…]

Source: iPhone Apps Secretly Harvest Data When They Send You Notifications, Researchers Find

France fines Amazon $35 million over intrusive employee surveillance

France’s data privacy watchdog organization, the CNIL, has fined a logistics subsidiary of Amazon €32 million, or $35 million in US dollars, over the company’s use of an “overly intrusive” employee surveillance system. The CNIL says that the system employed by Amazon France Logistique “measured work interruptions with such accuracy, potentially requiring employees to justify every break or interruption.”

Of course, this system was forced on the company’s warehouse workers, as they seem to always get the short end of the Amazon stick. The CNIL says the surveillance software tracked the inactivity of employees via a mandatory barcode scanner that’s used to process orders. The system tracks idle time as interruptions in barcode scans, calling out employees for periods of downtime as low as one minute. The French organization ruled that the accuracy of this system was illegal, using Europe’s General Data Protection Regulation (GDPR) as a legal basis for the ruling.
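The mechanics of such a system are simple to reconstruct. A hedged sketch, with made-up function names, using the one-minute threshold the CNIL describes (gaps between consecutive barcode scans are treated as "interruptions"):

```python
from datetime import datetime, timedelta

def idle_periods(scan_times, threshold=timedelta(minutes=1)):
    """Return (start, duration) for every gap between consecutive
    barcode scans that meets or exceeds the threshold."""
    gaps = []
    for prev, cur in zip(scan_times, scan_times[1:]):
        gap = cur - prev
        if gap >= threshold:
            gaps.append((prev, gap))
    return gaps

scans = [datetime(2024, 1, 22, 9, 0, 0),
         datetime(2024, 1, 22, 9, 0, 30),
         datetime(2024, 1, 22, 9, 2, 0)]
print(idle_periods(scans))  # flags the single 90-second gap
```

The CNIL’s objection was precisely this accuracy: at a one-minute resolution, every toilet break or dropped scanner shows up as a flagged "interruption" to justify.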

To that end, this isn’t being classified as a labor case, but rather a data processing case regarding excessive monitoring. “As implemented, the processing is considered to be excessively intrusive,” the CNIL wrote, noting that Amazon uses this data to assess employee performance on a weekly basis. The organization also noted that Amazon held onto this data for all employees and temporary workers.

[…]

Source: France fines Amazon $35 million over ‘intrusive’ employee surveillance

Dutch phones can be easily tracked online: ‘Extreme security risk’

[Image: a map of the Netherlands with cellphone towers]

BNR received more than 80 gigabytes of location data from data traders: the coordinates of millions of telephones, often registered dozens of times a day.

The gigantic mountain of data also includes the movements of people in security-sensitive roles. A senior army officer could be followed as he drove from his home in the Randstad to various military locations in the country. A destination he often visited was the Frederikazerne, headquarters of the Military Intelligence and Security Service (MIVD). The soldier confirmed the authenticity of the data to BNR by telephone.

[…]

The data also reveals the home address of someone who often visits the Penitentiary in Vught, where terrorists and serious criminals are imprisoned. A spokesperson for the Judicial Institutions Agency (DJI) confirmed that the person, who according to the Land Registry lives at this address, had actually brought a mobile phone onto the premises with permission and stated that the matter was being investigated.

These are just examples; the list of potential targets is long: up to 1,200 phones in the dataset visited the office in Zoetermeer where the National Police, National Public Prosecutor’s Office and Europol are located. Up to 70 telephones were registered at the King’s residential palace, Huis ten Bosch. At the Volkel Air Base, a storage point for nuclear weapons, up to 370 telephones were counted. The National Police’s management says it is aware of the problem and is ‘looking internally to see what measures are appropriate to combat this’.

‘National security implications’

BNR had two experts inspect the dataset. “This is an extreme security risk, with possible implications for national security,” says Ralph Moonen, technical director of Secura. “It’s really shocking that this can happen like this,” says Sjoerd van der Meulen, cybersecurity specialist at DataExpert.

The technology used to track mobile phones is designed for use by advertisers, but is suitable for other purposes, says Paul Pols, former technical advisor to the Assessment Committee for the Use of Powers, which supervises the intelligence services. According to Pols, it is known that the MIVD and AIVD also purchase access to this type of data on the data market under the heading ‘open sources’. “What is striking about this case is that you can easily access large amounts of data from Dutch citizens,” said the cybersecurity expert.

For sale via an online marketplace in Berlin

That access was achieved through an online marketplace based in Berlin. On this platform, Datarade.ai, hundreds of companies offer personal data for sale. In addition to location data, medical information and credit scores are also available.

Following a tip from a data subject, BNR responded to an advertisement offering location data of Dutch users. A sales employee of the platform then contacted two medium-sized providers: Datastream Group from Florida in the US and Factori.ai from Singapore – both companies have fewer than 50 employees, according to their LinkedIn pages.

Datastream and Factori offer similar services: a subscription to the location data of mobile phones in the Netherlands is available for prices starting from $2,000 per month. Those who pay more can receive fresh data every 24 hours via the cloud, possibly even from all over the world.

[…]

Upon request, BNR was sent a full month of historical data from Dutch telephones. The data was nominally anonymized – it did not contain telephone numbers. Individual phones can nevertheless be recognized by a unique number combination, the ‘mobile advertising ID’ used by Apple and Google to show individual users relevant advertisements within the limits of European privacy legislation.

Possibly four million Dutch victims of tracking

The precise origin of the data traded online is unclear. According to the providers, it comes from apps whose users have given permission to use location data, such as fitness or navigation apps that sell the data on. This is how the data ultimately ends up at Factori and Datastream. By combining data from multiple sources, gigantic files are created.

[…]

it is not difficult to recognize the owners of individual phones in the data. By linking sleeping places to data from public registers, such as the Land Registry, and workplaces to LinkedIn profiles, BNR was able to identify, in addition to the army officer, a project manager from Alphen aan den Rijn and an amateur football referee. The discovery that they had been digitally stalked for at least a month prompted shocked reactions: ‘Bizarre’, and ‘I immediately turned off location sharing on my phone’.
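The re-identification step BNR describes is not exotic. A minimal sketch (hypothetical field names; the ~100 m coordinate rounding and the 22:00–06:00 window are arbitrary choices) of inferring a "sleeping place" from raw advertising-ID pings:

```python
from collections import Counter, defaultdict

def likely_home(pings):
    """pings: iterable of (ad_id, hour_of_day, (lat, lon)) tuples.
    The most frequent coarse location seen overnight is a strong
    proxy for a home address, which can then be matched against
    public registers such as the Land Registry."""
    nightly = defaultdict(Counter)
    for ad_id, hour, (lat, lon) in pings:
        if hour >= 22 or hour < 6:
            # round coordinates to ~100 m to cluster nearby pings
            nightly[ad_id][(round(lat, 3), round(lon, 3))] += 1
    return {ad_id: locs.most_common(1)[0][0]
            for ad_id, locs in nightly.items()}
```

Daytime pings, matched the same way against workplaces or LinkedIn profiles, complete the picture; no telephone number is ever needed.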

Trade is prohibited, but the government does not act

Datarade, the Berlin data marketplace, informed BNR in an email that traders on their platform are ‘fully liable’ for the data they offer. Illegal practices can be reported using an online form. The spokesperson for the German company leaves open the question of whether measures are being taken against the sale of location data.

[…]

Source (Google Translate): Dutch phones can be secretly tracked online: ‘Extreme security risk’ | BNR News Radio

Source (Dutch original): Nederlandse telefoons online stiekem te volgen: ‘Extreem veiligheidsrisico’

Drivers would prefer to buy a low-tech car than one that shares their data

According to a survey of 2,000 Americans conducted by Kaspersky in November and published this week, 72 percent of drivers are uncomfortable with automakers sharing their data with advertisers, insurance companies, subscription services, and other third-party outfits. Specifically, 37.3 percent of those polled are “very uncomfortable” with this data sharing, and 34.5 percent are “somewhat uncomfortable.”

However, only 28 percent of the total respondents say they have any idea what kind of data their car is collecting. Spoiler alert: It’s potentially all the data. An earlier Mozilla Foundation investigation, which assessed the privacy policies and practices of 25 automakers, gave every single one a failing grade.

In Moz’s September Privacy Not Included report, the org warned that car manufacturers aren’t only potentially collecting and selling things like location history, driving habits and in-car browser histories. Some connected cars may also track drivers’ sexual activity, immigration status, race, facial expressions, weight, health, and even genetic information, if that information becomes available.

Back to the Kaspersky survey: 87 percent said automakers should be required to delete their data upon request. Depending on where you live, and thus the privacy law you’re under, the manufacturers may be obligated to do so.

Oddly, while motorists are worried about their cars sharing their data with third parties, they don’t seem that concerned about their vehicles snooping on them in the first place.

Less than half (41.8 percent) of respondents said they are worried that their vehicle’s sensors, infotainment system, cameras, microphones, and other connected apps and services might be collecting their personal data. And 80 percent of respondents pair their phone with their car anyway, allowing data and details of activities to be exchanged between apps and the vehicle and potentially its manufacturer.

This echoes another survey published this week that found many drivers are willing to trade their personal data and privacy for driver personalization — things like seat, mirror, and entertainment preferences (43 percent) — and better insurance rates (67 percent).

The study also surveyed 2,000 American drivers to come up with these numbers and found that while most drivers (68 percent) don’t mind automakers collecting their personal data, only five percent believe this surveillance should be unrestricted, and 63 percent said it should be on an opt-in basis.

Perhaps it’s time for vehicle makers to take note

Source: Surveyed drivers prefer low-tech cars over data-sharing ones • The Register

Also, we want buttons back too please.

Google agrees to settle $5 billion lawsuit accusing it of tracking Incognito users

In 2020, Google was hit with a lawsuit that accused it of tracking Chrome users’ activities even when they were using Incognito mode. Now, after a failed attempt to get it dismissed, the company has agreed to settle the complaint that originally sought $5 billion in damages. According to Reuters and The Washington Post, neither side has made the details of the settlement public, but they’ve already agreed to the terms that they’re presenting to the court for approval in February.

When the plaintiffs filed the lawsuit, they said Google used tools like its Analytics product, apps and browser plug-ins to monitor users. They reasoned that by tracking someone on Incognito, the company was falsely making people believe that they could control the information that they were willing to share with it. At the time, a Google spokesperson said that while Incognito mode doesn’t save a user’s activity on their device, websites could still collect their information during the session.

The lawsuit’s plaintiffs presented internal emails that allegedly showed conversations between Google execs proving that the company monitored Incognito browser usage to sell ads and track web traffic. Their complaint accused Google of violating federal wire-tapping and California privacy laws and was asking up to $5,000 per affected user. They claimed that millions of people who’d been using Incognito since 2016 had likely been affected, which explains the massive damages they were seeking from the company. Google has likely agreed to settle for an amount lower than $5 billion, but it has yet to reveal details about the agreement and has yet to get back to Engadget with an official statement.

Source: Google agrees to settle $5 billion lawsuit accusing it of tracking Incognito users

Verizon Once Again Busted Handing Out Sensitive Wireless Subscriber Information To Any Nitwit Who Asks For It – because no US enforcement of any kind

Half a decade ago we documented how the U.S. wireless industry was caught over-collecting sensitive user location and vast troves of behavioral data, then selling access to that data to pretty much anybody with a couple of nickels to rub together. It resulted in no limit of abuse from everybody from stalkers to law enforcement — and even to people pretending to be law enforcement.

While the FCC purportedly moved to fine wireless companies for this behavior, the agency still hasn’t followed through, despite the obvious ramifications of this kind of behavior in a post-Roe, authoritarian era.

Nearly a decade later, and it’s still a very obvious problem. The folks over at 404 Media have documented the case of a stalker who managed to game Verizon in order to obtain sensitive data about his target, including her address, location data, and call logs.

Her stalker posed as a police officer (badly) and, as usual, Verizon did virtually nothing to verify his identity:

“Glauner’s alleged scheme was not sophisticated in the slightest: he used a ProtonMail account, not a government email, to make the request, and used the name of a police officer that didn’t actually work for the police department he impersonated, according to court records. Despite those red flags, Verizon still provided the sensitive data to Glauner.”

In this case, the stalker found it relatively trivial to take advantage of Verizon Security Assistance and Court Order Compliance Team (or VSAT CCT), which verifies law enforcement requests for data. You’d think that after a decade of very ugly scandals on this front Verizon would have more meaningful safeguards in place, but you’d apparently be wrong.

Keep in mind: the FCC tried to impose some fairly basic privacy rules for broadband and wireless in 2016, but the telecom industry, in perfect lockstep with Republicans, killed those efforts before they could take effect, claiming they’d be too harmful for the super competitive and innovative (read: not competitive or innovative at all) U.S. broadband industry.

[…]

Source: Verizon Once Again Busted Handing Out Sensitive Wireless Subscriber Information To Any Nitwit Who Asks For It | Techdirt

UK Police to be able to run AI face recognition searches on all driving licence holders

The police will be able to run facial recognition searches on a database containing images of Britain’s 50 million driving licence holders under a law change being quietly introduced by the government.

Should the police wish to put a name to an image collected on CCTV, or shared on social media, the legislation would provide them with the powers to search driving licence records for a match.

The move, contained in a single clause in a new criminal justice bill, could put every driver in the country in a permanent police lineup, according to privacy campaigners.

[…]

The intention to allow the police or the National Crime Agency (NCA) to exploit the UK’s driving licence records is not explicitly referenced in the bill or in its explanatory notes, raising criticism from leading academics that the government is “sneaking it under the radar”.

Once the criminal justice bill is enacted, the home secretary, James Cleverly, must establish “driver information regulations” to enable the searches, but he will need only to consult police bodies, according to the bill.

Critics claim facial recognition technology poses a threat to the rights of individuals to privacy, freedom of expression, non-discrimination and freedom of assembly and association.

Police are increasingly using live facial recognition, which compares a live camera feed of faces against a database of known identities, at major public events such as protests.

Prof Peter Fussey, a former independent reviewer of the Met’s use of facial recognition, said there was insufficient oversight of the use of facial recognition systems, with ministers worryingly silent over studies that showed the technology was prone to falsely identifying black and Asian faces.

[…]

The EU had considered making images on its member states’ driving licence records available on the Prüm crime fighting database. The proposal was dropped earlier this year as it was said to represent a disproportionate breach of privacy.

[…]

Carole McCartney, a professor of law and criminal justice at the University of Leicester, said the lack of consultation over the change in law raised questions over the legitimacy of the new powers.

She said: “This is another slide down the ‘slippery slope’ of allowing police access to whatever data they so choose – with little or no safeguards. Where is the public debate? How is this legitimate if the public don’t accept the use of the DVLA and passport databases in this way?”

The government scrapped the role of the commissioner for the retention and use of biometric material and the office of surveillance camera commissioner this summer, leaving ministers without an independent watchdog to scrutinise such legislative changes.

[…]

In 2020, the court of appeal ruled that South Wales police’s use of facial recognition technology had breached privacy rights, data protection laws and equality laws, given the risk the technology could have a race or gender bias.

The force has continued to use the technology. Live facial recognition is to be deployed this year to match people attending Christmas markets against a watchlist.

Katy Watts, a lawyer at the civil rights advocacy group Liberty, said: “This is a shortcut to widespread surveillance by the state and we should all be worried by it.”

Source: Police to be able to run face recognition searches on 50m driving licence holders | Facial recognition | The Guardian

Internet Architecture Board hits out at US, EU, UK client-side scanning (spying on everything on your phone and pc all the time) plans – to save (heard it before?) kids

[…]

Apple brought widespread attention to this so-called client-side scanning in August 2021 when it announced plans to examine photos on iPhones and iPads before they were synced to iCloud, as a safeguard against the distribution of child sexual abuse material (CSAM). Under that plan, if someone’s files were deemed to be CSAM, the user could lose their iCloud account and be reported to the cops.

As the name suggests, client-side scanning involves software on a phone or some other device automatically analyzing files for unlawful photos and other content, and then performing some action – such as flagging or removing the documents or reporting them to the authorities. At issue, primarily, is the loss of privacy from the identification process – how will that work with strong encryption, and do the files need to be shared with an outside service? Then there’s the reporting process – how accurate is it, is there any human intervention, and what happens if your gadget wrongly fingers you to the cops?
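For context, most client-side scanning proposals rest on perceptual hashing: an image is reduced to a short bit string, and a match is declared if that string lies within a small Hamming distance of any entry on a blocklist. A schematic sketch (this is not Apple’s NeuralHash or any real system; the 4-bit tolerance is invented):

```python
def hamming(a: int, b: int) -> int:
    """Number of differing bits between two integer hashes."""
    return bin(a ^ b).count("1")

def flag(image_hash: int, known_hashes, max_distance: int = 4) -> bool:
    """Flag the image if its perceptual hash is 'near' any known hash."""
    return any(hamming(image_hash, h) <= max_distance
               for h in known_hashes)
```

The near-match tolerance is exactly what worries critics: it lets the system catch re-encoded or cropped copies, but it also admits false positives on innocent images, and the blocklist itself is opaque to the user whose device runs the check.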

The iGiant’s plan was pilloried by advocacy organizations and by customers on technical and privacy grounds. Ultimately Apple abandoned the effort and went ahead with offering iCloud encryption – a level of privacy that prompted political pushback at other tech titans.

Proposals for client-side scanning … mandate unrestricted access to private content and therefore undermine end-to-end encryption and bear the risk to become a widespread facilitator of surveillance and censorship

Client-side scanning has since reappeared, this time on legislative agendas. And the IAB – a research committee for the Internet Engineering Task Force (IETF), a crucial group of techies who help keep the ‘net glued together – thinks that’s a bad idea.

“A secure, resilient, and interoperable internet benefits the public interest and supports human rights to privacy and freedom of opinion and expression,” the IAB declared in a statement just before the weekend.

[…]

Specifically, the IAB cites Europe’s planned “Regulation laying down rules to prevent and combat child sexual abuse” (2022/0155(COD)), the UK Online Safety Act of 2023, and the US Earn-It Act, all of which contemplate regulatory regimes that have the potential to require the decryption of encrypted content in support of mandated surveillance.

The administrative body acknowledges the social harm done through the distribution of illegal content on the internet and the need to protect internet users. But it contends indiscriminate surveillance is not the answer.

The UK has already passed its Online Safety Act legislation, which authorizes telecom watchdog Ofcom to demand decryption of communications on grounds of child safety – though government officials have admitted that’s not technically feasible at the moment.

Europe is under fire for concealing who it consulted on client-side scanning, and the US appears to be heading down a similar path.

For the IAB and IETF, client-side scanning initiatives echo other problematic technology proposals – including wiretaps, cryptographic backdoors, and pervasive monitoring.

“The IAB opposes technologies that foster surveillance as they weaken the user’s expectations of private communication which decreases the trust in the internet as the core communication platform of today’s society,” the organization wrote. “Mandatory client-side scanning creates a tool that is straightforward to abuse as a widespread facilitator of surveillance and censorship.”

[…]

Source: Internet Architecture Board hits out at client-side scanning • The Register

As soon as they take away privacy to save the kids, you know they will expand the remit, as governments have always done. The fact is that mass surveillance is not particularly effective, even with AI, except at making people feel watched and thus altering their behaviour. This feeling of always being spied upon is much, much worse for whole generations of children than the tiny number of sexual predators who might actually be caught.
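To make concrete what “client-side scanning” means technically: the user’s own device fingerprints content and compares it against an externally supplied watchlist before (or despite) end-to-end encryption. Here is a minimal conceptual sketch in Python – the watchlist contents and the use of SHA-256 are purely illustrative; real proposals use perceptual hashes (robust to resizing and re-encoding) such as Apple’s abandoned NeuralHash scheme:

```python
import hashlib

# Hypothetical watchlist of content fingerprints pushed to the device.
# Real schemes use perceptual hashes, not cryptographic ones;
# SHA-256 here is purely illustrative.
WATCHLIST = {
    hashlib.sha256(b"known-illegal-image-bytes").hexdigest(),
}

def scan_before_send(payload: bytes) -> bool:
    """Return True if the payload may be sent, False if it is flagged.

    The scan runs on the user's own device, *before* end-to-end
    encryption is applied -- which is exactly why critics call it a
    backdoor: whoever controls the watchlist controls what gets flagged.
    """
    fingerprint = hashlib.sha256(payload).hexdigest()
    return fingerprint not in WATCHLIST
```

The abuse potential the IAB warns about is visible even in this toy version: nothing in the mechanism constrains the watchlist to child-abuse material – any fingerprint a government adds (a protest flyer, a banned book cover) is matched by exactly the same machinery.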

Google Will Stop Telling Law Enforcement Which Users Were Near a Crime, will start saving location data on the mobile device instead of its servers. But not really, though. And why?

Most of the breathless reporting on Google’s “Updates to Location History and new controls coming soon to Maps” reads a bit like the piece below. However, Google itself, in “Manage your Location History”, says that if you have Location History on, it will also save the data to its servers. There is no mention of encryption.

Alphabet Inc.’s Google is changing its Maps tool so that the company no longer has access to users’ individual location histories, cutting off its ability to respond to law enforcement warrants that ask for data on everyone who was in the vicinity of a crime.

Google is changing its Location History feature on Google Maps, according to a blog post this week. The feature, which Google says is off by default, helps users remember where they’ve been. The company said Thursday that for users who have it enabled, location data will soon be saved directly on users’ devices, blocking Google from being able to see it, and, by extension, blocking law enforcement from being able to demand that information from Google.

“Your location information is personal,” said Marlo McGriff, director of product for Google Maps, in the blog post. “We’re committed to keeping it safe, private and in your control.”

The change comes three months after a Bloomberg Businessweek investigation that found police across the US were increasingly using warrants to obtain location and search data from Google, even for nonviolent cases, and even for people who had nothing to do with the crime.

“It’s well past time,” said Jennifer Lynch, the general counsel at the Electronic Frontier Foundation, a San Francisco-based nonprofit that defends digital civil liberties. “We’ve been calling on Google to make these changes for years, and I think it’s fantastic for Google users, because it means that they can take advantage of features like location history without having to fear that the police will get access to all of that data.”

Google said it would roll out the changes gradually through the next year on its own Android and Apple Inc.’s iOS mobile operating systems, and that users will receive a notification when the update comes to their account. The company won’t be able to respond to new geofence warrants once the update is complete, including for people who choose to save encrypted backups of their location data to the cloud.

“It’s a good win for privacy rights and sets an example,” said Jake Laperruque, deputy director of the security and surveillance project at the Center for Democracy & Technology. The move validates what litigators defending the privacy of location data have long argued in court: that just because a company might hold data as part of its business operations, that doesn’t mean users have agreed the company has a right to share it with a third party.

Lynch, the EFF lawyer, said that while Google deserves credit for the move, it’s long been the only tech company that the EFF and other civil-liberties groups have seen responding to geofence warrants. “It’s great that Google is doing this, but at the same time, nobody else has been storing and collecting data in the same way as Google,” she said. Apple, which also has an app for Maps, has said it’s technically unable to supply the sort of location data police want.

There’s still another kind of warrant that privacy advocates are concerned about: so-called reverse keyword search warrants, where police can ask a technology company to provide data on the people who have searched for a given term. “Search queries can be extremely sensitive, even if you’re just searching for an address,” Lynch said.

Source: Google Will Stop Telling Law Enforcement Which Users Were Near a Crime
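For context, a geofence warrant is essentially a spatial-temporal query over a provider’s central location store: “give me every device seen inside this bounding box during this time window.” A minimal sketch of that query, with a hypothetical record schema – the point of Google’s change is that once the data lives only on handsets, there is no central table left to run this against:

```python
from dataclasses import dataclass

@dataclass
class LocationRecord:
    # Hypothetical schema for a server-side location log entry.
    device_id: str
    lat: float
    lon: float
    timestamp: int  # Unix seconds

def geofence_query(records, lat_min, lat_max, lon_min, lon_max, t_start, t_end):
    """Return the device IDs seen inside the box during the window --
    the essence of what a geofence warrant compels a provider to produce."""
    return {
        r.device_id
        for r in records
        if lat_min <= r.lat <= lat_max
        and lon_min <= r.lon <= lon_max
        and t_start <= r.timestamp <= t_end
    }

logs = [
    LocationRecord("device-a", 37.77, -122.42, 1_700_000_100),  # inside box, in window
    LocationRecord("device-b", 40.71, -74.01, 1_700_000_200),   # outside box
    LocationRecord("device-c", 37.78, -122.41, 1_600_000_000),  # inside box, too early
]
hits = geofence_query(logs, 37.7, 37.8, -122.5, -122.4, 1_700_000_000, 1_700_001_000)
```

Note how the query sweeps in everyone who happened to be nearby, not just a suspect – which is why Bloomberg found innocent bystanders caught up in these warrants.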

The question is – why now? The market for location data is estimated at around $12 billion (source: There’s a Murky Multibillion-Dollar Market for Your Phone’s Location Data). If you look even a tiny little bit, you see the government asking for this data all the time, and the fines issued for breaching location data privacy seem tiny compared to the money made by selling it.

Google will be changing the name of Location History to Timeline as well – and will still be saving your location to its servers (see the heading “When Location History is on”):

Manage your Location History

In the coming months, the Location History setting name will change to Timeline. If Location History is turned on for your account, you may find Timeline in your app and account settings.

Location History is a Google Account setting that creates Timeline, a personal map that helps you remember:

  • Places you go
  • Routes to destinations
  • Trips you take

It can also give you personalized experiences across Google based on where you go.

When Location History is on, even when Google apps aren’t in use, your precise device location is regularly saved to:

  • Your devices
  • Google servers

To make Google experiences helpful for everyone, we may use your data to:

  • Show information based on anonymized location data, such as:
    • Popular times
    • Environmental insights
  • Detect and prevent fraud and abuse.
  • Improve and develop Google services, such as ads products.
  • Help businesses determine if people visit their stores because of an ad, if you have Web & App Activity turned on.
    • We share only anonymous estimates, not personal data, with businesses.
    • This activity can include info about your location from your device’s general area and IP address.

Learn more about how Google uses location data.

Things to know about Location History:

  • Location History is off by default. We can only use it if you turn Location History on.
  • You can turn off Location History at any time in your Google Account’s Activity controls.
  • You can review and manage your Location History. You can:
    • Review places you’ve been in Google Maps Timeline.
    • Edit or delete your Location History anytime.

Important: Some of these steps work only on Android 8.0 and up. Learn how to check your Android version.

Turn Location History on or off

You can turn off Location History for your account at any time. If you use a work or school account, your administrator needs to make this setting available for you. If they do, you’ll be able to use Location History as any other user.

  1. Go to the “Location History” section of your Google Account.
  2. Choose whether your account or your devices can report Location History to Google.
    • Your account and all devices: At the top, turn Location History on or off.
    • Only a certain device: Under “This device” or “Devices on this account,” turn the device on or off.

When Location History is on

Google can estimate your location with:

  • Signals like Wi-Fi and mobile networks
  • GPS
  • Sensor information

Your device location may also periodically be used in the background. When Location History is on, even when Google apps aren’t in use, your device’s precise location is regularly saved to:

  • Your devices
  • Google servers

When you’re signed in with your Google Account, it saves the Location History of each device with the setting “Devices on this account” turned on. You can find this setting in the Location History settings on your Google Account.

You can choose which devices provide their location data to Location History. Your settings don’t change for other location services on your device.

When Location History is off

Your device doesn’t save its location to your Location History.

  • You may have previous Location History data in your account. You can manually delete it anytime.
  • Your settings don’t change for other location services on your device.
  • If settings like Web and App Activity are on but you turn off Location History or delete location data from Location History, your Google Account may still save location data as part of your use of other Google sites, apps, and services. This activity can include info about your location from your device’s general area and IP address.

Delete Location History

You can manage and delete your Location History information with Google Maps Timeline. You can choose to delete all of your history, or only parts of it.

Important: When you delete Location History information from Timeline, you won’t be able to see it again.

Automatically delete your Location History

You can choose to automatically delete Location History that’s older than 3 months, 18 months, or 36 months.

What happens after you delete some or all Location History

If you delete some or all of your Location History, personalized experiences across Google may degrade or be lost. For example, you may lose:

  • Recommendations based on places you visit
  • Real-time information about when best to leave for home or work to beat traffic

Important: If you have other settings like Web & App Activity turned on and you pause Location History or delete location data from Location History, you may still have location data saved in your Google Account as part of your use of other Google sites, apps, and services. For example, location data may be saved as part of activity on Search and Maps when your Web & App Activity setting is on, and included in your photos depending on your camera app settings. Web & App Activity can include info about your location from your device’s general area and IP address.

Learn about use & diagnostics for Location History

After you turn on Location History, your device may send diagnostic information to Google about what works or doesn’t work for Location History. Google processes any information it collects under Google’s privacy policy.

Learn more about other location settings

Source: Manage your Location History

US Law enforcement can obtain prescription records from pharmacy giants without a warrant

America’s eight largest pharmacy providers have shared customers’ prescription records with law enforcement when faced with subpoena requests, The Washington Post reported Tuesday. The news arrives amid patients’ growing privacy concerns in the wake of the Supreme Court’s 2022 overturning of Roe v. Wade.

The new look into the legal workarounds was first detailed in a letter sent by Sen. Ron Wyden (D-OR) and Reps. Pramila Jayapal (D-WA) and Sara Jacobs (D-CA) on December 11 to the secretary of the Department of Health and Human Services.

Pharmacies can hand over detailed, potentially compromising information due to legal fine print. Health Insurance Portability and Accountability Act (HIPAA) regulations restrict patient data sharing between “covered entities” like doctor offices, hospitals, and other medical facilities—but these guidelines are looser for pharmacies. And while search warrants require a judge’s approval to serve, subpoenas do not.

[…]

Given each company’s national network, patient records are often shared interstate between any pharmacy location. This could become legally fraught for medical history access within states that already have—or are working to enact—restrictive medical access laws. In an essay written for The Yale Law Journal last year, cited by WaPo, University of Connecticut associate law professor Carly Zubrzycki argued, “In the context of abortion—and other controversial forms of healthcare, like gender-affirming treatments—this means that cutting-edge legislative protections for medical records fall short.”

[…]

Source: Law enforcement can obtain prescription records from pharmacy giants without a warrant | Popular Science

Proposed US surveillance regime makes anyone with a modem a Big Brother spy. The choice is between full-on spying and full-on spying.

Under rules being considered, any telecom service provider or business with custodial access to telecom equipment – a hotel IT technician, an employee at a cafe with Wi-Fi, or a contractor responsible for installing a home broadband router – could be compelled to enable electronic surveillance. And this would no longer apply only to those directly involved in data transit and data storage.

This week, the US House of Representatives is expected to conduct a floor vote on two bills that reauthorize Section 702 of the Foreign Intelligence Surveillance Act (FISA), which is set to expire in 2024.

Section 702, as The Register noted last week, permits US authorities to intercept the electronic communications of people outside the US for foreign intelligence purposes – without a warrant – even if that communication involves US citizens and permanent residents.

As the Electronic Frontier Foundation argues, Section 702 has allowed the FBI to conduct invasive, warrantless searches of protesters, political donors, journalists, and even members of Congress.

More than a few people would therefore be perfectly happy if the law lapsed – on the other hand, law enforcement agencies insist they need Section 702 to safeguard national security.

The pending vote is expected to be conducted under “Queen-of-the-Hill Rules,” which in this instance might also be described as “Thunderdome” – two bills enter, one bill leaves, with the survivor advancing to the US Senate for consideration. The prospect that neither would be approved and Section 702 would lapse appears … unlikely.

The two bills are: HR 6570, the Protect Liberty and End Warrantless Surveillance Act; and HR 6611, the FISA Reform and Reauthorization Act of 2023 (FRRA).

The former reauthorizes Section 702, but with strong civil liberties and privacy provisions. The civil rights community has lined up to support it.

As for the latter, Elizabeth Goitein, co-director of the Liberty and National Security Program at legal think tank the Brennan Center for Justice, explained that the FRRA changes the definition of electronic communication service provider (ECSP) in a way that expands the range of businesses required to share data with the US government.

“Going forward, it would not just be entities that have direct access to communications, like email and phone service providers, that could be required to turn over communications,” argues a paper prepared by the Brennan Center. “Any business that has access to ‘equipment’ on which communications are stored and transmitted would be fair game.”

According to Goitein, the bill’s sponsors have denied the language is intended to be interpreted so broadly.

A highly redacted FISA Court of Review opinion [PDF], released a few months ago, showed that the government has already pushed the bounds of the definition.

The court document discussed a petition to compel an unidentified entity to conduct surveillance. The petition was denied because the entity did not satisfy the definition of “electronic communication service provider,” and was instead deemed to be a provider of a product or service. That definition may change, it seems.

Goitein is not alone in her concern about the ECSP definition. She noted that a FISA Court amici – the law firm ZwillGen – has taken the unusual step of speaking out against the expanded definition of an ECSP.

In an assessment published last week, ZwillGen attorneys Marc Zwillinger and Steve Lane raised concerns about the FRRA covering a broad set of businesses and their employees.

“By including any ‘service provider’ – rather than any ‘other communication service provider’ – that has access not just to communications, but also to the ‘equipment that is being or may be used to transmit or store … communications,’ the expanded definition would appear to cover datacenters, colocation providers, business landlords, shared workspaces, or even hotels where guests connect to the internet,” they explained. They added that the addition of the term “custodian” to the service provider definition makes it apply to any third party providing equipment, storage – or even cleaning services.

The Brennan Center paper also raised other concerns – like the exemption for members of Congress from such surveillance. The FRRA bill requires the FBI to get permission from a member of Congress when it wants to conduct a query of their communications. No such courtesy is afforded to the people these members of Congress represent.

Goitein urged Americans to contact their representative and ask for a “no” vote on the FRRA and a “yes” on HR 6570, the Protect Liberty and End Warrantless Surveillance Act. ®

Source: Proposed US surveillance regime would enlist more businesses • The Register