Ring Spy Doorbell customers get measly $5.6 million in refunds in privacy settlement

In a 2023 complaint, the FTC accused the doorbell camera and home security provider of allowing its employees and contractors to access customers’ private videos. Ring allegedly used such footage to train algorithms without consent, among other purposes.

Ring was also charged with failing to implement key security protections, which enabled hackers to take control of customers’ accounts, cameras and videos. This led to “egregious violations of users’ privacy,” the FTC noted.

The resulting settlement required Ring to delete content that was found to be unlawfully obtained, establish stronger security protections

[…]

the FTC is sending 117,044 PayPal payments to impacted consumers who had certain types of Ring devices — including indoor cameras — during the timeframes that the regulators allege unauthorized access took place.

[…]

Earlier this year, the California-based company separately announced that it would stop allowing police departments to request doorbell camera footage from users, marking an end to a feature that had drawn criticism from privacy advocates.

Source: Ring customers get $5.6 million in refunds in privacy settlement | AP News

Considering the size of Ring and the size of the customer base, this is a very very light tap on the wrist for delivering poor security and something that spies on everything on the street.

Europol asks tech firms, governments to unencrypt your private messages

In a joint declaration of European police chiefs published over the weekend, Europol said it needs lawful access to private messages, and said tech companies need to be able to scan them (ostensibly impossible with E2EE implemented) to protect users. Without such access, cops fear they won’t be able to prevent “the most heinous of crimes” like terrorism, human trafficking, child sexual abuse material (CSAM), murder, drug smuggling and other crimes.

“Our societies have not previously tolerated spaces that are beyond the reach of law enforcement, where criminals can communicate safely and child abuse can flourish,” the declaration said. “They should not now.”

Not exactly true – most EU countries do not tolerate anyone opening your private (snail) mail without a warrant.

The joint statement, which was agreed to in cooperation with the UK’s National Crime Agency, isn’t exactly making a novel claim. It’s nearly the same line of reasoning that the Virtual Global Taskforce, an international law enforcement group founded in 2003 to combat CSAM online, made last year when Meta first started talking about implementing E2EE on Messenger and Instagram.

While Meta is not named in the latest declaration itself [PDF], Europol said that its opposition to E2EE “comes as end-to-end encryption has started to be rolled out across Meta’s messenger platform.” The UK NCA made a similar statement in its comments on the Europol missive released over the weekend.

The declaration urges the tech industry not to see user privacy as a binary choice, but rather as something that can be assured without depriving law enforcement of access to private communications.

Not really though. And if law enforcement can get at it, then so can everyone else.

[…] Gail Kent, Meta’s global policy director for Messenger, said in December that the E2EE debate is far more complicated than the child-safety issue law enforcement makes it out to be, and that leaving an encryption back door in products for police to take advantage of would only hamper trust in its messaging products.

Kent said Meta’s E2EE implementation prevents client-side scanning of content, which has been one of the biggest complaints from law enforcement. Kent said even that technology would violate user trust, as it serves as a workaround to intrude on user privacy without compromising encryption – an approach Meta is unwilling to take, according to Kent’s blog post.

As was pointed out during previous attempts to undermine E2EE, not only would an encryption back door (client-side scanning or otherwise) provide an inroad for criminals to access secured information, it wouldn’t stop criminals from finding some other way to send illicit content without the prying eyes of law enforcement able to take a look.
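
To see why security folks call client-side scanning a back door even though the cipher stays mathematically intact, consider this minimal Python sketch. Everything here is illustrative: the hash-matching scheme, the names, and the use of Fernet as a stand-in for a real E2EE channel are assumptions, not any vendor’s actual design.

```python
# Toy model of client-side scanning: the plaintext is checked against a
# watch list BEFORE encryption, so "end-to-end" no longer covers what gets
# reported. Requires the third-party 'cryptography' package; Fernet merely
# stands in for a real E2EE channel in this sketch.
import hashlib
from cryptography.fernet import Fernet

# Whoever controls this list controls what gets flagged -- CSAM hashes
# today, whatever a government demands tomorrow.
BLOCKLIST = {hashlib.sha256(b"example-banned-content").hexdigest()}

def report_to_authority(digest: str) -> None:
    print(f"flagged before encryption: {digest[:16]}...")

def send(plaintext: bytes, channel: Fernet) -> bytes:
    digest = hashlib.sha256(plaintext).hexdigest()
    if digest in BLOCKLIST:
        report_to_authority(digest)    # this path sits outside the encryption
    return channel.encrypt(plaintext)  # the crypto is intact; the privacy is not

channel = Fernet(Fernet.generate_key())
send(b"example-banned-content", channel)
```

Nothing about the encryption is weakened in the sketch, yet the confidentiality guarantee is gone – which is exactly the “workaround” Kent describes Meta as unwilling to ship.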

[…]

“We don’t think people want us reading their private messages, so have developed safety measures that prevent, detect and allow us to take action against this heinous abuse, while maintaining online privacy and security,” a Meta spokesperson told us last year. “It’s misleading and inaccurate to say that encryption would have prevented us from identifying and reporting accounts … to the authorities.”

In other words, don’t expect Meta to cave on this one when it can develop a fancy new detection algorithm instead.

Source: Europol asks tech firms, governments to get rid of E2EE • The Register

And every time they come for your freedom whilst quoting child safety – look out.

EDPS warns of EU plans to spy on personal chat messages

This week, during the presentation of its 2023 annual review (PDF), the European privacy supervisor EDPS again warned about European plans to monitor chat messages from European citizens. According to the watchdog, this leads to ‘irreversible surveillance’.

At the beginning of 2022, the European Commission came up with a proposal to inspect all chat messages and other communications from citizens for child abuse. In the case of end-to-end encrypted chat services, this should be done via client-side scanning.

The European Parliament voted against the proposal, but came up with a version of its own.

However, the European member states have not yet taken a joint position.

As early as 2022, the EDPS raised the alarm about the European Commission’s proposal to monitor citizens’ communications, seeing it as a serious risk to the fundamental rights of 450 million Europeans.

Source: EDPS warns of European plans to monitor chat messages – Emerce

Sure, so the EU is not much of a democracy with the European Council (which is where the actual power is) not being elected at all, but that doesn’t mean it has to be a surveillance police state.

US Hospital Websites Almost All Give your Data to 3rd parties, but Many just don’t tell you about it

 In this cross-sectional analysis of a nationally representative sample of 100 nonfederal acute care hospitals, 96.0% of hospital websites transmitted user information to third parties, whereas 71.0% of websites included a publicly accessible privacy policy. Of 71 privacy policies, 40 (56.3%) disclosed specific third-party companies receiving user information.

[…]

Of 100 hospital websites, 96 […] transferred user information to third parties. Privacy policies were found on 71 websites […] 70 […] addressed how collected information would be used, 66 […] addressed categories of third-party recipients of user information, and 40 […] named specific third-party companies or services receiving user information.

[…]

In this cross-sectional study of a nationally representative sample of 100 nonfederal acute care hospitals, we found that although 96.0% of hospital websites exposed users to third-party tracking, only 71.0% of websites had an available website privacy policy. Policies averaged more than 2,500 words in length and were written at a college reading level. Given estimates that more than one-half of adults in the US lack literacy proficiency and that the average patient in the US reads at a grade 8 level, the length and complexity of privacy policies likely pose substantial barriers to users’ ability to read and understand them.27,32

[…]

Only 56.3% of policies (and only 40 hospitals overall) identified specific third-party recipients. Named third-parties tended to be companies familiar to users, such as Google. This lack of detail regarding third-party data recipients may lead users to assume that they are being tracked only by a small number of companies that they know well, when, in fact, hospital websites included in this study transferred user data to a median of 9 domains.

[…]

In addition to presenting risks for users, inadequate privacy policies may pose risks for hospitals. Although hospitals are generally not required under federal law to have a website privacy policy that discloses their methods of collecting and transferring data from website visitors, hospitals that do publish website privacy policies may be subject to enforcement by regulatory authorities like the Federal Trade Commission (FTC).33 The FTC has taken the position that entities that publish privacy policies must ensure that these policies reflect their actual practices.34 For example, entities that promise they will delete personal information upon request but fail to do so in practice may be in violation of the FTC Act.34

[…]

Source: User Information Sharing and Hospital Website Privacy Policies | Ethics | JAMA Network Open | JAMA Network

Dutch investigation into Android smartphones leads to new lawsuit against Google over Play Services’ constant surveillance

The Mass Damage & Consumer Foundation today announced that it has initiated a class action lawsuit against Google over its Android operating system. The reason is a new study that shows how Dutch Android smartphones systematically transfer large amounts of information about device use to Google. Even with the most privacy-friendly options enabled, user data cannot be prevented from ending up on Google’s servers. According to the foundation, this is not clear to Android users, let alone whether they have given permission for this.

For the research, a team of scientists purchased several Android phones between 2022 and 2024 and captured, decrypted and analyzed the outgoing traffic on a Dutch server. The analysis shows that a bundle of processes called ‘Google Play Services’ runs silently in the background and cannot be disabled or deleted. These processes continuously record what happens on and around the phone: the phone shares with Google which apps someone uses, which products they order and even whether the user is sleeping.

More than nine million Dutch people

The Mass Damage & Consumer Foundation states that Google’s conduct violates a large number of Dutch and European rules meant to protect consumers. The foundation wants to use the lawsuit to force Google to implement fundamental (privacy) changes to the Android platform and to offer an opt-out option for every form of data it collects, not just a few.

[…]

Identity can be easily traced

The research paid specific attention to the use of unique identifiers (UIDs). These are characteristics that Google can link to the collected data, such as an e-mail address or the Android ID, a unique serial number by which someone is known to Google. The use of these identifiers is sensitive: in its own guidelines for app developers, Google advises against using unique identifiers, because users could unintentionally be tracked across multiple apps. Yet one or more of these unique identifiers were found in every data transmission examined, without exception. The researchers point out that this makes it easy to trace someone’s identity to virtually everything that happens on and around an Android device.
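
To illustrate why a persistent identifier in every transmission is such a big deal, here is a hypothetical sketch of the linkage problem the researchers describe. The field names and events are invented; this is not Google’s actual data model.

```python
# Sketch of the linkage problem: once every transmission carries the same
# persistent identifier (e.g. an Android ID), otherwise-separate event
# streams collapse into a single profile. All fields/events are invented.
from collections import defaultdict

transmissions = [
    {"android_id": "a1b2c3", "source": "app_usage", "event": "opened banking app"},
    {"android_id": "a1b2c3", "source": "activity",  "event": "asleep 23:40-07:10"},
    {"android_id": "a1b2c3", "source": "purchases", "event": "ordered pregnancy test"},
    {"android_id": "ffee99", "source": "app_usage", "event": "opened maps"},
]

profiles: dict[str, list[str]] = defaultdict(list)
for t in transmissions:
    profiles[t["android_id"]].append(f'{t["source"]}: {t["event"]}')

# One key lookup away from a full picture of device a1b2c3:
print("\n".join(profiles["a1b2c3"]))
```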

[…]

Source: Dutch investigation into Android smartphones leads to new lawsuit against Google – Mass Damage & Consumer Foundation

Academics Try to Figure Out Apple’s Default Apps’ Privacy Settings and Fail

A study has concluded that Apple’s privacy practices aren’t particularly effective, because default apps on the iPhone and Mac have limited privacy settings and confusing configuration options.

The research was conducted by Amel Bourdoucen and Janne Lindqvist of Aalto University in Finland. The pair noted that while many studies had examined privacy issues with third-party apps for Apple devices, very little literature investigates the issue in first-party apps – like Safari and Siri.

The aims of the study [PDF] were to investigate how much data Apple’s own apps collect and where it’s sent, and to see if users could figure out how to navigate the landscape of Apple’s privacy settings.

[…]

“Our work shows that users may disable default apps, only to discover later that the settings do not match their initial preference,” the paper states.

“Our results demonstrate users are not correctly able to configure the desired privacy settings of default apps. In addition, we discovered that some default app configurations can even reduce trust in family relationships.”

The researchers criticize data collection by Apple apps like Safari and Siri, where that data is sent, how users can (and can’t) disable that data tracking, and how Apple presents privacy options to users.

The paper illustrates these issues in a discussion of Apple’s Siri voice assistant. While users can ostensibly choose not to enable Siri in the initial setup on macOS-powered devices, it still collects data from other apps to provide suggestions. To fully disable Siri, Apple users must find privacy-related options across five different submenus in the Settings app.

Apple’s own documentation for how its privacy settings work isn’t good either. It doesn’t mention every privacy option, explain what is done with user data, or highlight whether settings are enabled or disabled. Also, it’s written in legalese, which almost guarantees no normal user will ever read it.

[…]

The authors also conducted a survey of Apple users and quizzed them on whether they really understood how privacy options worked on iOS and macOS, and what apps were doing with their data.

While the survey was very small – it covered just 15 respondents – the results indicated that Apple’s privacy settings could be hard to navigate.

Eleven of the surveyed users were well aware of data tracking and that it was mostly on by default. However, when informed about how privacy options work in iOS and macOS, nine of the surveyed users were surprised by the scope of data collection.

[…]

Users were also tested on their knowledge of privacy settings for eight default apps – including Siri, Family Sharing, Safari, and iMessage. According to the study, none could confidently figure out how to work their way around the Settings menu to completely disable default apps. When confused, users relied on searching the internet for answers, rather than Apple’s privacy documentation.

[…]

Assuming Apple has any interest in fixing these shortcomings, the team made a few suggestions. Since many users first went to operating system settings instead of app-specific settings when attempting to disable data tracking, centralizing these options in the system settings could assist users, and would also prevent them from getting frustrated and giving up on finding the settings they’re looking for.

Informing users what specific settings do would also be an improvement – many settings are labelled with just a name, but no further details. The researchers suggest replacing Apple’s jargon-filled privacy policy with descriptions that are in the settings menu itself, and maybe even providing some infographic illustrations as well. Anything would be better than legalese.

While this study probably won’t convince Apple to change its ways, lawsuits might have better luck. Apple has been sued multiple times for not transparently disclosing its data tracking. One of the latest suits calls out Apple’s broken promises about privacy, claiming that “Apple does not honor users’ requests to restrict data sharing.”

[…]

Reminder: Apple has a multi-billion-dollar online ads business that it built while strongly criticizing Facebook and others for their privacy practices.

Source: Academics reckon Apple’s default apps have privacy pitfalls • The Register

Roku’s New Idea to Show You Ads When You Pause Your Video Game and Spy on the Content on Your HDMI Cable Is Horrifying

[…]

Roku describes its idea in a patent application, which largely flew under the radar when it was filed in November, and was recently spotted by the streaming newsletter Lowpass. In the application, Roku describes a system that’s able to detect when users pause third-party hardware and software and show them ads during that time.

According to the company, its new system works via an HDMI connection. This suggests that it’s designed to target users who play video games or watch content from other streaming services on their Roku TVs. Lowpass described Roku’s conundrum perfectly:

“Roku’s ability to monetize moments when the TV is on but not actively being used goes away when consumers switch to an external device, be it a game console or an attached streaming adapter from a competing manufacturer,” Janko Roettgers, the newsletter’s author, wrote. “Effectively, HDMI inputs have been a bit of a black box for Roku.”

In addition, Roku wouldn’t just show you any old ads. The company states that its innovation can recognize the content that users have paused and deliver customized related ads. Roku’s system would do this by using audio or video-recognition technologies to analyze what the user is watching or analyze the content’s metadata, among other methods.

[…]

In the case of gaming, there’s also the danger of Roku mistaking a long moment of pondering for a pause and sticking an ad right when you’re getting ready to face the final boss. The company is aware of this potential failure and points out that its system will monitor the frames of the content being watched to ensure there really was a pause. It also plans on using other methods, such as analyzing the audio feed on the TV for extended moments of silence, to confirm there has been a pause.
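
Reading between the lines of the patent coverage, the detection heuristic is presumably something like the sketch below: frozen frames plus sustained silence equals a pause. The thresholds and the naive byte-for-byte frame comparison are guesses, not Roku’s actual method.

```python
# Minimal sketch of the pause heuristic as reportedly described: treat the
# HDMI input as paused only when the video frame has stopped changing AND
# the audio has been near-silent for a while. All thresholds invented.

def frames_identical(prev_frame: bytes, cur_frame: bytes) -> bool:
    return prev_frame == cur_frame  # a real system would use a perceptual diff

def is_paused(frame_history: list[bytes], audio_rms_history: list[float],
              silence_threshold: float = 0.01, min_samples: int = 5) -> bool:
    if len(frame_history) < min_samples:
        return False
    recent = frame_history[-min_samples:]
    frozen = all(frames_identical(recent[0], f) for f in recent[1:])
    silent = all(rms < silence_threshold for rms in audio_rms_history[-min_samples:])
    return frozen and silent  # a thinking player at a quiet menu still trips this

frames = [b"frame-A"] * 6      # six identical frames ~ frozen video
audio = [0.0] * 6              # silence
print(is_paused(frames, audio))  # True -> cue the ad
```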

[…]

Source: Roku’s New Idea to Show You Ads When You Pause Your Video Game Is Horrifying

Google will delete data collected from private browsing

In hopes of settling a lawsuit challenging its data collection practices, Google has agreed to destroy web browsing data it collected from users browsing in Chrome’s private modes – which weren’t as private as you might have thought.

The lawsuit [PDF], filed in June 2020 on behalf of plaintiffs Chasom Brown, Maria Nguyen, and William Byatt, sought to hold Google accountable for making misleading statements about privacy.

[…]

“Despite its representations that users are in control of what information Google will track and collect, Google’s various tracking tools, including Google Analytics and Google Ad Manager, are actually designed to automatically track users when they visit webpages – no matter what settings a user chooses,” the complaint claims. “This is true even when a user browses in ‘private browsing mode.'”

Chrome’s Incognito mode only provides privacy in the client by not keeping a locally stored record of the user’s browsing history. It does not shield website visits from Google.

[…]

During the discovery period from September 2020 through March 2022, Google produced more than 5.8 million pages of documents. Even so, it was sanctioned nearly $1 million in 2022 by Magistrate Judge Susan van Keulen – for concealing details about how it can detect when Chrome users employ Incognito mode.

What the plaintiffs’ legal team found might have been difficult to explain at trial.

“Google employees described Chrome Incognito Mode as ‘misleading,’ ‘effectively a lie,’ a ‘confusing mess,’ a ‘problem of professional ethics and basic honesty,’ and as being ‘bad for users, bad for human rights, bad for democracy,'” according to the declaration [PDF] of Mark C Mao, a partner with the law firm of Boies Schiller Flexner LLP, which represents the plaintiffs.

[…]

On December 26 last year the plaintiffs and Google agreed to settle the case. The plaintiffs’ attorneys have suggested the relief provided by the settlement is worth $5 billion – but nothing will be paid, yet.

The settlement covers two classes of people, both of which exclude those who were logged into their Google Account while browsing privately:

  • Class 1: All Chrome browser users with a Google account who accessed a non-Google website containing Google tracking or advertising code using such browser and who were (a) in “Incognito mode” on that browser and (b) were not logged into their Google account on that browser, but whose communications, including identifying information and online browsing history, Google nevertheless intercepted, received, or collected from June 1, 2016 through the present.
  • Class 2: All Safari, Edge, and Internet Explorer users with a Google account who accessed a non-Google website containing Google tracking or advertising code using such browser and who were (a) in a “private browsing mode” on that browser and (b) were not logged into their Google account on that browser, but whose communications, including identifying information and online browsing history, Google nevertheless intercepted, received, or collected from June 1, 2016 through the present.

The settlement [PDF] requires that Google: inform users that it collects private browsing data, both in its Privacy Policy and in an Incognito Splash Screen; “must delete and/or remediate billions of data records that reflect class members’ private browsing activities”; block third-party cookies in Incognito mode for the next five years (separately, Google is phasing out third-party cookies this year); and must delete the browser signals that indicate when private browsing mode is active, to prevent future tracking.

[…]

The class of affected people has been estimated to number about 136 million.


Source: Google will delete data collected from private browsing • The Register

The Digital Identity Wallet approved by parliament and council

On 28 February, the European Parliament gave its final approval to the Digital Identity Regulation, with 335 votes to 190 and 31 abstentions. It was adopted by the EU Council of Ministers on 26 March. The next step will be its publication in the Official Journal and its entry into force 20 days later.

The regulation introduces the EU Digital Identity Wallet, which will allow citizens to identify and authenticate themselves online to a range of public and private services, as well as store and share digital documents. Wallet users will also be able to create free digital signatures.

The EU Digital Identity Wallet will be used on a voluntary basis, and no one can be discriminated against for not using the wallet. The wallet will be open-source, to further encourage transparency, innovation, and enhance security.


Open-source code and new version of the ARF released for public feedback.

The open-source code of the EU Digital Identity Wallet, and the latest version of the Architecture and Reference Framework (ARF), are now available on our GitHub.

Version 1.3 of the ARF is now available to the public, to gather feedback before its adoption by the expert group. The ARF outlines how wallets distributed by Member States will function and contains a high-level overview of the standards and practices needed to build the wallet.

The open-source code of the wallet (also referred to as the reference implementation) is built on the specifications outlined in the ARF. It is based on a modular architecture composed of a set of business-agnostic, reusable components which will evolve in incremental steps and can be reused across multiple projects.

[…]

Large Scale Pilot projects are currently test-driving the many use cases of the EU Digital Identity Wallet in the real world.


Source: The Digital Identity Wallet is now on its way – EU Digital Identity Wallet –

This is an immensely complex project which is very very important to get right. I am very curious if they did.

Soofa Digital Kiosks Snatch Your Phone’s Data When You Walk By, sell it on

Digital kiosks from Soofa seem harmless, giving you bits of information alongside some ads. However, these kiosks, popping up throughout the United States, take your phone’s information and location data whenever you walk near them and sell it to local governments and advertisers, as NBC Boston first reported Monday.

“At Soofa, we developed the first pedestrian impressions sensor that measures accurate foot traffic in real-time,” says a page on the company’s website. “Soofa advertisers can check their analytics dashboard anytime to see how their campaigns are tracking towards impressions goals.”

While data tracking is commonplace online, it’s becoming more pervasive in the real world. Whenever you walk past a Soofa kiosk, it collects your phone’s unique identifier (MAC address), manufacturer, and signal strength. This allows it to track anyone who walks within a certain, unspecified range. It then creates a dashboard to share with advertisers and local governments to display analytics about how many people are walking and engaging with its billboards.

This can offer local cities new ways to understand how people use public spaces, and how many people are reading notices posted on these digital kiosks. However, it also gives local governments detailed information on how people move throughout society, and raises questions about how this data is being used.

[…]

In an email to Gizmodo, a Soofa spokesperson said the company does not share data with any third parties, and that it only offers the dashboard to the organization that bought the kiosk. The company also claims your MAC address is anonymized by the time it gets to advertisers and local governments.
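
That anonymization claim deserves skepticism: MAC addresses live in a small, structured space (the first three bytes are a public vendor prefix, the OUI), so a plain hash of one is recoverable by enumeration. Here is a sketch of the generic weakness, assuming hashing is the anonymization method – Soofa hasn’t disclosed what it actually does:

```python
# Why hashing is weak "anonymization" for MAC addresses: the space is tiny
# and structured, so a hashed MAC can be recovered by brute force.
# Illustrative values only; the OUI below is just an example prefix.
import hashlib
from itertools import product

def anonymize(mac: str) -> str:
    return hashlib.sha256(mac.encode()).hexdigest()

leaked_hash = anonymize("3c:22:fb:1a:2b:3c")  # what the vendor might share

KNOWN_OUI = "3c:22:fb"  # vendor prefixes are published in public OUI lists
for a, b, c in product(range(256), repeat=3):  # only 2**24 device suffixes
    candidate = f"{KNOWN_OUI}:{a:02x}:{b:02x}:{c:02x}"
    if anonymize(candidate) == leaked_hash:
        print("recovered:", candidate)
        break
```

With the vendor prefix known there are only 2^24 (about 16.7 million) candidate suffixes, which the loop above exhausts in seconds on a laptop; salting or truncating the hash raises the bar only marginally against such a small input space.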

However, Soofa also tells advertisers how to effectively use your location data on its website. It notes that advertisers can track when you’ve been near a physical billboard or kiosk in the real world based on location data. Then, using cookies, the advertisers can send you more digital ads later on. While Soofa didn’t invent this technique, it certainly seems to be promoting it.

[…]

Source: These Digital Kiosks Snatch Your Phone’s Data When You Walk By

Mass claim CUIC against virus scanner (but really tracking spyware) Avast

Privacy First has teamed up with the Austrian organisation NOYB (the organisation of privacy activist Max Schrems) to found the new mass-claim organisation CUIC. CUIC stands for Consumers United in Court, also pronounceable as ‘CU in Court’ (see you in court).

[…]

Millions spied on by virus scanner

CUIC today filed suit against software company Avast, whose virus scanners illegally collected the browsing behaviour of millions of people on computers, tablets and phones, including in the Netherlands. This data was then resold to other companies through an Avast subsidiary for millions of euros. It included data about users’ health, locations visited, political affiliation, religious beliefs, sexual orientation or economic situation, linked to each specific user through unique user IDs. In a press release, CUIC president Wilmar Hendriks put it as follows: “People thought they were safe with a virus scanner, but its very creator tracked everything they did on their computers. Avast sold this information to third parties for big money. They even advertised the goldmine of data they had captured. Companies like Avast should not be allowed to get away with this. That is why we are bringing this lawsuit. Those who will not listen must feel.”

Fines

Back in March 2023, the Czech privacy regulator (UOOU) concluded that Avast had violated the GDPR and fined the company approximately €13.7 million. The US federal consumer authority, the Federal Trade Commission (FTC), also recently ordered Avast to pay $16.5 million in compensation to users and ordered it to stop selling or making collected data available to third parties, to delete the collected data, and to implement a comprehensive privacy programme.

The lawsuit CUIC filed against Avast today should lead to compensation for users in the Netherlands.

[…]

Source: Mass claim CUIC against virus scanner Avast launched – Privacy First

Age Verification Laws Drag Us Back to the Dark Ages of the Internet

The fundamental flaw with the age verification bills and laws passing rapidly across the country is the delusional, unfounded belief that putting hurdles between people and pornography is going to actually prevent them from viewing porn. What will happen, and is already happening, is that people–including minors–will go to unmoderated, actively harmful alternatives that don’t require handing over a government-issued ID to see people have sex. Meanwhile, performers and companies that are trying to do the right thing will suffer.

[…]

Source: Age Verification Laws Drag Us Back to the Dark Ages of the Internet

The legislators passing these bills are doing so under the guise of protecting children, but what’s actually happening is a widespread rewiring of the scaffolding of the internet. They ignore long-established legal precedent that has said for years that age verification is unconstitutional, eventually and inevitably reducing everything we see online without impossible privacy hurdles and compromises to that which is not “harmful to minors.” The people who live in these states, including the minors the law is allegedly trying to protect, are worse off because of it. So is the rest of the internet.
Yet new legislation is advancing in Kentucky and Nebraska, while the state of Kansas just passed a law which even requires age-verification for viewing “acts of homosexuality,” according to a report: Websites can be fined up to $10,000 for each instance a minor accesses their content, and parents are allowed to sue for damages of at least $50,000. This means that the state can “require age verification to access LGBTQ content,” according to attorney Alejandra Caraballo, who said on Threads that “Kansas residents may soon need their state IDs” to access material that simply “depicts LGBTQ people.”
One newspaper opinion piece argues there’s an easier solution: don’t buy your children a smartphone:

Or we could purchase any of the various software packages that block social media and obscene content from their devices. Or we could allow them to use social media, but limit their screen time. Or we could educate them about the issues that social media causes and simply trust them to make good choices. All of these options would have been denied to us if we lived in a state that passed a strict age verification law. Not only do age verification laws reduce parental freedom, but they also create myriad privacy risks. Requiring platforms to collect government IDs and face scans opens the door to potential exploitation by hackers and enemy governments. The very information intended to protect children could end up in the wrong hands, compromising the privacy and security of millions of users…

Ultimately, age verification laws are a misguided attempt to address the complex issue of underage social media use. Instead of placing undue burdens on users and limiting parental liberty, lawmakers should look for alternative strategies that respect privacy rights while promoting online safety.
This week a trade association for the adult entertainment industry announced plans to petition America’s Supreme Court to intervene.

Source: Slashdot

This is one of the many problems caused by an America that is suddenly so very afraid of sex, death and politics.

Project Ghostbusters: Facebook Accused of Using Your Phone to Wiretap Snapchat, YouTube, Amazon through Onavo VPN

Court filings unsealed last week allege Meta created an internal effort to spy on Snapchat in a secret initiative called “Project Ghostbusters.” Meta did so through Onavo, a Virtual Private Network (VPN) service the company offered between 2016 and 2019 that, ultimately, wasn’t private at all.

“Whenever someone asks a question about Snapchat, the answer is usually that because their traffic is encrypted we have no analytics about them,” said Mark Zuckerberg in an email to three Facebook executives in 2016, unsealed in Meta’s antitrust case on Saturday. “It seems important to figure out a new way to get reliable analytics about them… You should figure out how to do this.”

Thus, Project Ghostbusters was born. It’s Meta’s in-house wiretapping tool to spy on data analytics from Snapchat starting in 2016, later used on YouTube and Amazon. This involved creating “kits” that can be installed on iOS and Android devices, to intercept traffic for certain apps, according to the filings. This was described as a “man-in-the-middle” approach to get data on Facebook’s rivals, but users of Onavo were the “men in the middle.”

[…]

A team of senior executives and roughly 41 lawyers worked on Project Ghostbusters, according to court filings. The group was heavily concerned with whether to continue the program in the face of press scrutiny. Facebook ultimately shut down Onavo in 2019 after Apple booted the VPN from its app store.

Prosecutors also allege that Facebook violated the United States Wiretap Act, which prohibits the intentional interception of another person’s electronic communications.

[…]

Prosecutors allege Project Ghostbusters harmed competition in the ad industry, adding weight to their central argument that Meta is a monopoly in social media.

Source: Project Ghostbusters: Facebook Accused of Using Your Phone to Wiretap Snapchat

Who would have thought that a Facebook VPN was worthless? Oh, I have been reporting on this since 2018.

General Motors Quits Sharing Driving Behavior With Data Brokers – Now sells it directly to insurance companies?

General Motors said Friday that it had stopped sharing details about how people drove its cars with two data brokers that created risk profiles for the insurance industry.

The decision followed a New York Times report this month that G.M. had, for years, been sharing data about drivers’ mileage, braking, acceleration and speed with the insurance industry. The drivers were enrolled — some unknowingly, they said — in OnStar Smart Driver, a feature in G.M.’s internet-connected cars that collected data about how the car had been driven and promised feedback and digital badges for good driving.

Some drivers said their insurance rates had increased as a result of the captured data, which G.M. shared with two brokers, LexisNexis Risk Solutions and Verisk. The firms then sold the data to insurance companies.

Since Wednesday, “OnStar Smart Driver customer data is no longer being shared with LexisNexis or Verisk,” a G.M. spokeswoman, Malorie Lucich, said in an emailed statement. “Customer trust is a priority for us, and we are actively evaluating our privacy processes and policies.”

Romeo Chicco, a Florida man whose insurance rates nearly doubled after his Cadillac collected his driving data, filed a complaint seeking class-action status against G.M., OnStar and LexisNexis this month.

An internal document, reviewed by The Times, showed that as of 2022, more than eight million vehicles were included in Smart Driver. An employee familiar with the program said the company’s annual revenue from Smart Driver was in the low millions of dollars.

Source: General Motors Quits Sharing Driving Behavior With Data Brokers – The New York Times

No mention of who it is now selling the data to.

VPN Demand Surges 234.8% After Adult Site Restriction on Texas-Based Users

VPN demand in Texas skyrocketed by 234.8% on March 15, 2024, after state authorities enacted a law requiring adult sites to verify users’ ages before granting them access to the websites’ content.

Texas’ age verification law was passed in June 2023 and was set to take effect in September of the same year. However, a day before its implementation, a US district judge temporarily blocked enforcement after the Free Speech Coalition (FSC) filed a lawsuit arguing the policy was unconstitutional under the First Amendment.

On March 14, 2024, the US Court of Appeals for the 5th Circuit decreed that Texas could proceed with the law’s enactment.

As a sign of protest, Pornhub, the most visited adult site in the US, blocked IP addresses from Texas — the eighth state to suffer such a ban after their respective governments enforced similar restrictions on adult sites.

[…]

Following the law’s enactment, users in Texas seem to be scrambling for means to access the affected adult sites. vpnMentor’s research team analyzed user demand data and found a 234.8% increase in VPN demand in the state.


Past VPN Demand Growth from Adult Site Restrictions

Pornhub has previously blocked IP addresses from Louisiana, Mississippi, Arkansas, Utah, Virginia, North Carolina, and Montana — all of which have enforced age-verification laws that the adult site deemed unjust.

In May 2023, Pornhub’s banning of Utah-based users caused a 967% spike in VPN demand in the state. That same year, the passing of adult-site-related age restriction laws in Louisiana and Mississippi led to a 200% and 72% surge in VPN interest, respectively.

Source: VPN Demand Surges Post Adult Site Restriction on Texas-Based Users

Pornhub disables website in Texas after AG sues for not verifying users’ ages

Pornhub has disabled its site in Texas to object to a state law that requires the company to verify the age of users to prevent minors from accessing the site.

Texas residents who visit the site are met with a message from the company that criticizes the state’s elected officials who are requiring them to track the age of users.

The company said the newly passed law impinges on “the rights of adults to access protected speech” and fails to pass strict scrutiny by “employing the least effective and yet also most restrictive means of accomplishing Texas’s stated purpose of allegedly protecting minors.”

Pornhub said safety and compliance are “at the forefront” of the company’s mission, but having users provide identification every time they want to access the site is “not an effective solution for protecting users online.” The adult content website argues the restrictions instead will put minors and users’ privacy at risk.

[…]

The announcement from Pornhub follows the news that Texas Attorney General Ken Paxton (R) was suing Aylo, the pornography giant that owns Pornhub, for not following the newly enacted age verification law.

Paxton’s lawsuit seeks to have Aylo pay up to $1,600,000 in penalties covering mid-September of last year through the date the lawsuit was filed, plus an additional $10,000 for each day since filing.

[…]

Paxton released a statement on March 8 calling the ruling an “important victory.” The court ruled that the age verification requirement does not violate the First Amendment, Paxton said, claiming a win in the fight against Pornhub and other pornography companies.

The state Legislature passed the age verification law last year, requiring companies that distribute sexual material that could be harmful to minors to confirm that users of the platform are older than 18. The law asks users to provide government-issued identification or public or private data to verify they are of age to access the site.


Source: Pornhub disables website in Texas after AG sues for not verifying users’ ages | The Hill

Age verification is not only easily bypassed, but also extremely sensitive due to the nature of the documents you need to upload to the verification agency. Big centralised databases get hacked all the time and this one would be a massive target, also leaving people in it potentially open to blackmail, as they would be linked to a porn site – which for some reason Americans find problematic.

Automakers Are Sharing Consumers’ Driving Behavior With Insurance Companies


Kenn Dahl says he has always been a careful driver. The owner of a software company near Seattle, he drives a leased Chevrolet Bolt. He’s never been responsible for an accident. So Mr. Dahl, 65, was surprised in 2022 when the cost of his car insurance jumped by 21 percent. Quotes from other insurance companies were also high. One insurance agent told him his LexisNexis report was a factor. LexisNexis is a New York-based global data broker with a “Risk Solutions” division that caters to the auto insurance industry and has traditionally kept tabs on car accidents and tickets.

Upon Mr. Dahl’s request, LexisNexis sent him a 258-page “consumer disclosure report,” which it must provide per the Fair Credit Reporting Act. What it contained stunned him: more than 130 pages detailing each time he or his wife had driven the Bolt over the previous six months. It included the dates of 640 trips, their start and end times, the distance driven and an accounting of any speeding, hard braking or sharp accelerations. The only thing it didn’t have was where they had driven the car. On a Thursday morning in June, for example, the car had been driven 7.33 miles in 18 minutes; there had been two rapid accelerations and two incidents of hard braking.
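
For concreteness, the per-trip records described above amount to something like the following hypothetical reconstruction. The field names and the events-per-100-miles flag are invented for illustration; LexisNexis’s actual schema and scoring are not public.

```python
# Hypothetical shape of the per-trip records in the report (dates, duration,
# distance, hard-braking and rapid-acceleration counts). The scoring rule
# below is invented, not LexisNexis's actual risk model.
from dataclasses import dataclass

@dataclass
class Trip:
    date: str
    minutes: int
    miles: float
    hard_brakes: int
    rapid_accels: int

trip = Trip(date="2022-06-16", minutes=18, miles=7.33,
            hard_brakes=2, rapid_accels=2)

# 640 of these over six months is plenty to build a risk score; a naive
# insurer-style signal might be driving events per 100 miles:
events_per_100mi = 100 * (trip.hard_brakes + trip.rapid_accels) / trip.miles
print(f"{events_per_100mi:.1f} events per 100 miles")
```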

According to the report, the trip details had been provided by General Motors — the manufacturer of the Chevy Bolt. LexisNexis analyzed that driving data to create a risk score “for insurers to use as one factor of many to create more personalized insurance coverage,” according to a LexisNexis spokesman, Dean Carney. Eight insurance companies had requested information about Mr. Dahl from LexisNexis over the previous month. “It felt like a betrayal,” Mr. Dahl said. “They’re taking information that I didn’t realize was going to be shared and screwing with our insurance.”

In recent years, insurance companies have offered incentives to people who install dongles in their cars or download smartphone apps that monitor their driving, including how much they drive, how fast they take corners, how hard they hit the brakes and whether they speed. But “drivers are historically reluctant to participate in these programs,” as Ford Motor put it in a patent application (PDF) that describes what is happening instead: Car companies are collecting information directly from internet-connected vehicles for use by the insurance industry.

Sometimes this is happening with a driver’s awareness and consent. Car companies have established relationships with insurance companies, so that if drivers want to sign up for what’s called usage-based insurance — where rates are set based on monitoring of their driving habits — it’s easy to collect that data wirelessly from their cars. But in other instances, something much sneakier has happened. Modern cars are internet-enabled, allowing access to services like navigation, roadside assistance and car apps that drivers can connect to their vehicles to locate them or unlock them remotely. In recent years, automakers, including G.M., Honda, Kia and Hyundai, have started offering optional features in their connected-car apps that rate people’s driving. Some drivers may not realize that, if they turn on these features, the car companies then give information about how they drive to data brokers like LexisNexis. Automakers and data brokers that have partnered to collect detailed driving data from millions of Americans say they have drivers’ permission to do so. But the existence of these partnerships is nearly invisible to drivers, whose consent is obtained in fine print and murky privacy policies that few read. Especially troubling is that some drivers with vehicles made by G.M. say they were tracked even when they did not turn on the feature — called OnStar Smart Driver — and that their insurance rates went up as a result.

European Commission broke data protection law with Microsoft Office 365 – duh

The European Commission has been reprimanded for infringing data protection regulations when using Microsoft 365.

The rebuke came from the European Data Protection Supervisor (EDPS) and is the culmination of an investigation that kicked off in May 2021, following the Schrems II judgement.

According to the EDPS, the EC infringed several data protection regulations, including rules around transferring personal data outside the EU / European Economic Area (EEA).

According to the organization, “In particular, the Commission has failed to provide appropriate safeguards to ensure that personal data transferred outside the EU/EEA are afforded an essentially equivalent level of protection as guaranteed in the EU/EEA.

“Furthermore, in its contract with Microsoft, the Commission did not sufficiently specify what types of personal data are to be collected and for which explicit and specified purposes when using Microsoft 365.”

While the concerns are more about EU institutions and transparency, they should also serve as notice to any company doing business in the EU / EEA to take a very close look at how it has configured Microsoft 365 regarding the EU Data Protection Regulations.

[…]

Source: European Commission broke data protection law with Microsoft • The Register

Who knew? An American company running an American cloud product on American servers, and the EU was putting its data on it. Who would have thought that might end up in America?!

Biden executive order aims to stop a few countries from buying Americans’ personal data – a watered-down EU GDPR

[…]

President Joe Biden will issue an executive order that aims to limit the mass-sale of Americans’ personal data to “countries of concern,” including Russia and China. The order specifically targets the bulk sale of geolocation, genomic, financial, biometric, health and other personally identifying information.

During a briefing with reporters, a senior administration official said that the sale of such data to these countries poses a national security risk. “Our current policies and laws leave open access to vast amounts of American sensitive personal data,” the official said. “Buying data through data brokers is currently legal in the United States, and that reflects a gap in our national security toolkit that we are working to fill with this program.”

Researchers and privacy advocates have long warned about the national security risks posed by the largely unregulated multibillion-dollar data broker industry. Last fall, researchers at Duke University reported that they were able to easily buy troves of personal and health data about US military personnel while posing as foreign agents.

Biden’s executive order attempts to address such scenarios. It bars data brokers and other companies from selling large troves of Americans’ personal information to countries or entities in Russia, China, Iran, North Korea, Cuba and Venezuela either directly or indirectly.

[…]

As the White House points out, there are currently few regulations for the multibillion-dollar data broker industry. The order will do nothing to slow the bulk sale of Americans’ data to countries or companies not deemed to be a security risk. “President Biden continues to urge Congress to do its part and pass comprehensive bipartisan privacy legislation, especially to protect the safety of our children,” a White House statement says.

Source: Biden executive order aims to stop Russia and China from buying Americans’ personal data

Too little, not enough, way way way too late.

Investigators seek push notification metadata in 130 cases – this is scarier than you think

More than 130 petitions seeking access to push notification metadata have been filed in US courts, according to a Washington Post investigation – a finding that underscores the lack of privacy protection available to users of mobile devices.

The poor state of mobile device privacy has provided US state and federal investigators with valuable information in criminal investigations involving suspected terrorism, child sexual abuse, drugs, and fraud – even when suspects have tried to hide their communications using encrypted messaging.

But it also means that prosecutors in states that outlaw abortion could demand such information to geolocate women at reproductive healthcare facilities. Foreign governments may also demand push notification metadata from Apple, Google, third-party push services, or app developers for their own criminal investigations or political persecutions. Concern has already surfaced that they may have done so for several years.

In December 2023, US senator Ron Wyden (D-OR) sent a letter to the Justice Department about a tip received by his office in 2022 indicating that foreign government agencies were demanding smartphone push notification records from Google and Apple.

[…]

Apple and Google operate push notification services that relay communication from third-party servers to specific applications on iOS and Android phones. App developers can encrypt these messages when they’re stored (in transit they’re protected by TLS) but the associated metadata – the app receiving the notification, the time stamp, and network details – is not encrypted.
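
In other words, even a privacy-conscious app that encrypts its payloads leaves a routing envelope in the clear. Here is a sketch of that split, with illustrative field names; the real APNs and FCM wire formats differ.

```python
# Illustration of the split described above: the developer can encrypt the
# notification *payload*, but the push relay (Apple/Google) still handles
# the routing metadata in the clear. All field names are illustrative.
from datetime import datetime, timezone

push_message = {
    # Visible to the push service, and therefore subpoenable:
    "metadata": {
        "target_app": "org.example.secure-messenger",  # hypothetical app id
        "device_token": "d3adb33f...",                 # stable per device+app
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "sender_ip": "203.0.113.7",
    },
    # Opaque to the push service if the developer encrypts it:
    "payload": b"\x8f\x1c...",  # ciphertext; only the app can decrypt it
}

# Even with an unreadable payload, the metadata alone answers "who received
# a message from this app, when, and from what network" -- which is exactly
# what the court orders described here ask for.
print(push_message["metadata"])
```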

[…]

push notification metadata is extremely valuable to marketing organizations, to app distributors like Apple and Google, and also to government organizations and law enforcement agencies.

“In 2022, one of the largest push notification companies in the world, Pushwoosh, was found to secretly be a Russian company that deceived both the CDC and US Army into installing their technology into specific government apps,” said Edwards.

“These types of scandals are the tip of the iceberg for how push notifications can be abused, and why countless serious organizations focus on them as a source of intelligence,” he explained.

“If you sign up for push notifications, and travel around to unique locations, as the messages hit your device, specific details about your device, IP address, and location are shared with app stores like Apple and Google,” Edwards added. “And the push notification companies who support these services typically have additional details about users, including email addresses and user IDs.”

Edwards continued that other identifiers may further deprive people of privacy, noting that advertising identifiers can be connected to push notification identifiers. He pointed to Pushwoosh as an example of a firm that built its push notification ID using the iOS advertising ID.

“The simplest way to think about push notifications,” he said, is “they are just like little pre-scheduled messages from marketing vendors, sent via mobile apps. The data that is required to ‘turn on any push notification service’ is quite invasive and can unexpectedly reveal/track your location/store your movement with a third-party marketing company or one of the app stores, which is merely a court order or subpoena away from potentially exposing those personal details.”

Source: Investigators seek push notification metadata in 130 cases • The Register

Also see: Governments, Apple, Google spying on users through push notifications – they all go through Apple and Google servers (unencrypted?)!

Scammers Are Now Scanning Faces To Defeat Age Verification Biometric Security Measures

For quite some time now we’ve been pointing out the many harms of age verification technologies, and how they’re a disaster for privacy. In particular, we’ve noted that if you have someone collecting biometric information on people, that data itself becomes a massive risk since it will be targeted.

And, remember, a year and a half ago, the Age Verification Providers Association posted a comment right here on Techdirt saying not to worry about the privacy risks, as all they wanted to do was scan everyone’s face to visit a website (perhaps making you turn to the left or right to prove “liveness”).

Anyway, now a report has come out that some Chinese hackers have been tricking people into having their faces scanned, so that the hackers can then use the resulting scan to access accounts.

Attesting to this, cybersecurity company Group-IB has discovered the first banking trojan that steals people’s faces. Unsuspecting users are tricked into giving up personal IDs and phone numbers and are prompted to perform face scans. These images are then swapped out with AI-generated deepfakes that can easily bypass security checkpoints.

The method — developed by a Chinese-based hacking family — is believed to have been used in Vietnam earlier this month, when attackers lured a victim into a malicious app, tricked them into face scanning, then withdrew the equivalent of $40,000 from their bank account. 

Cool cool, nothing could possibly go wrong in now requiring more and more people to normalize the idea of scanning your face to access a website. Nothing at all.

And no, this isn’t about age verification, but still, the normalization of facial scanning is a problem, as it’s such an obvious target for scammers and hackers.

Source: As Predicted: Scammers Are Now Scanning Faces To Defeat Biometric Security Measures | Techdirt

Meta will start collecting much more “anonymized” data about Quest headset usage

Meta will soon begin “collecting anonymized data” from users of its Quest headsets, a move that could see the company aggregating information about hand, body, and eye tracking; camera information; “information about your physical environment”; and information about “the virtual reality events you attend.”

In an email sent to Quest users Monday, Meta notes that it currently collects “the data required for your Meta Quest to work properly.” Starting with the next software update, though, the company will begin collecting and aggregating “anonymized data about… device usage” from Quest users. That anonymized data will be used “for things like building better experiences and improving Meta Quest products for everyone,” the company writes.

A linked help page on data sharing clarifies that Meta can collect anonymized versions of any of the usage data included in the “Supplemental Meta Platforms Technologies Privacy Policy,” which was last updated in October. That document lists a host of personal information that Meta can collect from your headset, including:

  • “Your audio data, when your microphone preferences are enabled, to animate your avatar’s lip and face movement”
  • “Certain data” about hand, body, and eye tracking, “such as tracking quality and the amount of time it takes to detect your hands and body”
  • Fitness-related information such as the “number of calories you burned, how long you’ve been physically active, [and] your fitness goals and achievements”
  • “Information about your physical environment and its dimensions” such as “the size of walls, surfaces, and objects in your room and the distances between them and your headset”
  • “Voice interactions” used when making audio commands or dictations, including audio recordings and transcripts that might include “any background sound that happens when you use those services” (these recordings and transcriptions are deleted “immediately” in most cases, Meta writes)
  • Information about “your activity in virtual reality,” including “the virtual reality events you attend”

The anonymized data collected is used in part to “analyz[e] device performance and reliability” to “improve the hardware and software that powers your experiences with Meta VR Products.”

What does Meta know about what you’re doing in VR? (Image: Meta)

Meta’s help page also lists a small subset of “additional data” that headset users can opt out of sharing with Meta. But there’s no indication that Quest users can opt out of the new anonymized data collection policies entirely.

These policies only seem to apply to users who make use of a Meta account to access their Quest headsets, and those users are also subject to Meta’s wider data-collection policies. Those who use a legacy Oculus account are subject to a separate privacy policy that describes a similar but more limited set of data-collection practices.

Not a new concern

Meta is clear that the data it collects “is anonymized so it does not identify you.” But here at Ars, we’ve long covered situations where data that was supposed to be “anonymous” was linked back to personally identifiable information about the people who generated it. The FTC is currently pursuing a case against Kochava, a data broker that links de-anonymized geolocation data to a “staggering amount of sensitive and identifying information,” according to the regulator.
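
The standard way “anonymous” data comes undone is a linkage attack: join the anonymized records to an identified dataset on quasi-identifiers such as ZIP code and birth year. A minimal sketch with invented records:

```python
# Minimal linkage-attack sketch of how "anonymous" data gets re-identified:
# join an anonymized dataset to an identified one on quasi-identifiers.
# All records below are invented.

anonymized_usage = [  # what a vendor might release: no names, "just" habits
    {"zip": "02139", "birth_year": 1984, "sessions": ["meditation_vr", "dating_vr"]},
    {"zip": "98052", "birth_year": 1990, "sessions": ["fitness_vr"]},
]

public_records = [  # voter rolls, data-broker files, social profiles...
    {"name": "Alice Example", "zip": "02139", "birth_year": 1984},
    {"name": "Bob Example",   "zip": "98052", "birth_year": 1990},
]

for usage in anonymized_usage:
    matches = [p for p in public_records
               if (p["zip"], p["birth_year"]) == (usage["zip"], usage["birth_year"])]
    if len(matches) == 1:  # unique quasi-identifier combination = re-identified
        print(matches[0]["name"], "->", usage["sessions"])
```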

Concerns about VR headset data collection date back to when Meta’s virtual reality division was still named Oculus. Shortly after the launch of the Oculus Rift in 2016, Senator Al Franken (D-Minn.) sent an open letter to the company seeking information on “the extent to which Oculus may be collecting Americans’ personal information, including sensitive location data, and sharing that information with third parties.”

In 2020, the company then called Facebook faced controversy for requiring Oculus users to migrate to a Facebook account to continue using their headsets. That led to a temporary pause of Oculus headset sales in Germany before Meta finally offered the option to decouple its VR accounts from its social media accounts in 2022.

Source: Meta will start collecting “anonymized” data about Quest headset usage | Ars Technica

Canadian college M&M vending machines secretly scanning faces – revealed by error message

[…]

The scandal started when a student using the alias SquidKid47 posted an image on Reddit showing a campus vending machine error message, “Invenda.Vending.FacialRecognitionApp.exe,” displayed after the machine failed to launch a facial recognition application that nobody expected to be part of the process of using a vending machine.

[Image: Reddit post shows error message displayed on a University of Waterloo vending machine (cropped and lightly edited for clarity).]

“Hey, so why do the stupid M&M machines have facial recognition?” SquidKid47 pondered.

The Reddit post sparked an investigation from a fourth-year student named River Stanley, who was writing for a university publication called MathNEWS.

Stanley sounded the alarm after consulting Invenda sales brochures that promised “the machines are capable of sending estimated ages and genders” of every person who used them – without ever requesting consent.
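
To make concrete what that brochure language implies – purely as a hypothetical sketch, not Invenda’s actual software – on-device face analysis of this kind can run silently during an ordinary purchase. The detector below is OpenCV’s stock Haar cascade; the demographic model and the telemetry payload are invented placeholders.

```python
# Hypothetical sketch of "estimated ages and genders" telemetry from a
# camera-equipped machine. Requires opencv-python; not Invenda's code.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

camera = cv2.VideoCapture(0)   # stands in for the machine's built-in camera
ok, frame = camera.read()
camera.release()

if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        face_crop = frame[y:y + h, x:x + w]
        # A real deployment would feed face_crop to an age/gender model here.
        # The privacy problem: none of this prompts the buyer for consent.
        payload = {"est_age": "25-34", "est_gender": "unknown"}
        print("would transmit:", payload)
```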

This frustrated Stanley, who discovered that Canada’s privacy commissioner had years ago investigated a shopping mall operator called Cadillac Fairview after discovering some of the malls’ informational kiosks were secretly “using facial recognition software on unsuspecting patrons.”

Only because of that official investigation did Canadians learn that “over 5 million nonconsenting Canadians” were scanned into Cadillac Fairview’s database, Stanley reported. While Cadillac Fairview was ultimately forced to delete the entire database, Stanley wrote that the consequences for Invenda clients like Mars, which collected similarly sensitive facial recognition data without consent, remain unclear.

Stanley’s report ended with a call for students to demand that the university “bar facial recognition vending machines from campus.”

A University of Waterloo spokesperson, Rebecca Elming, eventually responded, confirming to CTV News that the school had asked for the vending machine software to be disabled until the machines could be removed.

[…]

Source: Vending machine error reveals secret face image database of college students | Ars Technica

European human rights court says backdooring encrypted comms is against human rights

[Image: an eye staring at you from your mobile phone]

The European Court of Human Rights (ECHR) has ruled that laws requiring crippled encryption and extensive data retention violate the European Convention on Human Rights – a decision that may derail European data surveillance legislation known as Chat Control.

The Court issued a decision on Tuesday stating that “the contested legislation providing for the retention of all internet communications of all users, the security services’ direct access to the data stored without adequate safeguards against abuse and the requirement to decrypt encrypted communications, as applied to end-to-end encrypted communications, cannot be regarded as necessary in a democratic society.”

The “contested legislation” was the subject of a legal challenge that began in 2017, after Russia’s Federal Security Service (FSB) demanded that messaging service Telegram provide technical information to assist the decryption of a user’s communications. The plaintiff, Anton Valeryevich Podchasov, challenged the order in Russia, but his claim was dismissed.

In 2019, Podchasov brought the matter to the ECHR. Russia joined the Council of Europe – an international human rights organization – in 1996 and was a member until it withdrew in March 2022 following its illegal invasion of Ukraine. Because the 2019 case predates Russia’s withdrawal, the ECHR continued to consider the matter.

The Court concluded that the Russian law requiring Telegram “to decrypt end-to-end encrypted communications risks amounting to a requirement that providers of such services weaken the encryption mechanism for all users.” As such, the Court considered that requirement disproportionate to legitimate law enforcement goals.

While the ECHR decision is unlikely to have any effect within Russia, it matters to countries in Europe that are contemplating similar decryption laws – such as Chat Control and the UK government’s Online Safety Act.

Chat Control is shorthand for European data surveillance legislation that would require internet service providers to scan digital communications for illegal content – specifically child sexual abuse material and potentially terrorism-related information. Doing so would necessarily entail weakening the encryption that keeps communication private.
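
To see why, consider a minimal end-to-end encryption sketch (using the generic public-key “box” construction from the PyNaCl library – this is not any specific messenger’s protocol): the relay server never holds a private key, so there is nothing server-side to scan. Access therefore requires either a provider-held key, which is a backdoor weakening the scheme for everyone, or inspection on the device before encryption, i.e. client-side scanning.

```python
# Minimal end-to-end encryption sketch using PyNaCl (pip install pynacl).
from nacl.public import PrivateKey, Box

alice_key = PrivateKey.generate()   # never leaves Alice's device
bob_key = PrivateKey.generate()     # never leaves Bob's device

# Alice encrypts to Bob using her private key and Bob's public key.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"meet at noon")

# The server relays ciphertext but holds neither private key, so it
# cannot scan the plaintext. Only Bob's device can decrypt:
plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)
assert plaintext == b"meet at noon"

# Any "lawful access" here means either a third key the provider holds
# (a backdoor for all users) or scanning the message on the device
# before encrypt() runs (client-side scanning).
```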

Efforts to develop workable rules have been underway for several years and continue to this day, despite widespread condemnation from academics, privacy-oriented orgs, and civil society groups.

Patrick Breyer, a member of the European parliament for the Pirate Party, hailed the ruling for demonstrating that Chat Control is incompatible with EU law.

“With this outstanding landmark judgment, the ‘client-side scanning’ surveillance on all smartphones proposed by the EU Commission in its chat control bill is clearly illegal,” said Breyer.

“It would destroy the protection of everyone instead of investigating suspects. EU governments will now have no choice but to remove the destruction of secure encryption from their position on this proposal – as well as the indiscriminate surveillance of private communications of the entire population!”

Source: European human rights court says no to weakened encryption • The Register

23andMe Thinks ‘Mining’ Your DNA Data Is Its Last Hope

23andMe is in a death spiral. Almost everyone who wants a DNA test already bought one, a nightmare data breach ruined the company’s reputation, and 23andMe’s stock is so close to worthless it might get kicked off the Nasdaq. CEO Anne Wojcicki is on a crisis tour, promising investors the company isn’t going out of business because she has a new plan: 23andMe is going to double down on mining your DNA data and selling it to pharmaceutical companies.

“We now have the ability to mine the dataset for ourselves, as well as to partner with other groups,” Wojcicki said in an interview with Wired. “It’s a real resource that we could apply to a number of different organizations for their own drug discovery.”

That’s been part of the plan since day one, but now it looks like it’s going to happen on a much larger scale. 23andMe has always coerced its customers into giving the company consent to share their DNA for “research,” a friendlier way of saying “giving it to pharmaceutical companies.” The company enjoyed an exclusive partnership with pharmaceutical giant GlaxoSmithKline, but apparently the drug maker already sucked the value out of your DNA, and that deal is running out. Now, 23andMe is looking for new companies that want to take a look at your genes.

[…]

the most exciting opportunity for “improvements” is that 23andMe and the pharmaceutical industry get to develop new drugs. There’s a tinge of irony here. Any discoveries that 23andMe makes come from studying DNA samples that you paid the company to collect.

[…]

The problem with 23andMe’s consumer-facing business is the company sells a product you only need once in a lifetime. Worse, the appeal of a DNA test for most people is the novelty of ancestry results, but if your brother already paid for a test, you already know the answers.

[…]

it’s spent years trying to brand itself as a healthcare service, and not just a $79 permission slip to tell people you’re Irish. In fact, the company thinks you should buy yourself a recurring annual subscription to something called 23andMe+ Total Health. It only costs $1,188 a year.

[…]

The secret is you just can’t learn a ton about your health from genetic screenings, aside from tests for specific diseases that doctors rarely order unless you have a family history.

[…]

What do you get with these subscriptions? It’s kind of vague. Depending on the package, they include a service that “helps you understand how genetics and lifestyle can impact your likelihood of developing certain conditions,” testing for rare genetic conditions, enhanced ancestry features, and more. Essentially, they’ll run genetic tests that you may not need. Then, they may or may not recommend that you talk to a doctor, because they can’t offer you actual medical care.

You could also skip the middleman and start with a normal conversation with your doctor, who will order genetic tests if you need them and bill your insurance company.

[…]

If 23andMe survives, the first step is going to be deals that give more companies access to your genetics than ever before. But if 23andMe goes out of business, it’ll get purchased or sold off for parts, which means other companies will get a look at your data anyway.

Source: 23andMe Admits ‘Mining’ Your DNA Data Is Its Last Hope

What this piece misses is the danger of whom the data is sold to – or what happens when it is leaked (which it already has been). Insurance companies may refuse to insure you. Your DNA may be faked. Your unique and unchangeable identity – and those of your family – have been stolen.