23andMe is on the brink. What happens to all that genetic DNA data?

[…] The one-and-done nature of Wiles’ experience is indicative of a core business problem with the once high-flying biotech company that is now teetering on the brink of collapse. Wiles and many of 23andMe’s 15 million other customers never returned. They paid once for a saliva kit, then moved on.

Shares of 23andMe are now worth pennies. The company’s valuation has plummeted 99% from its $6 billion peak shortly after the company went public in 2021.

As 23andMe struggles for survival, customers like Wiles have one pressing question: What is the company’s plan for all the data it has collected since it was founded in 2006?

[…]

Andy Kill, a spokesperson for 23andMe, would not comment on what the company might do with its trove of genetic data beyond general pronouncements about its commitment to privacy.

[…]

When signing up for the service, about 80% of 23andMe’s customers have opted in to having their genetic data analyzed for medical research.

[…]

The company has an agreement with pharmaceutical giant GlaxoSmithKline, or GSK, that allows the drugmaker to tap the tech company’s customer data to develop new treatments for disease.

Anya Prince, a law professor at the University of Iowa’s College of Law who focuses on genetic privacy, said those worried about their sensitive DNA information may not realize just how few federal protections exist.

For instance, the Health Insurance Portability and Accountability Act, also known as HIPAA, does not apply to 23andMe since it is a company outside of the health care realm.

[…]

According to the company, all of its genetic data is anonymized, meaning there is no way for GSK, or any other third party, to connect a sample to a real person. That, however, could make it nearly impossible for a customer to revoke their decision to allow researchers to access their DNA data.

“I couldn’t go to GSK and say, ‘Hey, my sample was given to you — I want that taken out’ — if it was anonymized, right? Because they’re not going to re-identify it just to pull it out of the database,” Prince said.
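Prince’s point is easy to demonstrate with a toy sketch (the field names and records here are invented for illustration, not 23andMe’s actual schema): once direct identifiers are stripped and no linking key is retained, a deletion request has nothing to match against, so pulling one person’s record back out would itself require re-identification.

```python
import uuid

# Hypothetical donor records before sharing (illustrative only)
raw = [
    {"name": "K. Wiles", "genotype": "rs53576:AG"},
    {"name": "A. N. Other", "genotype": "rs53576:GG"},
]

def anonymize(records):
    """Strip direct identifiers and assign random, unlinkable IDs.
    Crucially, no mapping from name -> new ID is kept anywhere."""
    return [{"id": uuid.uuid4().hex, "genotype": r["genotype"]} for r in records]

shared = anonymize(raw)

def delete_my_record(db, name):
    """A deletion request can only match on fields that still exist."""
    return [row for row in db if row.get("name") != name]

after = delete_my_record(shared, "K. Wiles")
# Nothing was removed: the name field is gone, so nothing can match.
assert len(after) == len(shared)
```

The asymmetry is the whole problem: anonymization protects the database from outsiders, but it equally protects it from the person the data came from.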

[…]

The patchwork of state laws governing DNA data makes the genetic data of millions potentially vulnerable to being sold off, or even mined by law enforcement.

“Having to rely on a private company’s terms of service or bottom line to protect that kind of information is troubling — particularly given the level of interest we’ve seen from government actors in accessing such information during criminal investigations,” Eidelman said.

She points to how investigators used a genealogy website to identify the man known as the Golden State Killer, and how police homed in on an Idaho murder suspect by turning to similar databases of genetic profiles.

“This has happened without people’s knowledge, much less their express consent,” Eidelman said.

[…]

Last year, the company was hit with a major data breach: attackers used recycled passwords to break into about 14,000 accounts, exposing data belonging to roughly 6.9 million customers in all.

[…]

Some analysts predict that 23andMe could go out of business by next year, barring a bankruptcy proceeding that could potentially restructure the company.

[…]

Source: What happens to all of 23andMe’s genetic DNA data? : NPR

For more fun reading about this clusterfuck of a company and why giving away DNA data is a spectacularly bad idea:

License Plate Readers Are Creating a US-Wide Database of Cars – and political affiliation, planned parenthood and more

At 8:22 am on December 4 last year, a car traveling down a small residential road in Alabama used its license-plate-reading cameras to take photos of vehicles it passed. One image, which does not contain a vehicle or a license plate, shows a bright red “Trump” campaign sign placed in front of someone’s garage. In the background is a banner referencing Israel, a holly wreath, and a festive inflatable snowman.

Another image taken on a different day by a different vehicle shows a “Steelworkers for Harris-Walz” sign stuck in the lawn in front of someone’s home. A construction worker, with his face unblurred, is pictured near another Harris sign. Other photos show Trump and Biden (including “Fuck Biden”) bumper stickers on the back of trucks and cars across America.

[…]

These images were generated by AI-powered cameras mounted on cars and trucks, initially designed to capture license plates, but which are now photographing political lawn signs outside private homes, individuals wearing T-shirts with text, and vehicles displaying pro-abortion bumper stickers—all while recording the precise locations of these observations.

[…]

The detailed photographs all surfaced in search results produced by the systems of DRN Data, a license-plate-recognition (LPR) company owned by Motorola Solutions. The LPR system can be used by private investigators, repossession agents, and insurance companies; a related Motorola business, called Vigilant, gives cops access to the same LPR data.

[…]

Those with access to the LPR system can search for common phrases or names, such as those of politicians, and be served with photographs where the search term is present, even if it is not displayed on license plates.

[…]

“I searched for the word ‘believe,’ and that is all lawn signs. There’s things just painted on planters on the side of the road, and then someone wearing a sweatshirt that says ‘Believe,’” Weist says. “I did a search for the word ‘lost,’ and it found the flyers that people put up for lost dogs and cats.”
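Mechanically, what Weist describes amounts to full-text search over OCR output rather than plate-only matching: every capture is run through text recognition, and everything recognized gets indexed. A minimal sketch (records and field names invented for illustration; DRN’s actual schema is not public):

```python
# Hypothetical sighting records. In a real LPR system, each camera capture
# is run through OCR, and all recognized text -- not just the plate --
# ends up searchable alongside time and location.
sightings = [
    {"plate": "ABC1234", "ocr_text": "ABC1234 believe lawn sign", "loc": (33.52, -86.80)},
    {"plate": "XYZ9876", "ocr_text": "XYZ9876 lost dog flyer reward", "loc": (40.71, -74.00)},
    {"plate": "LMN4455", "ocr_text": "LMN4455", "loc": (34.05, -118.24)},
]

def search(term):
    """Return every sighting whose OCR'd text contains the term,
    whether or not the term appears on a license plate."""
    t = term.lower()
    return [s for s in sightings if t in s["ocr_text"].lower()]

print([s["plate"] for s in search("believe")])  # matches a lawn sign, not a plate
print([s["plate"] for s in search("lost")])     # matches a lost-pet flyer
```

Once text search is decoupled from plates, the system is effectively a location-stamped index of whatever words appear near America’s roads.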

Beyond highlighting the far-reaching nature of LPR technology, which has collected billions of images of license plates, the research also shows how people’s personal political views and their homes can be recorded into vast databases that can be queried.

[…]

Over more than a decade, DRN has amassed more than 15 billion “vehicle sightings” across the United States, and its marketing materials claim it adds more than 250 million new sightings per month.

[…]

The system is partly fueled by DRN “affiliates” who install cameras in their vehicles, such as repossession trucks, and capture license plates as they drive around. Each vehicle can have up to four cameras attached to it, capturing images from all angles. These affiliates earn monthly bonuses and can also receive free cameras and search credits.

In 2022, Weist became a certified private investigator in New York State. In doing so, she unlocked the ability to access the vast array of surveillance software accessible to PIs. Weist could access DRN’s analytics system, DRNsights, as part of a package through investigations company IRBsearch. (After Weist published an op-ed detailing her work, IRBsearch conducted an audit of her account and discontinued it.)

[…]

While not linked to license plate data, one law enforcement official in Ohio recently said people should “write down” the addresses of people who display yard signs supporting Vice President Kamala Harris, the 2024 Democratic presidential nominee, exemplifying how a searchable database of citizens’ political affiliations could be abused.

[…]

In 2022, WIRED revealed that hundreds of US Immigration and Customs Enforcement employees and contractors were investigated for abusing similar databases, including LPR systems. The alleged misconduct in both reports ranged from stalking and harassment to sharing information with criminals.

[…]

 

Source: License Plate Readers Are Creating a US-Wide Database of More Than Just Cars | WIRED

Insecure Robot Vacuums From Chinese Company Deebot Collect Photos and Audio to Train Their AI

Ecovacs robot vacuums, which have been found to suffer from critical cybersecurity flaws, are collecting photos, videos and voice recordings — taken inside customers’ houses — to train the company’s AI models.

The Chinese home robotics company, which sells a range of popular Deebot models in Australia, said its users are “willingly participating” in a product improvement program.

When users opt into this program through the Ecovacs smartphone app, they are not told what data will be collected, only that it will “help us strengthen the improvement of product functions and attached quality”. Users are instructed to click “above” to read the specifics; however, there is no link available on that page.

Ecovacs’s privacy policy — available elsewhere in the app — allows for blanket collection of user data for research purposes, including:

– The 2D or 3D map of the user’s house generated by the device
– Voice recordings from the device’s microphone
– Photos or videos recorded by the device’s camera

“It also states that voice recordings, videos and photos that are deleted via the app may continue to be held and used by Ecovacs…”

Source: Insecure Robot Vacuums From Chinese Company Deebot Collect Photos and Audio to Train Their AI

Dutch oppose Hungary’s approach to EU child sexual abuse regulation – or total surveillance of every smart device

The Netherlands’ government and opposition are both against the latest version of the controversial EU regulation aimed at detecting online child sexual abuse material (CSAM), according to an official position and an open letter published on Tuesday (1 October).

The regulation, aimed at detecting online CSAM, has been criticised for potentially allowing the scanning of private messages on platforms such as WhatsApp or Gmail.

However, the latest compromise text, dated 9 September, limits detection to known material, among other changes. ‘Known’ material refers to content that has already been circulating and detected, in contrast to ‘new’ material that has not yet been identified.
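The ‘known’ vs ‘new’ distinction matters technically: known material is typically detected by matching content fingerprints against a database of already-identified files, whereas new material requires classifiers that inspect everything. A minimal sketch of the matching approach (placeholder bytes for illustration; real deployments use robust perceptual hashes such as PhotoDNA that survive re-encoding, not a plain cryptographic hash):

```python
import hashlib

def fingerprint(content: bytes) -> str:
    # A cryptographic hash stands in here for brevity; production systems
    # use perceptual hashes so that resized/re-encoded copies still match.
    return hashlib.sha256(content).hexdigest()

# Database of fingerprints of previously identified ("known") material.
known_hashes = {fingerprint(b"previously-identified-image-bytes")}

def scan(content: bytes) -> bool:
    """Flag only matches against the known-material list."""
    return fingerprint(content) in known_hashes

assert scan(b"previously-identified-image-bytes") is True
assert scan(b"never-seen-before-image-bytes") is False  # 'new' material passes unflagged
```

This is why the scope question is so contested: a known-material match is a narrow lookup, while detecting ‘new’ material or grooming means algorithmically analyzing the content of every message.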

The Hungarian presidency of the Council of the EU shared a partial general approach, dated 24 September and seen by Euractiv, that mirrors the 9 September text but reduces the reevaluation period from five years to three for grooming and new CSAM.

Limiting detection to known material would hinder authorities’ ability to surveil massive amounts of communications, suggesting the change is likely an attempt to address privacy concerns.

The Netherlands initially supported the proposal to limit detection to ‘known’ material but withdrew its support in early September, Euractiv reported.

On Tuesday (1 October), the Dutch government officially took a stance against the general approach, despite speculation last week suggesting the country might shift its position in favour of the regulation.

This is also despite the Dutch mostly maintaining that their primary concern lies with combating known CSAM – a focus that aligns with the scope of the latest proposal.

According to various statistics, the Netherlands hosts a significant amount of CSAM.

The Dutch had been considering supporting the proposal, or at least a “silent abstention” that might have weakened the blocking minority, signalling a shift since Friday (27 September), a source close to the matter told Euractiv.

While a change in the Netherlands’ stance could have affected the blocking minority in the EU Council, their current position now strengthens it.

If the draft law were to pass in the EU Council, the next stage would be interinstitutional negotiations, called trilogues, between the European Parliament, the Council of the EU, and the Commission to finalise the legislation.

Both the Dutch government and the opposition are against supporting the new partial general approach.

Opposition party GroenLinks-PvdA (Greens/EFA) published an open letter, also on Tuesday, backed by a coalition of national and EU-based private and non-profit organisations, urging the government to vote against the proposal.

According to the letter, the regulation will be discussed at the Justice and Home Affairs Council on 11 October, with positions coordinated among member states on 2 October.

Currently, an interim regulation allows companies to detect and report online CSAM voluntarily. Originally set to expire in 2024, this measure has been extended to 2026 to avoid a legislative gap, as the draft for a permanent law has yet to be agreed.

The Dutch Secret Service opposed the draft regulation because “introducing a scan application on every mobile phone” with infrastructure to manage the scans would be a complex and extensive system that would introduce risks to digital resilience, according to a decision note.

Source: Dutch oppose Hungary’s approach to EU child sexual abuse regulation – Euractiv

To find out more about how invasive the proposed scanning feature is, look through the articles here: https://www.linkielist.com/?s=csam

Ford wants to listen in on you in your car to serve you ads as much as possible


Someday soon, if Ford has its way, drivers and passengers may be bombarded with infotainment ads tailored to their personal and vehicle data.

This sure-to-please-everyone idea comes via a patent application [PDF] filed by Ford Global Technologies late last month that proposes displaying ads to drivers based on their destination, route, who’s in the car, and various other data points able to be collected by modern vehicles.

According to the patent application, infotainment advertising could be varied depending on the situation and user feedback. In one example, Ford supposes showing a visual ad to passengers every 10 minutes while on the highway, and if someone responds positively to audio ads, the system could ramp up the frequency, playing audio ads every five minutes.

Of course, simply playing more ads might frustrate people, which Ford seems to understand because the pending patent notes it would have to account for “a user’s natural inclination to seek minimal or no ads.”

Lest advertisers worry that user preference might actually win out, Ford said its proposed infotainment system would be designed to “intelligently schedule variable durations of ads, with playing time seeking to maximize company revenue while minimizing the impact on user experience.”

The system would also be able to listen to conversations so it could serve ads during lulls in chatter, ostensibly to be less intrusive while being anything but.
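The scheduling logic the application describes is simple enough to sketch (the function name, the back-off rule, and the interval bounds are assumptions on my part; the patent only gives the 10-minute/5-minute highway example):

```python
def next_ad_interval(minutes: int, feedback: str, floor: int = 5, ceil: int = 20) -> int:
    """Adjust the gap between ads based on user reaction, per the
    patent's highway example: positive responses ramp frequency up,
    negative ones back it off (bounded by assumed floor/ceiling)."""
    if feedback == "positive":
        return max(floor, minutes // 2)   # e.g. every 10 min -> every 5 min
    if feedback == "negative":
        return min(ceil, minutes * 2)
    return minutes

interval = 10  # start: a visual ad every 10 minutes on the highway
interval = next_ad_interval(interval, "positive")
print(interval)  # 5
```

Note the asymmetry Ford itself concedes: the loop only ever optimizes toward “maximum company revenue,” with the user’s “natural inclination to seek minimal or no ads” treated as a constraint to work around.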

Given the rush by some automakers to turn their vehicles into subscription-based cars-as-a-service, egged on by the chip world, we’re not surprised by efforts to wring more money out of motorists, this time with adverts. We assume other automakers have filed patents similar to Ford’s.

Trust us!

Then there’s the fact that automakers aren’t terrific on privacy and safeguarding the kinds of info that are used to tailor ads. In September last year, Mozilla published a report on the privacy policies of several automakers whose connected vehicles harvest information about owners, finding that 25 major manufacturers – Ford among them – failed to live up to the Firefox maker’s standards.

Just a couple of months later, a Washington state appeals court ruled it was perfectly legal for vehicles to harvest text and call data from connected smartphones and store it all in memory.

US senators have urged the FTC to investigate several car makers for allegedly selling customer data unlawfully, though we note Ford is not among the companies accused in that matter.

That said, the patent application makes no mention of how the automaker would protect user data used to serve in-vehicle ads. A couple of other potentially privacy-infringing Ford patents from the past year are worth mentioning, too.


In 2023, Ford filed a patent application for an embedded vehicle system that would automate vehicle repossession if car payments weren’t made. Over the summer, another application described a system in which vehicles monitor each other’s speeds; if one detects a nearby car speeding, it could snap photos using onboard cameras and send the images, along with speed data, directly to police or roadside monitors. Neither has privacy advocates thrilled.

Bear in mind that neither of those patents may ever see production, and this advertising one might not make it past the “let’s file this patent before the competition just in case” stage of life, either. That’s essentially what Ford told us.

“Submitting patent applications is a normal part of any strong business as the process protects new ideas and helps us build a robust portfolio of intellectual property,” a Ford spokesperson told The Register. “The ideas described within a patent application should not be viewed as an indication of our business or product plans.”

Ford also said it always puts customers first in development of new products and services, though didn’t directly answer questions about a lack of privacy assurances in the patent application. In any case, it may not actually happen. Until it does.

Source: Who wants in-car ads tailored to your journey, passengers? • The Register

Resistance to Hungarian presidency’s new push for child sexual abuse prevention regulation – because it’s a draconian spying law asking for 100% coverage of digital comms

Resistance to the Hungarian presidency’s approach to the EU’s draft law to combat online child sexual abuse material (CSAM) was still palpable during a member states’ meeting on Wednesday (4 September).

The Hungarian presidency of the Council of the EU aims to secure consensus on the proposed law to combat online child sexual abuse material (CSAM) by October, according to an EU diplomat and earlier reports by Politico.

Hungary has prepared a compromise note on the draft law, also reported by Contexte.

The note, presented at a meeting of ambassadors on Wednesday, seeks political guidance to make progress at the technical level, the EU diplomat told Euractiv.

With the voluntary regime expiring in mid-2026, most member states agree that urgent action is needed, the diplomat continued.

But some member states are still resistant to the Hungarian presidency’s latest approach.

The draft law to detect and remove online child sexual abuse material (CSAM) was removed from the agenda of Thursday’s (20 June) meeting of the Committee of Permanent Representatives (COREPER), who were supposed to vote on it.

Sources close to the matter told Euractiv that Poland and Germany remain opposed to the proposal, with smaller member states also voicing concerns, potentially forming a blocking minority.

Although France and the Netherlands initially supported the proposal, the Netherlands has since withdrawn its support, and Italy has indicated that the new proposal is moving in the right direction.

As a result, no agreement was reached to move forward.

Currently, an interim regulation allows companies to voluntarily detect and report online CSAM. Originally set to expire in 2024, this measure has been extended to 2026 to avoid a legislative gap, as the draft for a permanent law has yet to be agreed.

Hungary is expected to introduce a concrete textual proposal soon. The goal is to agree on its general approach by October, the EU diplomat said, a fully agreed position among member states which serves as the basis for negotiations with the European Parliament.

Meanwhile, the European Commission is preparing to send a detailed opinion to Hungary regarding the draft law, expected by 30 September, Contexte reported on Wednesday.

[…]

In the text, the presidency also suggested extending the temporary exemption from certain provisions of the ePrivacy Directive, which governs privacy and electronic communications, for new CSAM and grooming.

[…]

Source: Resistance lingers to Hungarian presidency’s new push for child sexual abuse prevention regulation – Euractiv

See also:

The EU Commission’s Alleged CSAM Regulation ‘Experts’ giving them free rein to spy on everyone: can’t be found. OK then.

EU delays decision over continuous spying on all your devices *cough* scanning encrypted messages for kiddie porn

Signal, MEPs urge EU Council to drop law that puts a spy on everyone’s devices

European human rights court says backdooring encrypted comms is against human rights

EU Commission’s nameless experts behind its “spy on all EU citizens” *cough* “child sexual abuse” law

EU Tries to Implement Client-Side Scanning, death to encryption By Personalised Targeting of EU Residents With Misleading Ads

 

Dutch DPA fines Clearview €30.5 million for violating the GDPR

Clearview AI is back in hot — and expensive — water, with the Dutch Data Protection Authority (DPA) fining the company €30.5 million ($33.6 million) for violating the General Data Protection Regulation (GDPR). The release explains that Clearview created “an illegal database with billions of photos of faces,” including Dutch individuals, and has failed to properly inform people that it’s using their data. In early 2023, Clearview’s CEO claimed the company had 30 billion images.

Clearview must immediately stop all violations or face up to €5.1 million ($5.6 million) in non-compliance penalties. “Facial recognition is a highly intrusive technology, that you cannot simply unleash on anyone in the world,” Dutch DPA chairman Aleid Wolfsen stated. “If there is a photo of you on the Internet — and doesn’t that apply to all of us? — then you can end up in the database of Clearview and be tracked.” He adds that facial recognition can help with safety but that “competent authorities” who are “subject to strict conditions” should handle it rather than a commercial company.

The Dutch DPA further states that since Clearview is breaking the law, using it is also illegal. Wolfsen warns that Dutch companies using Clearview could also be subject to “hefty fines.” Clearview didn’t issue an objection to the Dutch DPA’s fine, so it is unable to launch an appeal.

This fine is far from the first time an entity has stood up against Clearview. In 2020, the LAPD banned its use, and the American Civil Liberties Union (ACLU) sued Clearview, with the settlement ending sales of the biometric database to any private companies. Italy and the UK have previously fined Clearview €20 million ($22 million) and £7.55 million ($10 million), respectively, and instructed the company to delete any data of its residents. Earlier this year, the EU also barred Clearview from untargeted face scraping on the internet.

Source: Clearview faces a €30.5 million fine for violating the GDPR

Proposal to spy on all chat messages back on European political agenda

Europe is going to talk again about the possibility of checking all chat messages of citizens for child abuse. On September 4, a (secret) consultation will take place, says Patrick Breyer , former MEP for the Pirate Party.

A few years ago, the European Commission came up with the plan to monitor all chat messages of citizens. The European Parliament did not like the proposal of the European Commission and came up with its own proposal, which excludes monitoring of end-to-end encrypted services.

At the end of June, EU President Belgium came up with its own version of the proposal. Namely that only the uploading of photos, video and references to them would be checked. This proposal did not get enough votes.

Germany and Poland are the biggest opponents within the EU anyway. The Netherlands, Estonia, Slovenia, the Czech Republic and Austria would abstain from voting, according to Breyer.

A coalition of almost fifty civil society organisations, including the Dutch Offlimits, Bits of Freedom, Vrijschrift.org and ECNL, called on the European Commission in July to withdraw the chat control proposal and focus on measures that really protect children.

Source: Proposal to control chat messages back on European political agenda – Emerce

Guys, stop trying to be Big Brother in the EU – it changes how people behave and not for the better.

Mozilla removes telemetry service Adjust from mobile Firefox versions – it was spying on you secretly it turns out

Mozilla will soon remove its telemetry service Adjust from the Android and iOS versions of browsers Firefox and Firefox Focus. It appeared that the developer was collecting data on the effectiveness of Firefox ad campaigns without disclosing that.

Mozilla, the developers of Firefox, until recently used the telemetry service Adjust to collect data from its Firefox and Firefox Focus apps for both Android and iOS. Through this service, the company collected data on the number of installs of these specific apps following Mozilla’s ad campaigns.

[…]

The company’s actions may also result from previous complaints about the default enabling of ‘privacy-protecting ad metrics’ in Firefox. This option has been enabled by default since the July 9 release of Firefox 128.

The service collects data on how users respond to ads, which is shared with advertisers in aggregated form. Users can disable this option, however.

Mozilla says it regrets enabling such telemetry but defends the reason for turning it on by default. According to the browser provider, advertisers’ desire for information about the effectiveness of their campaigns is very difficult to escape.

[…]

Source: Mozilla removes telemetry service Adjust from mobile Firefox versions – Techzine Global

Oh dear. And I thought that Mozilla was the privacy friendly option. 2 strikes now.

Australian Regulators Decide To Write A Strongly Worded Letter About Clearview’s Privacy Law Violations, leave it at that

Clearview’s status as an international pariah really hasn’t changed much over the past few years. It may be generating fewer headlines, but nothing’s really changed about the way it does business.

Clearview has spent years scraping the web, compiling as much personal info as possible to couple with the billions of photos it has collected. It sells all of this to whoever wants to buy it. In the US, this means lots and lots of cop shops. Also, in the US, Clearview has mostly avoided running into a lot of legal trouble, other than a couple of lawsuits stemming from violations of certain states’ privacy laws.

Elsewhere in the world, it’s a different story. It has amassed millions in fines and plenty of orders to exit multiple countries immediately. These orders also mandate the removal of photos and other info gathered from accounts of these countries’ residents.

It doesn’t appear Clearview has complied with many of these orders, much less paid any of the fines. Clearview’s argument has always been that it’s a US company and, therefore, isn’t subject to rulings from foreign courts or mandates from foreign governments. It also appears Clearview might not be able to pay these fines if forced to, considering it’s now offering lawsuit plaintiffs shares in the company, rather than actual cash, to fulfill its settlement obligations.

Australia is one of several countries that claimed Clearview routinely violated privacy laws. Australia is also one of several that told Clearview to get out. Clearview’s response to the allegations and mandates delivered by Australian privacy regulators was the standard boilerplate: we don’t have offices in Australia, so we’re not going to comply with your demands.

Perhaps it’s this international stalemate that has prompted the latest bit of unfortunate news on the Clearview-Australia front. The Office of the Australian Information Commissioner (OAIC) has issued a statement that basically says it’s not going to waste any more time and money trying to get Clearview to respect Australia’s privacy laws. (h/t The Conversation)

Before giving up, the OAIC has this to say about its findings:

That determination found that Clearview AI, through its collection of facial images and biometric templates from individuals in Australia using a facial recognition technology, contravened the Privacy Act, and breached several Australian Privacy Principles (APPs) in Schedule 1 of the Act, including by collecting the sensitive information of individuals without consent in breach of APP 3.3 and failing to take reasonable steps to implement practices, procedures and systems to comply with the APPs.

Notably, the determination found that Clearview AI indiscriminately collected images of individuals’ faces from publicly available sources across the internet (including social media) to store in a database on the organisation’s servers. 

This was followed by the directive ordering Clearview to stop doing business in the country and delete any data it held pertaining to Australian residents. The statement notes Clearview’s only responses were a.) challenging the order in court in 2021 and b.) withdrawing entirely from the proceedings two years later. The OAIC notes that nothing appears to have changed in terms of how Clearview handles its collections. It also says it has no reason to believe Clearview has stopped collecting Australian persons’ data.

Despite all of that, it has decided to do absolutely nothing going forward:

Privacy Commissioner Carly Kind said, “I have given extensive consideration to the question of whether the OAIC should invest further resources in scrutinising the actions of Clearview AI, a company that has already been investigated by the OAIC and which has found itself the subject of regulatory investigations in at least three jurisdictions around the world as well as a class action in the United States. Considering all the relevant factors, I am not satisfied that further action is warranted in the particular case of Clearview AI at this time.”

That’s disappointing. It makes it clear the company can avoid being held accountable for its legal violations by simply refusing to honor mandates issued by foreign countries or pay any fines levied. It can just continue to be the awful, ethically-horrendous company it has always been because, sooner or later, regulators are just going to give up and move on to softer targets.

[…]

Source: Australian Regulators Decide To Do Absolutely Nothing About Clearview’s Privacy Law Violations | Techdirt

Dutch officials fine Uber €290M for GDPR violations

Privacy authorities in the Netherlands have imposed a €290 million ($324 million) fine on ride-share giant Uber for sending driver data to servers in the United States – “a serious violation” of the EU’s General Data Protection Regulation (GDPR).

According to the Dutch Data Protection Authority (DPA), Uber spent years sending sensitive driver information from Europe to the US. Among the data that was transmitted were taxi licenses, location data, payment details, identity documents, and medical and criminal records. The data was sent abroad without the use of “transfer tools,” which the DPA said means the data wasn’t sufficiently protected.

“Businesses are usually obliged to take additional measures if they store personal data of Europeans outside the European Union,” Dutch DPA chairman Aleid Wolfsen said of the decision. “Uber did not meet the requirements of the GDPR to ensure the level of protection to the data with regard to transfers to the US. That is very serious.”

The Dutch DPA said that the investigation that led to the fine began after complaints from a group of more than 170 French Uber drivers who alleged their data was being sent to the US without adequate protection. Because Uber’s European operations are based in the Netherlands, enforcement for GDPR violations fell to the Dutch DPA.

Unfortunately for Uber, it already has an extensive history with the Dutch DPA, which has fined the outfit twice before.

The first came in 2018, when the authority fined Uber €600,000 for failing to report a data breach (a slugfest that several EU countries joined in on). The second, a €10 million fine, came earlier this year after Dutch officials determined Uber had failed to disclose its retention practices for EU drivers’ data, refused to name which countries the data was sent to, and obstructed its drivers’ right to privacy.

[…]

The uncertainty Uber refers to stems from the EU’s striking down of the EU-US Privacy Shield agreement and the years of efforts to replace it with a new rule that defines the safe transfer of personal data between the two regions.

Uber claims it’s done its job under the GDPR to safeguard data belonging to European citizens – it didn’t even need to make any data transfer process changes to comply with the latest rules.

[…]

Source: Dutch officials fine Uber €290M for GDPR violations • The Register

Texas AG Latest To Sue GM For Covertly Selling Driver Data To Insurance Companies

Last year Mozilla released a report showcasing how the auto industry has some of the worst privacy practices of any tech industry in America (no small feat). Massive amounts of driver-behavior data are collected by your car, and even more are hoovered up from your smartphone every time you connect. This data isn’t secured, often isn’t encrypted, and is sold to a long list of dodgy, unregulated middlemen.

Last March the New York Times revealed that automakers like GM routinely sell access to driver behavior data to insurance companies, which then use that data to justify jacking up your rates. The practice isn’t clearly disclosed to consumers, and has resulted in 11 federal lawsuits in less than a month.

Now Texas AG Ken Paxton has belatedly joined the fun, filing suit (press release, complaint) in the state district court of Montgomery County against GM for “false, deceptive, and misleading business practices”:

“Companies are using invasive technology to violate the rights of our citizens in unthinkable ways. Millions of American drivers wanted to buy a car, not a comprehensive surveillance system that unlawfully records information about every drive they take and sells their data to any company willing to pay for it.”

Paxton notes that GM’s tracking impacted 1.8 million Texans and 14 million vehicles, few if any of whom understood they were signing up to be spied on by their vehicle. This is, amazingly enough, the first state lawsuit against an automaker for privacy violations, according to Politico.

The sales pitch for this kind of tracking and sales is that good drivers will be rewarded for more careful driving. But as publicly traded companies, everybody in this chain, from insurance companies to automakers, is financially desensitized to giving anybody a consistent break for good behavior. That’s just not how it’s going to work. Everybody pays more and more. Always.

But GM and other automakers’ primary problem is that they weren’t telling consumers this kind of tracking was even happening in any clear, direct way. Usually it’s buried deep in an unread end user agreement for roadside assistance apps and related services. Those services usually involve a free trial, but the consent to data collection sticks around after the trial ends.

[…]

Source: Texas AG Latest To Sue GM For Covertly Selling Driver Data To Insurance Companies | Techdirt

UN Cybercrime Treaty does not define cybercrime, allows any definition and forces all signatories to secretly surveil their own population on request by any other signatory (think totalitarian states spying on people in democracies with no recourse)

[…] EFF colleague, Katitza Rodriguez, about the Cybercrime Treaty, which is about to pass, and which is, to put it mildly, terrifying:

https://www.eff.org/deeplinks/2024/07/un-cybercrime-draft-convention-dangerously-expands-state-surveillance-powers

Look, cybercrime is a real thing, from pig butchering to ransomware, and there’s real, global harms that can be attributed to it. Cybercrime is transnational, making it hard for cops in any one jurisdiction to handle it. So there’s a reason to think about formal international standards for fighting cybercrime.

But that’s not what’s in the Cybercrime Treaty.

Here’s a quick sketch of the significant defects in the Cybercrime Treaty.

The treaty has an extremely loose definition of cybercrime, and that looseness is deliberate. In authoritarian states like China and Russia (whose delegations are the driving force behind this treaty), “cybercrime” has come to mean “anything the government disfavors, if you do it with a computer.” “Cybercrime” can mean online criticism of the government, or professions of religious belief, or material supporting LGBTQ rights.

Nations that sign up to the Cybercrime Treaty will be obliged to help other nations fight “cybercrime” – however those nations define it. They’ll be required to provide surveillance data – for example, by forcing online services within their borders to cough up their users’ private data, or even to pressure employees to install back-doors in their systems for ongoing monitoring.

These obligations to aid in surveillance are mandatory, but much of the Cybercrime Treaty is optional. What’s optional? The human rights safeguards. Member states “should” or “may” create standards for legality, necessity, proportionality, non-discrimination, and legitimate purpose. But even if they do, the treaty can oblige them to assist in surveillance orders that originate with other states that decided not to create these standards.

When that happens, the citizens of the affected states may never find out about it. There are eight articles in the treaty that establish obligations for indefinite secrecy regarding surveillance undertaken on behalf of other signatories. That means that your government may be asked to spy on you and the people you love, they may order employees of tech companies to backdoor your account and devices, and that fact will remain secret forever. Forget challenging these sneak-and-peek orders in court – you won’t even know about them:

https://www.eff.org/deeplinks/2024/06/un-cybercrime-draft-convention-blank-check-unchecked-surveillance-abuses

Now here’s the kicker: while this treaty creates broad powers to fight things governments dislike, simply by branding them “cybercrime,” it actually undermines the fight against cybercrime itself. Most cybercrime involves exploiting security defects in devices and services – think of ransomware attacks – and the Cybercrime Treaty endangers the security researchers who point out these defects, creating grave criminal liability for the people we rely on to warn us when the tech vendors we rely upon have put us at risk.

[…]

When it comes to warnings about the defects in their own products, corporations have an irreconcilable conflict of interest. Time and again, we’ve seen corporations rationalize their way into suppressing or ignoring bug reports. Sometimes, they simply delay the warning until they’ve concluded a merger or secured a board vote on executive compensation.

Sometimes, they decide that a bug is really a feature

Note: Responsible disclosure is something people should really “get” by now.

[…]

The idea that users are safer when bugs are kept secret is called “security through obscurity” and no one believes in it – except corporate executives

[…]

The spy agencies have an official doctrine defending this reckless practice: they call it “NOBUS,” which stands for “No One But Us.” As in: “No one but us is smart enough to find these bugs, so we can keep them secret and use them to attack our adversaries, without worrying about those adversaries using them to attack the people we are sworn to protect.”

NOBUS is empirically wrong.

[…]

The leak of these cyberweapons didn’t just provide raw material for the world’s cybercriminals, it also provided data for researchers. A study of CIA and NSA NOBUS defects found that there was a one-in-five chance of a bug that had been hoarded by a spy agency being independently discovered by a criminal, weaponized, and released into the wild.

[…]

A Cybercrime Treaty is a good idea, and even this Cybercrime Treaty could be salvaged. The member-states have it in their power to accept proposed revisions that would protect human rights and security researchers, narrow the definition of “cybercrime,” and mandate transparency. They could establish member states’ powers to refuse illegitimate requests from other countries:

https://www.eff.org/press/releases/media-briefing-eff-partners-warn-un-member-states-are-poised-approve-dangerou


Source: Pluralistic: Holy CRAP the UN Cybercrime Treaty is a nightmare (23 Jul 2024) – Pluralistic: Daily links from Cory Doctorow

Google isn’t killing third-party cookies in Chrome after all in move that surprises absolutely no-one.

Google won’t kill third-party cookies in Chrome after all, the company said on Monday. Instead, it will introduce a new experience in the browser that will allow users to make informed choices about their web browsing preferences, Google announced in a blog post. Killing cookies, Google said, would adversely impact online publishers and advertisers. This announcement marks a significant shift from Google’s previous plans to phase out third-party cookies by early 2025.

[…]

Google will now focus on giving users more control over their browsing data, Chavez wrote. This includes additional privacy controls like IP Protection in Chrome’s Incognito mode and ongoing improvements to Privacy Sandbox APIs.

Google’s decision provides a reprieve for advertisers and publishers who rely on cookies to target ads and measure performance. Over the past few years, the company’s plans to eliminate third-party cookies have been riding on a rollercoaster of delays and regulatory hurdles. Initially, Google aimed to phase out these cookies by the end of 2022, but the deadline was pushed to late 2024 and then to early 2025 due to various challenges and feedback from stakeholders, including advertisers, publishers, and regulatory bodies like the UK’s Competition and Markets Authority (CMA).

In January 2024, Google began rolling out a new feature called Tracking Protection, which restricts third-party cookies by default for 1% of Chrome users globally. This move was perceived as the first step towards killing cookies completely. However, concerns and criticism about the readiness and effectiveness of Google’s Privacy Sandbox, a collection of APIs designed to replace third-party cookies, prompted further delays.

The CMA and other regulatory bodies have expressed concerns about Google’s Privacy Sandbox, fearing it might limit competition and give Google an unfair advantage in the digital advertising market. These concerns have led to extended review periods and additional scrutiny, complicating Google’s timeline for phasing out third-party cookies. Shortly after Google’s Monday announcement, the CMA said that it was “considering the impact” of Google’s change of direction.

Source: Google isn’t killing third-party cookies in Chrome after all

Firefox’s New ‘Privacy’ Feature Actually Gives Your Data to Advertisers – How and Why to Disable Firefox’s ‘Privacy-Preserving’ Ad Measurements

Firefox finds itself in a tricky position at times, because it wants to be a privacy friendly browser, but most of its funding comes from Google, whose entire business is advertising. With Firefox 128, the browser has introduced ‘privacy-preserving ad measurement,’ which is enabled by default. Despite the name, the actual implications of the feature have users upset.

What ‘privacy-preserving ad measurement’ means

In a blog post, Firefox’s maker Mozilla has explained that this new feature is an experiment designed to shape a web standard for advertisers, one that relies less on cookies but still tracks you in some way. Mozilla says privacy-preserving ad measurement is only being used by a handful of sites at the moment, in order to tell if their ads were successful or not.

[…]

With privacy-preserving ad measurement, sites will be able to ask Firefox if people clicked on an ad, and if they ended up doing something the ad wanted them to (such as buying a product). Firefox doesn’t give this data directly to advertisers, but encrypts it, aggregates it, and submits it anonymously. This means that your browsing activity and other data about you is hidden from the advertiser, but they can see if their campaign delivered results or not. It’s a similar feature to those in Chrome’s Privacy Sandbox, although Google itself has run into regulatory issues implementing them.
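The aggregation step is the crux of schemes like this. Mozilla’s experiment reportedly builds on DAP/Prio-style aggregation; the sketch below illustrates only the core idea, additive secret sharing, with illustrative names and parameters. Each browser splits its 0-or-1 conversion report into random shares sent to two non-colluding aggregators, so neither party ever sees an individual’s answer, yet the combined sum reveals the campaign total:

```python
import secrets

P = 2**61 - 1  # large prime modulus (illustrative choice)

def split_report(conversion_bit):
    """Split a 0/1 conversion report into two additive secret shares.
    Neither share alone reveals anything about the bit."""
    r = secrets.randbelow(P)
    return r, (conversion_bit - r) % P

# Simulate 1,000 users who saw an ad; 30% of them converted.
reports = [1 if i % 10 < 3 else 0 for i in range(1000)]

shares_a, shares_b = zip(*(split_report(b) for b in reports))

# Each aggregator sums only its own shares; only the combined
# totals reveal anything, and then only the aggregate count.
total = (sum(shares_a) + sum(shares_b)) % P
print(total)  # → 300, with no per-user conversion data exposed
```

Real deployments add zero-knowledge proofs that each share pair encodes a valid 0/1 value, plus encryption of each share to its aggregator; this sketch omits both.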

Why you should disable this feature

Even though Mozilla’s intentions appear to be genuine, this feature should never have been enabled by default: no matter its label, it still technically gives advertisers your data. When advertisers started tracking people online, there were no privacy protections, laws, or standards to follow, and the industry chose to track all the data it could lay its hands on. No one ever asked users if they wanted to be tracked, or if they wanted to give advertisers access to their location, browser data, or personal preferences. If I’ve learned one thing from the way the online ad industry evolved, it’s that people should have a choice in whether their data is tracked. Even if it seeks to replace even more invasive systems, Firefox should have offered people a choice to opt into ad measurement, instead of enabling it silently.

[…]

To disable privacy-preserving ad measurement in Firefox 128, click the three-lines icon in the top-right corner in the browser. Then, go to Settings > Privacy & Security and scroll down to the Website Advertising Preferences section. There, disable Allow websites to perform privacy-preserving ad measurement.

Source: How and Why to Disable Firefox’s ‘Privacy-Preserving’ Ad Measurements | Lifehacker

Only 5 years too late: British regulators to examine Big Tech’s digital wallets – and where is the EU?

British regulators said on Monday they were looking into the soaring use of digital wallets offered by Big Tech firms, including whether there are any competition, consumer protection or market integrity concerns.
The Financial Conduct Authority and the Payment Systems Regulator are seeking views on the benefits and risks, and will assess the impact digital wallets, such as Apple Pay, Google Pay and PayPal, have on competition and choice of payment options at checkout, among other things.
Digital wallets are now likely used by more than half of UK adults and have become “an increasingly important touchpoint” between Big Tech companies and UK consumers, they said in a statement.
“Digital wallets are steadily becoming a go-to payment type and while this presents exciting opportunities, there might be risks too,” said David Geale, the PSR’s managing director.
Nikhil Rathi, the FCA’s chief executive, said the growth of digital wallets represented a “seismic shift” in how people pay and regulators wanted to maximise the opportunities while “protecting against any risks this technology may present.”
Regulators and lawmakers in Europe and the United States have been examining the growing role of Big Tech in financial services.
The U.S. consumer watchdog last year proposed regulating payments and smartphone wallets, prompting criticism from the industry.
The British regulators said their review of digital wallets built on their previous work on contactless mobile payments and on the role of Big Tech firms in financial services.
After considering all feedback, the regulators will provide an update on Big Tech and digital wallets by the first quarter of 2025.

Source: British regulators to examine Big Tech’s digital wallets | Reuters

Considering that people using the services generally don’t understand that they are giving their payment history to the big tech company that runs it – and is not a bank – this is way way way too late.

Dutch DPA gets off its ass: fine of 600,000 euros for tracking cookies on Kruidvat.nl – detected in 2020

The Dutch Data Protection Authority (AP) has imposed a fine of 600,000 euros on the company behind the Kruidvat drugstore. Kruidvat.nl followed consumers with tracking cookies, without their knowledge or permission. AS Watson collected and used sensitive personal data from millions of website visitors against the rules.

The company behind Kruidvat collected data from website visitors and was able to create personal profiles. In addition to visitors’ location data, this included which pages they visited, which products they added to the shopping cart and purchased and which recommendations they clicked on.

That is very sensitive information, the AP points out, due to the specific nature of drugstore products, such as pregnancy tests, contraceptives, or medication for all kinds of ailments. That sensitive information, linked to the location (which may be traceable via the IP address) of the unique visitor, can sketch a very specific and invasive profile of the people who visit Kruidvat.nl.

Kruidvat.nl should have asked permission to place tracking cookies on visitors’ computers. The GDPR privacy law sets a number of requirements for valid consent. These requirements are that consent must be given freely, for specific processing of personal data, on the basis of sufficient information and that there must be no doubt that consent has been given.

In the cookie banner on Kruidvat.nl, the boxes to agree to the installation of tracking software were checked by default. That’s not allowed. Visitors who still wanted to refuse the cookies had to go through many steps to achieve this. The AP has found that personal data of website visitors to Kruidvat.nl have been processed unlawfully.

At the end of 2019, the AP started an investigation into various websites, including Kruidvat.nl. The AP tested whether these websites met the requirements for placing (tracking) cookies. The AP checked whether permission for tracking cookies was asked from website visitors and, if so, how exactly this happened.

In April 2020, the AP found that Kruidvat.nl did not comply, after which it sent the company a letter. Later in 2020, the AP found that Kruidvat.nl was still not in order, and began investigating the website further. The violation ended in October 2020.

There is increasing social irritation about cookies and cookie notifications, ranging from annoying and misleading banners to concerns about the secret tracking of internet users. In 2024, the AP will check more often whether websites correctly request permission for tracking cookies or other tracking software.

Source: Boete van 600.000 euro voor tracking cookies op Kruidvat.nl – Emerce

WTFBBQ?! Firefox Starts collecting personal ad preferences

In a world where so much of our lives depend on the use of online services, the web browser used to access those services becomes of crucial importance. It becomes a question of whether we trust the huge corporate interests which control this software with such access to our daily lives, and it is vital that the browser world remains a playing field with many players in the game.

The mantle has traditionally fallen upon Mozilla’s Firefox browser to represent freedom from corporate ownership, but over the last couple of years even they have edged away from their open source ethos and morphed into an advertising company that happens to have a browser. We’re asking you: can we still trust Mozilla’s Firefox, when the latest version turns on ad measurement by default?

Such has been the dominance of Google’s Chromium in the browser world that it becomes difficult to find alternatives which aren’t based on it. We can see the attraction for developers: instead of pursuing the extremely hard task of developing a new browser engine, they can use one off the shelf on which someone else has already done the work. As a result, once you have discounted browsers such as the venerable NetSurf or Dillo, which are cool as heck but relatively useless for modern websites, the choices quickly descend into the esoteric. There are Ladybird and Servo, which are both promising but still too rough around the edges for everyday use, so what’s left? Probably LibreWolf represents the best option: a version of Firefox with a focus on privacy and security.

[…]

Source: Ask Hackaday: Has Firefox Finally Gone Too Far? | Hackaday

Many comments in the thread in the source. Definitely worth looking at.

Apple settles EU case by opening its iPhone payment system to rivals

The EU on Thursday accepted Apple’s pledge to open its “tap to pay” iPhone payment system to rivals as a way to resolve an antitrust case and head off a potentially hefty fine.

The European Commission, the EU’s executive arm and top antitrust enforcer, said it approved the commitments that Apple offered earlier this year and will make them legally binding.

Regulators had accused Apple in 2022 of abusing its dominant position by limiting access to its mobile payment technology.

Apple responded by proposing in January to allow third-party mobile wallet and payment service providers access to the contactless payment function in its iOS operating system. After Apple tweaked its proposals following testing and feedback, the commission said those “final commitments” would address its competition concerns.

“Today’s commitments end our Apple Pay investigation,” Margrethe Vestager, the commission’s executive vice-president for competition policy, told a press briefing in Brussels. “The commitments bring important changes to how Apple operates in Europe to the benefit of competitors and customers.”

Apple said in a prepared statement that it is “providing developers in the European Economic Area with an option to enable NFC [near-field communication] contactless payments and contactless transactions” for uses like car keys, corporate badges, hotel keys and concert tickets.

[…]

The EU deal promises more choice for Europeans. Vestager said iPhone users will be able to set a default wallet of their choice while mobile wallet developers will be able to use important iPhone verification functions like Face ID.

[…]

Analysts said there would be big financial incentives for companies to use their own wallets rather than letting Apple act as the middleman, resulting in savings that could trickle down to consumers. Apple charges banks 0.15% for each credit card transaction that goes through Apple Pay, according to the justice department’s lawsuit.

Apple must open up its payment system in the EU’s 27 countries plus Iceland, Norway and Liechtenstein by 25 July.

“As of this date, developers will be able to offer a mobile wallet on the iPhone with the same ‘tap-and-go’ experience that so far has been reserved for Apple Pay,” Vestager said. The changes will remain in force for a decade and will be monitored by a trustee.

Breaches of EU competition law can draw fines worth up to 10% of a company’s annual global revenue, which in Apple’s case could have amounted to tens of billions of euros.

“The main advantage to the issuer bank of supporting an alternative to Apple Pay via iPhone is the reduction in fees incurred, which can be substantial,” said Philip Benton, a principal analyst at research and advisory firm Omdia. To encourage iPhone users to switch away from Apple Pay to another mobile wallet, “the fee reduction needs to be partially passed onto the consumer” through benefits like cashback or loyalty rewards, he said.

Banks and consumers could also benefit in other ways.

If companies use their own apps for tap-and-go payments, they would get “full visibility” of their customers’ transactions, said Ben Wood, chief analyst at CCS Insight. That data would allow them to “build brand loyalty and trust and offer more personalised services, rewards and promotions directly to the user”, he said.

Source: Apple settles EU case by opening its iPhone payment system to rivals | Apple | The Guardian

Note: Currently, Apple has this full visibility of your transactions. Are you sure you want to trust a company like that with your financial data?

I wonder how childishly Apple will handle this, considering how it has gone about “opening up” its app store and allowing home screen apps (not really at all)

Why all Chromium browsers tell Google about your CPU, GPU usage? A whitewashing bullshit explanation.

Running a Chromium-based browser, such as Google Chrome or Microsoft Edge? The chances are good it’s quietly telling Google all about your CPU and GPU usage when you visit one of the search giant’s websites.

The feature is, from what we can tell, for performance monitoring and not really for tracking – Google knows who you are and what you’re doing anyway when you’re logged into and using its sites – but it does raise some antitrust concerns in light of Europe’s competition-fostering Digital Markets Act (DMA).

When visiting a *.google.com domain, the Google site can use the API to query the real-time CPU, GPU, and memory usage of your browser, as well as info about the processor you’re using, so that whatever service is being provided – such as video-conferencing with Google Meet – could, for instance, be optimized and tweaked so that it doesn’t overly tax your computer. The functionality is implemented as an API provided by an extension baked into Chromium – the browser brains primarily developed by Google and used in Chrome, Edge, Opera, Brave, and others.

Non-Chromium-based browsers – such as Mozilla’s Firefox – don’t have that extension, which puts them at a potential disadvantage. Without the API, they may offer a worse experience on Google sites than what’s possible on the same hardware with Google’s own browser, because they can’t provide that live performance info.

There is, however, nothing technically stopping Moz or other browser-engine makers from implementing a similar extension themselves in Firefox, if they so chose.

Crucially though, websites that compete against Google can’t access the Chromium API. This is where technical solutions start to look potentially iffy in the eyes of Europe’s DMA.

Netherlands-based developer Luca Casonato highlighted the extension’s existence this week on social media, and his findings went viral – with millions of views. We understand at least some people have known about the code for a while now – indeed, it’s all open source and can be found here in the preinstalled extension hangout_services.

That name should give you a clue to its origin. It was developed last decade to provide browser-side functionality to Google Hangouts – a product that got split into today’s Google Meet and Chat. Part of that functionality is logging for Google, upon request, stats about your browser’s use of your machine’s compute resources when visiting a *.google.com domain – such as meet.google.com.

Casonato noted that the extension can’t be disabled in Chrome, at least, and it doesn’t show up in the extension panel. He observed it’s also included in Microsoft Edge and Brave, both of which are Chromium based. We reached out to Casonato for more of his thoughts on this – though given the time differences between him in Europe and your humble vulture in the US, we didn’t immediately hear back.

Explanation

If you’ve read this far there’s probably an obvious question on your mind: What’s to say this API is malicious? We’re not saying that, and neither is Casonato. Google isn’t saying that either.

“Today, we primarily use this extension for two things: To improve the user experience by optimizing configurations for video and audio performance based on system capabilities [and] provide crash and performance issue reporting data to help Google services detect, debug, and mitigate user issues,” a Google spokesperson told us on Thursday.

“Both are important for the user experience and in both cases we follow robust data handling practices designed to safeguard user privacy,” the spokesperson added.

As we understand it, Google Meet today uses the old Hangouts extension to, for one thing, vary the quality of the video stream if the current resolution is proving too much for your PC. Other Google sites are welcome to use the thing, too.

That all said, the extension’s existence could be harmful to competition as far as the EU is concerned – and that seems to be why Casonato pointed it out this week.

Source: Why Chromium tells Google sites about your CPU, GPU usage • The Register

A lovely explanation, but the fact remains that Chromium is sending personal information to a central company, Google, without informing users at all. This blanket explanation could be used to whitewash any information sent through Chromium: the contents of your memory? Improving user experience. The position of your mouse on websites? Improving user experience. It just does not wash.

Data breach exposes millions of mSpy spyware customer support tickets

Unknown attackers stole millions of customer support tickets from mSpy in May 2024, including personal information, emails to support, and attachments containing personal documents. While hacks of spyware purveyors are becoming increasingly common, they remain notable because of the highly sensitive personal information often included in the data, in this case about the customers who use the service.

The hack encompassed customer service records dating back to 2014, which were stolen from the spyware maker’s Zendesk-powered customer support system.

mSpy is a phone surveillance app that promotes itself as a way to track children or monitor employees. Like most spyware, it is also widely used to monitor people without their consent. These kinds of apps are also known as “stalkerware” because people in romantic relationships often use them to surveil their partner without consent or permission.

The mSpy app allows whoever planted the spyware, typically someone who previously had physical access to a victim’s phone, to remotely view the phone’s contents in real-time.

As is common with phone spyware, mSpy’s customer records include emails from people seeking help to surreptitiously track the phones of their partners, relatives, or children, according to TechCrunch’s review of the data, which we independently obtained. Some of those emails and messages include requests for customer support from several senior-ranking U.S. military personnel, a serving U.S. federal appeals court judge, a U.S. government department’s watchdog, and an Arkansas county sheriff’s office seeking a free license to trial the app.

Even at several million customer service tickets, the leaked Zendesk data is thought to represent only the portion of mSpy’s overall customer base that reached out for customer support. The total number of mSpy customers is likely to be far higher.

Yet more than a month after the breach, mSpy’s owners, a Ukraine-based company called Brainstack, have not acknowledged or publicly disclosed the breach.

Troy Hunt, who runs data breach notification site Have I Been Pwned, obtained a copy of the full leaked dataset, adding about 2.4 million unique email addresses of mSpy customers to his site’s catalog of past data breaches.

[…]

Some of the email addresses belong to unwitting victims who were targeted by an mSpy customer. The data also shows that some journalists contacted the company for comment following the company’s last known breach in 2018. And, on several occasions, U.S. law enforcement agents filed or sought to file subpoenas and legal demands with mSpy. In one case following a brief email exchange, an mSpy representative provided the billing and address information about an mSpy customer — an alleged criminal suspect in a kidnapping and homicide case — to an FBI agent.

Each ticket in the dataset contained an array of information about the people contacting mSpy. In many cases, the data also included their approximate location based on the IP address of the sender’s device.

[…]

The emails in the leaked Zendesk data show that mSpy and its operators are acutely aware of what customers use the spyware for, including monitoring of phones without the person’s knowledge. Some of the requests cite customers asking how to remove mSpy from their partner’s phone after their spouse found out. The dataset also raises questions about the use of mSpy by U.S. government officials and agencies, police departments, and the judiciary, as it is unclear if any use of the spyware followed a legal process.

[…]

This is the third known mSpy data breach since the company began in around 2010. mSpy is one of the longest-running phone spyware operations, which is in part how it accumulated so many customers.

[…]

The breach of mSpy's Zendesk data exposed its parent company as a Ukrainian tech firm called Brainstack.

[…]

Source: Data breach exposes millions of mSpy spyware customers | TechCrunch

Why You Should Consider Proton Docs Over Google

Proton has officially launched Docs in Proton Drive, a new web-based productivity app that gives you access to a fully featured text editor with shared editing capabilities and full end-to-end encryption. It's meant to take on Google Docs, one of the world's leading online word processors, and to make Proton's storage service more convenient to use. But how exactly does Proton's document editor compare to Google's? Here's what you need to know.

Docs in Proton Drive has a familiar face

On the surface, Docs in Proton Drive (or Proton Docs, as some folks have begun calling it for simplicity's sake) looks just like Google Docs. And that's to be expected. Text editors don't have much reason to stray from the same basic "white page with a bunch of toolbars" look, and they all offer the same basic tools: headings, bullet points, font changes, highlighting, and so on.

[…]

The difference isn’t in the app itself

[…]

Proton has built its entire business around the motto of “privacy first,” and that extends to the company’s latest software offerings, too. Docs in Proton Drive includes complete end-to-end encryption—down to your cursor movements—which means nobody, not even Proton, can track what you’re doing in your documents. They’re locked down before they even reach Proton’s servers.

This makes the product very enticing for businesses that want to keep their work as private as possible while retaining the same functionality as Google Docs. Proton isn't missing any of the features Google Docs offers, aside from the way Google Docs integrates with the rest of Google Workspace.

That's not to say Google isn't secure: it encrypts your data in transit and at rest in the cloud. But that encryption isn't end-to-end, so Google itself retains access to your data. Google says it only trains its generative AI on "publicly accessible" information, and while that probably won't affect most people, it is a pain point for many, especially as the company makes exceptions for features like Smart Compose.
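The difference between the two models is easy to sketch in code. Below is a toy Python illustration of the end-to-end idea, not Proton's actual scheme (Proton's encryption is OpenPGP-based): the document is encrypted on the user's device with a key the server never sees, so the provider can store the file but never read it. The SHA-256-based stream cipher here is purely for illustration and is not safe for real use.

```python
import hashlib
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy keystream: hash key + nonce + counter until enough bytes exist."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    """Encrypt client-side; a fresh nonce is prepended to the ciphertext."""
    nonce = secrets.token_bytes(16)
    ks = keystream(key, nonce, len(plaintext))
    return nonce + bytes(p ^ k for p, k in zip(plaintext, ks))

def decrypt(key: bytes, blob: bytes) -> bytes:
    """Decrypt client-side using the same key and the stored nonce."""
    nonce, ciphertext = blob[:16], blob[16:]
    ks = keystream(key, nonce, len(ciphertext))
    return bytes(c ^ k for c, k in zip(ciphertext, ks))

key = secrets.token_bytes(32)          # stays on the user's device
doc = b"Quarterly plan: keep this private."
stored_on_server = encrypt(key, doc)   # the provider only ever holds this blob
# The server cannot recover `doc` without `key`; the client can.
assert decrypt(key, stored_on_server) == doc
```

In the server-side model, by contrast, the provider holds (or can obtain) the key, which is exactly why Google can index, scan, or train on stored content and Proton cannot.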

That worry is why end-to-end encryption has become such a selling point in recent years, especially as cybersecurity risks continue to rise and you have to place ever more trust in the companies that store your data. Proton's advantage is that it promises to NEVER use your content for any purpose, and those aren't empty words: because the company doesn't have access to your content, it couldn't use it even if it wanted to.

[…]

Source: Why You Should Consider Proton Docs Over Google | Lifehacker

Proton Docs is a privacy-focused answer to Google Docs and Microsoft Word

Proton Docs looks a lot like Google Docs: white pages, formatting toolbar at the top, live indicators showing who’s in the doc with their name attached to a cursor, the whole deal. That’s not especially surprising, for a couple of reasons. First, Google Docs is hugely popular, and there are only so many ways to style a document editor anyway. Second, Proton Docs exists in large part to be all the things that are great about Google Docs — just without Google in the mix.

Docs is launching today inside of Proton Drive, as the latest app in Proton's privacy-focused suite of work tools. The company that started as an encrypted email service now also offers a calendar, file storage, a password manager, and more. Adding Docs to the ecosystem makes sense as Proton tries to compete with Microsoft Office and Google Workspace, and it seemed clearly on the way after Proton acquired the note-taking app Standard Notes in April. Standard Notes isn't going away, though, Proton PR manager Will Moore tells me; Docs is just borrowing some of its features.

The first version of Proton Docs seems to have most of what you’d expect in a document editor: rich text options, real-time collaborative editing, and multimedia support. (If Proton can handle image embeds better than Google, it might have a hit on its hands just for that.) It’s web-only and desktop-optimized for now, though Moore tells me it’ll eventually come to other platforms. “Everything that Google’s got is on our roadmap,” he says.

[Screenshot of multiple editors in Proton Docs. Caption: "Imagine Google Docs… there, that's it. You know what Proton Docs looks like." Image: Proton]

Since this is a Proton product, security is everything: the company says every document, keystroke, and even cursor movement is end-to-end encrypted in real time. Proton has long promised never to sell or otherwise use user data.

[…]

Source: Proton Docs is a privacy-focused answer to Google Docs and Microsoft Word – The Verge

Spain introduces porn passport – really wants to know what you are watching and especially how often erm… no… *cough* to stop kids from watching smut

The Spanish government has a plan to prevent kids from watching porn online: Meet the porn passport.

Officially (and drily) called the Digital Wallet Beta (Cartera Digital Beta), the app Madrid unveiled on Monday would allow internet platforms to check whether a prospective smut-watcher is over 18. Porn viewers will be asked to use the app to verify their age. Once verified, they'll receive 30 generated "porn credits," valid for one month, granting them access to adult content. Enthusiasts will be able to request extra credits.
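The mechanics as described are simple: a successful age check issues 30 credits valid for one month, and each access to an adult site spends one. Here is a minimal, hypothetical sketch of that wallet logic; the names and structure are my own illustration, not Madrid's actual implementation.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

CREDITS_PER_GRANT = 30          # issued after each successful age check
VALIDITY = timedelta(days=30)   # credits expire after one month

@dataclass
class CreditWallet:
    """Toy model of the credit-based age pass described above."""
    credits: int = 0
    expires: Optional[datetime] = None

    def grant(self, now: datetime) -> None:
        # A fresh batch of credits after the user proves they are over 18.
        self.credits = CREDITS_PER_GRANT
        self.expires = now + VALIDITY

    def redeem(self, now: datetime) -> bool:
        # A platform accepts one credit per access; an expired or empty
        # wallet is refused, sending the user back for re-verification.
        if self.expires is None or now > self.expires or self.credits == 0:
            return False
        self.credits -= 1
        return True
```

The privacy argument is that platforms only see a credit being spent rather than a persistent identity, but that property depends entirely on how the credits are issued and signed, details Madrid has not published.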

While the tool has been criticized for its complexity, the government says the credit-based model is more privacy-friendly, ensuring that users’ online activities are not easily traceable.

The system will be available by the end of the summer. It will be voluntary: online platforms can rely on other age-verification methods to screen out underage viewers. It anticipates an EU law coming into force in October 2027 that will require websites to stop minors from accessing porn.

Eventually, Madrid’s porn passport is likely to be replaced by the EU’s very own digital identity system (eIDAS2) — a so-called wallet app allowing people to access a smorgasbord of public and private services across the whole bloc.

“We are acting in advance and we are asking platforms to do so too, as what is at stake requires it,” José Luis Escrivá, Spain’s digital secretary, told Spanish newspaper El País.

Source: Spain introduces porn passport to stop kids from watching smut – POLITICO

Every time they mention kids, have a really good look at how much more they are spying on you and controlling your actions.

EU’s ‘Going Dark’ Expert Group Publishes 42-Point Surveillance Plan For Access To All Devices And Data At All Times

Techdirt has been covering the disgraceful attempts by the EU to break end-to-end encryption — supposedly in order to “protect the children” — for two years now. An important vote that could have seen EU nations back the proposal was due to take place recently. The vote was cancelled — not because politicians finally came to their senses, but the opposite. Those backing the new law were worried the latest draft might not be approved, and so removed it from the agenda, to allow a little more backroom persuasion to be applied to holdouts.

Although this “chat control” law has been the main focus of the EU’s push for more surveillance of innocent citizens, it is by no means the end of it. As the German digital rights site Netzpolitik reports, work is already underway on further measures, this time to address the non-existent “going dark” threat to law enforcement:

The group of high-level experts had been meeting since last year to tackle the so-called "going dark" problem. The High-Level Group set up by the EU was characterized by a bias right from the start: the committee is primarily made up of representatives of security authorities and therefore represents their perspective on the issue.

Given the background and bias of the expert group, it’s no surprise that its report, “Recommendations from the High-Level Group on Access to Data for Effective Law Enforcement”, is a wish-list of just about every surveillance method. The Pirate Party Member of the European Parliament Patrick Breyer has a good summary of what the “going dark” group wants:

According to the 42-point surveillance plan, manufacturers are to be legally obliged to make digital devices such as smartphones, smart homes, IoT devices, and cars monitorable at all times ("access by design"). Messenger services that were previously securely encrypted are to be forced to allow for interception. Data retention, which was overturned by the EU Court of Justice, is to be re-enacted and extended to OTT internet communications services such as messengers. "At the very least," IP connection data retention is to be required in order to track all internet activities. The secure encryption of metadata and subscriber data is to be prohibited. Where requested by the police, GPS location tracking should be activated by service providers ("tracking switch"). Uncooperative providers are to be threatened with prison sentences.

It’s an astonishing list, not least for the re-appearance of data retention, which was thrown out by the EU’s highest court in 2014. It’s a useful reminder that even when bad laws are overturned, constant vigilance is required to ensure that they don’t come back at a later date.

Source: EU’s ‘Going Dark’ Expert Group Publishes 42-Point Surveillance Plan For Access To All Devices And Data At All Times | Techdirt

These people don't seem to realise that opening this stuff up for law enforcement (who do misuse their powers) also opens it up to criminals.