The Linkielist

Linking ideas with the world

Facebook revenue chief says ad-supported model is ‘under assault’ – boo hoo, turns out people like their privacy

Facebook Chief Revenue Officer David Fischer said Tuesday that the economic models that rely on personalized advertising are “under assault” as Apple readies a change that would limit the ability of Facebook and other companies to target ads and estimate how well they work.

The change to Apple’s identifier for advertisers, or IDFA, will give iPhone users the option to block tracking when opening an app. It was originally planned for iOS 14, the version of the iPhone operating system that was released last month. But Apple said last month it was delaying the rollout until 2021 “to give developers time to make necessary changes.”

Speaking at a virtual Advertising Week session Tuesday, Fischer addressed the changes after being asked about Facebook’s vulnerability to the companies that control mobile platforms, such as Apple and Google, which runs Android.

Fischer argued that though there’s “angst and concern” about the risks of technology, personalized and targeted advertising has been essential to help the internet grow.

“The economic model that not just we at Facebook but so many businesses rely on, this model is worth preserving, one that makes content freely available, and the business that makes it run and hum, is via advertising,” he said.

“And right now, frankly, some of that is under assault, that the very tools that entrepreneurs, that businesses are relying on right now are being threatened. To me, the changes that Apple has proposed, pretty sweeping changes, are going to hurt developers and businesses the most.”

Apple frames the change as preserving users’ privacy, rather than as an attack on the advertising industry, and has been promoting its privacy features as a core reason to get an iPhone. It comes as consumers are increasingly wary about their online privacy following scandals with various companies, including Facebook.

[…]

Source: Facebook revenue chief says ad-supported model is ‘under assault’

Who watches the watchers? Samsung does so it can fling ads at owners of its smart TVs

Samsung brags to advertisers that “first screen ads”, seen by all users of its Smart TVs when they turn on, are 100 per cent viewable, audience targeted, and seen 400 times per TV per month. Some users are not happy.

“Dear Samsung, why are you showing Ads on my Smart TV without my consent? I didn’t agree to this in the privacy settings but I keep on getting this, why?” said a user on Samsung’s TV forum, adding last week that “there is no mention of advertising on any of their brand new boxes”.

As noted by TV site flatpanelshd, a visit to Samsung’s site pitching to advertisers is eye-opening. It is not just that the ads appear, but also that the company continually profiles its customers, using a technology called Automatic Content Recognition (ACR), which works by detecting what kind of content a viewer is watching.

Samsung’s Tom Focetta, VP Ad Sales and Operations in the US, said in an interview: “Our platform is built on the largest source of TV data from more than 50 million smart TVs. And we have amassed over 60 per cent of the US ACR footprint.” Focetta added that ACR data is “not sold, rented or distributed” but used exclusively by Samsung to target advertising.

The first screen ad unit was introduced five years ago, Focetta explained, and the company has since “added video, different types of target audience engagement, different ways to execute in terms of tactics like audience takeovers, roadblocks”. A “roadblock” is defined as “100 per cent ownership of first screen ad impressions across all Samsung TVs”. According to a Samsung support representative, quoted by flatpanelshd: “In general, the banner cannot be deactivated in the Smart Hub.”

Advertising does not stop there since Samsung also offers TV Plus, “a free ad-supported TV service”. Viewers are familiar with this deal, though, since ad-supported broadcasting is long established. What perturbs them is that, in spending a large sum of money on TV hardware, they unknowingly agreed to advertising baked into its operating menus, shown every time they switch on.

The advent of internet-connected TVs means that viewers now divide their time between traditional TV delivered by cable or over the air, and streaming content, with an increasing share going to streaming. Viewers who have cancelled subscription TV services in favour of streaming are known as cord-cutters.

Even viewers who have chosen to watch only ad-free content do not escape. “30 per cent of streamers spend all of their streaming time in non-ad supported apps. This, however, does not mean ‘The Lost 30’ are unreachable,” said Samsung in a paper.

[…]

Source: Who watches the watchers? Samsung does so it can fling ads at owners of its smart TVs • The Register

Blowback Time: China Says TikTok Deal Is A Model For How It Should Deal With US Companies In China

We’ve already covered what a ridiculous, pathetic grift the Oracle/TikTok deal was. Despite it being premised on a “national security threat” from China, because the app might share some data (all of which is easily buyable from data brokers) with Chinese officials, the final deal cured none of that, left the Chinese firm ByteDance with 80% ownership of TikTok, and gave Trump supporters at Oracle a fat contract — and allowed Trump to pretend he did something.

Of course, what he really did was hand China a huge gift. In response to the deal, state media in China is now highlighting how the Chinese government can use it as a model for forcing the restructuring of US tech companies and forcing their data to be controlled by local companies in China. This is from the editor-in-chief of The Global Times, a Chinese state-sponsored newspaper:

The tweet reads:

The US restructuring of TikTok’s stake and actual control should be used as a model and promoted globally. Overseas operation of companies such as Google, Facebook shall all undergo such restructure and be under actual control of local companies for security concerns.

So, beyond doing absolutely nothing to solve the “problem” that politicians in the US laid out, the deal works in reverse. It’s given justification for China to mess with American companies in the same way, and push to expose more data to the Chinese government.

Great work, Trump. Hell of a deal.

Meanwhile, the same Twitter feed says that it’s expected that officials in Beijing are going to reject the deal from their end, and seek to negotiate one even more favorable to China’s “national security interests and dignity.”

So, beyond everything else, Trump’s “deal” has probably done more to help China than anything else – harming data privacy and protection while handing China a justification playbook: “See, we’re just following your lead!”

Source: Blowback Time: China Says TikTok Deal Is A Model For How It Should Deal With US Companies In China | Techdirt

Spain’s highway agency is monitoring speeding hotspots using bulk phone location data – is that even allowed here?

Spain’s highways agency is using bulk mobile phone data for monitoring speeding hotspots, according to local reports.

Equipped with data on customers handed over by local mobile phone operators, Spain’s Directorate-General for Traffic (DGT) may be gathering data on “which roads and at what specific kilometer points the speed limits are usually exceeded,” according to Granadan newspaper Ideal (in Spanish).

“In fact, Traffic has data on this since the end of last year when the National Statistics Institute (INE) reached an agreement with mobile operators to obtain information about the movements of citizens,” reported the paper.

The data-harvesting agreement was first signed late last year to coincide with a national census (as El Reg reported at the time) and is now being used to monitor drivers’ speeds.

National newspaper El Pais reported in October 2019 that the trial would involve dividing Spain “into 3,500 cells with a minimum of 5,000 people in each of them” with the locations of phones being sampled continuously between 9am and 6pm, with further location snapshots being taken at 12am and 6am.

The newspaper explained: “With this information it will be possible to know how many citizens move from a dormitory municipality to a city; how many people work in the same neighbourhood where they live or in a different one; where the people who work in an area come from; or how the population of a cell fluctuates throughout the day.”

The INE insisted that data collected back then had been anonymised and was “aimed at getting a better idea of where Spaniards go during the day and night”, as the BBC summarised the scheme. Mobile networks Vodafone, Movistar, and Orange were all said to be handing over user data to the INE, with the bulk information fetching €500,000 – a sum split between all three firms.

Let me interject here that it’s practically impossible to anonymise data – and location data is incredibly personal, private and dangerous, as we saw when aggregated location data exposed secret US military bases.

In April the initiative was reactivated for the so-called DataCovid plan, where the same type of bulk location data was used to identify areas where Spaniards were ignoring COVID-19 lockdown laws.

“The goal is to analyse the effect which the (confinement) measures have had on people’s movements, and see if people’s movements across the land are increasing or decreasing,” Spain’s government said at the time, as reported by expat news service The Local’s Iberian offshoot.

The DGT then apparently hit on the idea of using speed data derived from cell tower pings (in the same way that Google Maps, Waze, and other online services derive average road speed and congestion information) to identify locations where drivers may have been breaking the speed limit.
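
The article doesn’t say how the DGT computes speeds, but the basic arithmetic behind deriving a speed from location pings is simple: take two timestamped positions reported by the same device and divide the distance travelled by the elapsed time. Here is a minimal sketch of that idea (the types, function names and sample coordinates are illustrative assumptions, not anything published by the DGT, Google or Waze):

```typescript
// Minimal sketch: estimating average speed between two timestamped location
// samples, the basic idea behind deriving road speeds from phone pings.
// All sample values are invented for illustration.

interface Ping {
  lat: number;  // latitude in degrees
  lon: number;  // longitude in degrees
  time: number; // Unix timestamp in seconds
}

// Great-circle distance between two points in kilometres (haversine formula).
function distanceKm(a: Ping, b: Ping): number {
  const toRad = (deg: number) => (deg * Math.PI) / 180;
  const R = 6371; // mean Earth radius in km
  const dLat = toRad(b.lat - a.lat);
  const dLon = toRad(b.lon - a.lon);
  const h =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(a.lat)) * Math.cos(toRad(b.lat)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(h));
}

// Average speed in km/h between two consecutive pings from the same device.
function averageSpeedKmh(a: Ping, b: Ping): number {
  const hours = (b.time - a.time) / 3600;
  return distanceKm(a, b) / hours;
}

// Example: two pings 90 seconds apart on the same stretch of road.
const p1: Ping = { lat: 37.1765, lon: -3.5979, time: 1600000000 };
const p2: Ping = { lat: 37.2041, lon: -3.6120, time: 1600000090 };
console.log(`~${averageSpeedKmh(p1, p2).toFixed(0)} km/h`);
```

Aggregate enough of these per road segment and you get the congestion and speed profiles that navigation apps display – and, apparently, a map of where limits are habitually broken.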

The Ideal news website seemed to put the obvious fears to bed in its report of the traffic police initiative when it posed the rhetorical question: can drivers be fined based on mobile data?

“The answer is clear and direct: it is not possible,” it concluded. “The DGT can only fine us through the fixed and mobile radars that it has installed throughout the country.”

While the direction of travel here seems obvious to anyone with any experience of living in a western country that implements this type of dragnet mass surveillance, so far there is little evidence of an explicit link between mobile phone data-slurping and speed cameras or fines.

Back in 2016, TfL ran a “trial” tracking people’s movements by analysing where their MAC addresses popped up within the Tube network, also hoping to use this data to charge higher prices for advertising spots in busy areas inside Tube stations. Dedicated public Wi-Fi on train platforms is now a permanent fixture in all but a few London Underground stations. The service, operated by Virgin Media, is “free” to use for customers of the four mobile network operators, but collects your mobile number at the point of signing up.

And here you can see the ease with which mission creep comes out and people start using your data for all kinds of non-related things once they have it. This is why we shouldn’t allow governments or anyone else to get their grubby little hands on it and why we should be glad that at least at EU level, data privacy is taken seriously with GDPR and other laws.

Source: Spain’s highway agency is monitoring speeding hotspots using bulk phone location data • The Register

Firefox usage is down 85% despite Mozilla’s top exec pay going up 400%

Mozilla recently announced that they would be dismissing 250 people. That’s a quarter of their workforce so there are some deep cuts to their work too. The victims include: the MDN docs (those are the web standards docs everyone likes better than w3schools), the Rust compiler and even some cuts to Firefox development. Like most people I want to see Mozilla do well but those three projects comprise pretty much what I think of as the whole point of Mozilla, so this news is a big letdown.

The stated reason for the cuts is falling income. Mozilla largely relies on “royalties” for funding. In return for payment, Mozilla allows big technology companies to choose the default search engine in Firefox – the technology companies are ultimately paying to increase the number of searches Firefox users make with them. Mozilla haven’t been particularly transparent about why these royalties are being reduced, except to blame the coronavirus.

I’m sure the coronavirus is not a great help but I suspect the bigger problem is that Firefox’s market share is now a tiny fraction of its previous size and so the royalties will be smaller too – fewer users, so fewer searches and therefore less money for Mozilla.

The real problem is not the royalty cuts, though. Mozilla has already received more than enough money to set themselves up for financial independence. Mozilla received up to half a billion dollars a year (each year!) for many years. The real problem is that Mozilla didn’t use that money to achieve financial independence and instead just spent it each year, doing the organisational equivalent of living hand-to-mouth.

Despite their slightly contrived legal structure as a non-profit that owns a for-profit, Mozilla are an NGO just like any other. In this article I want to apply the traditional measures that are applied to other NGOs to Mozilla in order to show what’s wrong.

These three measures are: overheads, ethics and results.

Overheads

One of the most popular and most intuitive ways to evaluate an NGO is to judge how much of their spending is on their programme of works (or “mission”) and how much is on other things, like administration and fundraising. If you give money to a charity for feeding people in the third world you hope that most of the money you give them goes on food – and not, for example, on company cars for head office staff.

Mozilla looks bad when considered in this light. Fully 30% of all expenditure goes on administration. Charity Navigator, an organisation that measures NGO effectiveness, would give them zero out of ten on the relevant metric. For context, to achieve 5/10 on that measure Mozilla admin would need to be under 25% of spending and, for 10/10, under 15%.

Senior executives have also done very well for themselves. Mitchell Baker, Mozilla’s top executive, was paid $2.4m in 2018, a sum I personally think of as instant inter-generational wealth. Payments to Baker have more than doubled in the last five years.

As far as I can find, there is no UK-based NGO whose top executive makes more than £1m ($1.3m) a year. The UK certainly has its fair share of big international NGOs – many much bigger and more significant than Mozilla.

I’m aware that some people dislike overheads as a measure and argue that it’s possible for administration spending to increase effectiveness. I think it’s hard to argue that Mozilla’s overheads are correlated with any improvement in effectiveness.

Ethics

Mozilla now thinks of itself less as a custodian of the old Netscape suite and more as a ‘privacy NGO’. One slogan inside Mozilla is: “Beyond the Browser”.

Regardless of how they view themselves, most of their income comes from helping to direct traffic to Google by making that search engine the default in Firefox. Google make money off that traffic via a big targeted advertising system that tracks people across the web, largely without their consent. Indeed, one of the reasons this income is falling is that, as Firefox’s usage falls, less traffic is directed Google’s way and so Google will pay less.

There is, as yet, no outbreak of agreement among the moral philosophers as to a universal code of ethics. However I think most people would recognise hypocrisy in Mozilla’s relationship with Google. Beyond the ethical problems, the relationship certainly seems to create conflicts of interest. Anyone would think that a privacy NGO would build anti-tracking countermeasures into their browser right from the start. In fact, this was only added relatively recently (in 2019), after both Apple (in 2017) and Brave (since release) paved the way. It certainly seems like Mozilla’s status as a Google vassal has played a role in the absence of anti-tracking features in Firefox for so long.

Another ethical issue is Mozilla’s big new initiative to move into VPNs. This doesn’t make a lot of sense from a privacy point of view. Broadly speaking: VPNs are not a useful privacy tool for people browsing the web. A VPN lets you access the internet through a proxy – so your requests superficially appear to come from somewhere other than they really do. This does nothing to address the main privacy problem for web users: that they are being passively tracked and de-anonymised on a massive scale by the baddies at Google and elsewhere. This tracking happens regardless of IP address.

When I tested Firefox through Mozilla VPN (a rebrand of Mullvad VPN) I found that I could be de-anonymised by browser fingerprinting – already a fairly widespread technique by which various elements of your browser are examined to create a “fingerprint” which can then be used to re-identify you later. Firefox, unlike some other browsers, does not include any countermeasures against this.

[Image: Panopticlick test results showing that Firefox has a unique browser fingerprint.]
Even when using Mozilla’s “secure and private” VPN, Firefox is trackable by browser fingerprinting, as demonstrated by the EFF’s Panopticlick tool. Other browsers use randomised fingerprints as a countermeasure against this tracking.
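
For readers unfamiliar with the technique, fingerprinting needs no cookies and no IP address: a script simply combines attributes the browser exposes anyway and hashes them into an identifier. The sketch below is a simplified illustration of the principle; real trackers (and tests like Panopticlick) use many more signals such as canvas rendering, installed fonts, audio and WebGL, and the attribute list and hashing scheme here are my own assumptions:

```typescript
// Simplified sketch of browser fingerprinting: combine attributes the browser
// exposes anyway and hash them into a (fairly) stable identifier.
// Illustrative only – real trackers use far more signals than these.

async function browserFingerprint(): Promise<string> {
  const signals = [
    navigator.userAgent,
    navigator.language,
    navigator.hardwareConcurrency,
    screen.width,
    screen.height,
    screen.colorDepth,
    new Date().getTimezoneOffset(),
  ].join("|");

  // Hash the combined signals with SHA-256 via the Web Crypto API.
  const bytes = new TextEncoder().encode(signals);
  const digest = await crypto.subtle.digest("SHA-256", bytes);
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}

// Unless the browser randomises or normalises these values – the countermeasure
// mentioned above – the same fingerprint reappears on every visit, VPN or not.
browserFingerprint().then((fp) => console.log(fp));
```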

Another worry is that many of these privacy-focused VPN services have a nasty habit of turning out to keep copious logs on user behaviour. A few months ago, in a massive breach, several “no log” VPN services inadvertently exposed terabytes of private user data that they had promised not to collect. VPN services are in a great position to eavesdrop – and even if they promise not to, your only option is to take them at their word.

Results

I’ve discussed the Mozilla chair’s impressive pay: $2.4m/year. Surely such impressive pay is justified by the equally impressive results Mozilla has achieved? Sadly on almost every measure of results both quantitative and qualitative, Mozilla is a dog.

Firefox is now so niche it is in danger of garnering a cult following: it has just 4% market share, down from 30% a decade ago. Mobile browsing numbers are bleak: Firefox barely exists on phones, with a market share of less than half a percent. This is baffling given that mobile Firefox has a rare feature for a mobile browser: it’s able to install extensions and so can block ads.

Yet despite the problems within their core business, Mozilla, instead of retrenching, has diversified rapidly. In recent years Mozilla has created:

  • a mobile app for making websites
  • a federated identity system
  • a large file transfer service
  • a password manager
  • an internet-of-things framework/standard
  • an email relay service
  • a completely new phone operating system
  • an AI division (but of course)
  • and spent $25 million buying the reading list management startup, Pocket

Many of the above are now abandoned.

Sadly Mozilla’s annual report doesn’t break down expenses on a per-project basis so it’s impossible to know how much of the spending that is on Mozilla’s programme is being spent on Firefox and how much is being spent on all these other side-projects.

What you can at least infer is that the side-projects are expensive. Software development always is. Each of the projects named above (and all the other ones that were never announced or that I don’t know about) will have required business analysts, designers, user researchers, developers, testers and all the other people you need in order to create a consumer web project.

The biggest cost of course is the opportunity cost of just spending that money on other stuff – or nothing: it could have been invested to build an endowment. Now Mozilla is in the situation where apparently there isn’t enough money left to fully fund Firefox development.

What now?

Mozilla can’t just continue as before. At the very least they need to reduce their expenses to go along with their now reduced income. That income is probably still pretty enormous though: likely hundreds of millions a year.

I’m a Firefox user (and one of the few on mobile, apparently) and I want to see Mozilla succeed. As such, I would hope that Mozilla would cut their cost of administration. I’d also hope that they’d increase spending on Firefox to make it faster and implement those privacy features that other browsers have. Most importantly: I’d like them to start building proper financial independence.

I doubt those things will happen. Instead they will likely keep the expensive management. They have already cut spending on Firefox. Their great hope is to continue trying new things, like using their brand to sell VPN services that, as I’ve discussed, do not solve the problem that their users have.

Instead of diversifying into yet more products and services Mozilla should probably just ask their users for money. For many years the Guardian newspaper (a similarly sized organisation to Mozilla in terms of staff) was a financial basket case. The Guardian started asking their readers for money a few years ago and seems to be on firmer financial footing since.

Getting money directly has also helped align the incentives of their organisation with those of their readers. Perhaps that would work for Mozilla. But then, things are different at the Guardian. Their chief exec makes a mere £360,000 a year.

Source: Firefox usage is down 85% despite Mozilla’s top exec pay going up 400%

MS Edge and Google Chrome are winning the renewed browser wars, and this kind of financial game-playing isn’t helping Firefox, which I really want to win on ethical grounds.

Facebook says it may quit Europe over ban on sharing data with US

Facebook has warned that it may pull out of Europe if the Irish data protection commissioner enforces a ban on sharing data with the US, after a landmark ruling by the European court of justice found in July that there were insufficient safeguards against snooping by US intelligence agencies.

In a court filing in Dublin, Facebook’s associate general counsel wrote that enforcing the ban would leave the company unable to operate.

“In the event that [Facebook] were subject to a complete suspension of the transfer of users’ data to the US,” Yvonne Cunnane argued, “it is not clear … how, in those circumstances, it could continue to provide the Facebook and Instagram services in the EU.”

Facebook denied the filing was a threat, arguing in a statement that it was a simple reflection of reality. “Facebook is not threatening to withdraw from Europe,” a spokesperson said.

“Legal documents filed with the Irish high court set out the simple reality that Facebook, and many other businesses, organisations and services, rely on data transfers between the EU and the US in order to operate their services. A lack of safe, secure and legal international data transfers would damage the economy and hamper the growth of data-driven businesses in the EU, just as we seek a recovery from Covid-19.”

The filing is the latest volley in a legal battle that has lasted almost a decade. In 2011, Max Schrems, an Austrian lawyer, began filing privacy complaints with the Irish data protection commissioner, which regulates Facebook in the EU, about the social network’s practices.

Those complaints gathered momentum two years later, when the Guardian revealed the NSA’s Prism program, a vast surveillance operation involving direct access to the systems of Google, Facebook, Apple and other US internet companies. Schrems filed a further privacy complaint, which was eventually referred to the European court of justice.

That court found in 2015 that, because of the existence of Prism, the “Safe Harbour” agreement, which allowed US companies to transfer the data of EU citizens back home, was invalid.

The EU then attempted a second legal agreement for the data transfers, a so-called privacy shield; that too was invalidated in July this year, with the court again ruling that the US does not sufficiently limit surveillance of EU citizens.

In September, the Irish data protection commissioner began the process of enforcing that ruling. The commissioner issued a preliminary order compelling the social network to suspend data transfers overseas.

In response, Nick Clegg, the company’s head of global affairs and communications, published a blogpost that argued that “international data transfers underpin the global economy and support many of the services that are fundamental to our daily lives”.

“In the worst-case scenario, this could mean that a small tech start-up in Germany would no longer be able to use a US-based cloud provider,” he wrote. “A Spanish product development company could no longer be able to run an operation across multiple time zones. A French retailer may find they can no longer maintain a call centre in Morocco.”

Clegg added: “We support global rules that can ensure consistent treatment of data around the world.”

Source: Facebook says it may quit Europe over ban on sharing data with US | Technology | The Guardian

Yep, Mr Clegg. But the law is the law. And it’s a good law. Having EU citizens’ private data in the hands of the megalomaniac 4th Reich US government is not a good idea – in the EU, people like the idea of having rights and privacy.

Trump Pushes to Reap Extensive Biometric Data From Immigrants, Americans – and never delete it

Six million would-be U.S. immigrants face expanded collection of their biometric data, including iris scans, palm-, and voice-prints, facial recognition images, and DNA, under a proposed federal rule. The Department of Homeland Security also for the first time would gather that data from American citizens sponsoring or benefiting from a visa application.

Years in the making, the biometrics immigration rule has garnered more than 160 comments since its Sept. 11 publication. The 30-day comment period closes on Oct 13. A final version could be in place by Inauguration Day.

Immigration and privacy advocates have voiced concerns over who will have to comply with the new requirements, why President Donald Trump is making this push so late in his term, and what it means for a federal agency already claiming a lack of resources.

“The only words to describe this proposed rule is breathtaking,” said Doug Rand, who worked on technology and immigration policy in the Obama White House and then joined the Federation of American Scientists. “It’s clearly designed to drastically expand surveillance of immigrants, U.S. citizens, employers.”

The 300-plus-page plan updates current biometrics requirements so that “any applicant, petitioner, sponsor, beneficiary, or individual filing or associated with an immigration benefit or request, including U.S. citizens, must appear for biometrics collection without regard to age unless the agency waives or exempts the requirement.”

Under the rule, the DHS estimates an additional 2.17 million biometrics submissions will be collected annually, on top of the current 3.9 million.

[…]

The DHS already collects fingerprints from some visa applicants. The new rule would expand that biometrics-gathering to iris images, palm- and voice-prints. The agency wants authority to require or request DNA testing to prove familial relationships where kinship is in question. The DNA data could be stored indefinitely, under the proposed rule.

[…]

While the current proposal doesn’t expressly reference employers, that doesn’t mean it couldn’t be applied to employer-backed visa holders down the road, said Michael Nowlan, co-leader of Clark Hill’s Immigration Business unit. “It’s just amazing to me how broad this is.”

One potential scenario for employers petitioning for visa-holding workers or sponsoring foreign workers for green cards is that legal counsel or even a human resources officer may be required to submit biometrics on the company’s behalf.

[…]

Should Trump win re-election, his administration can use this period of uncertainty to accelerate this regulation and carry it out in the new year. If Trump loses, and his team makes it final before Democrat Joe Biden takes office, it’s a “huge headache” for the next administration, Rand said.

“It’s basically like burning down the house on your way out,” Rand said.

Source: Trump Pushes to Reap Biometric Data From Immigrants, Americans

This kind of data is dangerous in and of itself. Keeping it in a centralised database is a horrible idea – history has shown us again and again that such databases are abused and unsafe. And this is data that the people themselves, as well as their families and descendants, can’t change. Ever.

Facebook Accused of Watching Instagram Users Through Cameras. FB claims “bug”

Facebook is again being sued for allegedly spying on Instagram users, this time through the unauthorized use of their mobile phone cameras. Bloomberg reports: The lawsuit springs from media reports in July that the photo-sharing app appeared to be accessing iPhone cameras even when they weren’t actively being used. Facebook denied the reports and blamed a bug, which it said it was correcting, for triggering what it described as false notifications that Instagram was accessing iPhone cameras.

In the complaint filed Thursday in federal court in San Francisco, New Jersey Instagram user Brittany Conditi contends the app’s use of the camera is intentional and done for the purpose of collecting “lucrative and valuable data on its users that it would not otherwise have access to.” By “obtaining extremely private and intimate personal data on their users, including in the privacy of their own homes,” Instagram and Facebook are able to collect “valuable insights and market research,” according to the complaint.

Source: Facebook Accused of Watching Instagram Users Through Cameras – Slashdot

Google bans stalkerware apps from Android store. Which is cool but… why were they allowed in the first place?

In an update to its Android Developer Program Policy, Google on Wednesday said stalkerware apps in its app store can no longer be used to stalk non-consenting adults.

Stalkerware, which the web giant defines as “code that transmits personal information off the device without adequate notice or consent and doesn’t display a persistent notification that this is happening,” may still be used for keeping track of one’s kids.

But starting October 1, 2020, the ad biz says it’s no longer acceptable for Android apps in the Google Play Store to track another person, such as a spouse, without permission, unless there’s a persistent visible notification that data is being transmitted.

The ban follows a similar prohibition in August on Google-served ads for “spyware and technology used for intimate partner surveillance,” which reportedly hasn’t worked very well.

In recent years, computer security experts have argued that the privacy and security risks in intimate relationships haven’t been adequately anticipated or addressed.

But rules against invasive behavior aren’t necessarily effective. Via Twitter, Michael Veale, a lecturer at University College London, observed that a 2018 research paper “found that ‘abusers frequently exploit dual-use applications—tools whose main purpose is legitimate but that can be easily repurposed to function as spyware,’ so banning explicit stalkerware [is] of questionable efficacy.”

Google will continue to allow non-stalkerware apps (i.e. policy compliant apps) to monitor and track people, provided the programs are not marketed as surveillance apps, they disclose any such functions, and they present the requisite persistent notification and icon.

Monitoring apps of the permissible sort continue to be subject to removal for violating applicable laws in the locations where they’re published, and may not link to resources (e.g. servers, SDKs) that provide policy violating functions or non-compliant APKs hosted outside the Google Play Store.

Google’s developer policy update also includes a ban on misrepresentation, both for apps and developer accounts. Apps or accounts that impersonate a person or organization, or attempt to conceal the app’s purpose or ownership, or engage in coordinated misleading activity, are no longer allowed.

Source: Google bans stalkerware apps from Android store. Which is cool but… why were they allowed in the first place? • The Register

To answer the question: The tech giants will do almost anything to get your location information because it allows them to know and control you better.

The Weather Channel app settles suit over selling location data of 49m people without consent

Private Intel Firm Buys Location Data to Track People to their ‘Doorstep’ sourced from innocuous seeming apps

How Location Tracking Actually Works on Your Smartphone (and how to manipulate it – kind of)

Google collects Android location data even if you turn it off and don’t have a SIM card inserted

US carmakers collect and keep driven locations

And some more links

The Weather Channel app settles suit over selling location data of 49m people without consent

IBM and the Los Angeles city attorney’s office have settled a privacy lawsuit brought after The Weather Channel app was found to be selling user location data without proper disclosure. The lawsuit was filed last year, at which point the app had 45 million active users.

IBM has changed the way that users are informed, and also agreed to donate $1M worth of technology to assist LA County with its coronavirus contact tracing efforts …

Associated Press reports.

The operator of The Weather Channel mobile app has agreed to change how it informs users about its location-tracking practices and sale of personal data as part of a settlement with the Los Angeles city attorney’s office, officials said Wednesday.

City Attorney Mike Feuer alleged in a 2019 lawsuit that app users were misled when they agreed to share their location information in exchange for personalized forecasts and alerts. Instead, the lawsuit claimed users were unaware they had surrendered personal privacy when the company sold their data to third parties.

Feuer announced the settlement Wednesday with the app’s operator, TWC Product and Technology LLC, and owner IBM Corp. The app’s disclosure screens were initially revised after the lawsuit was filed and future changes that will be monitored by the city attorney’s office are planned.

Source: The Weather Channel app settles suit over selling location data – 9to5Mac

Italy is investigating Apple, Google and Dropbox cloud storage services

Italy’s competition watchdog is investigating Apple, Google and Dropbox, TechCrunch reports. In a press release, the AGCM announced that it opened six investigations into the companies’ cloud storage services: Google Drive, iCloud and Dropbox.

The authority is concerned that the services fail to adequately explain how user data will be collected and used for commercial purposes. It’s also investigating unfair clauses in the services’ contracts, terms that exempt the services from some liability and the prevalence of English versions of contracts over Italian versions.

In July, Italy launched an antitrust investigation into Amazon and Apple over Beats headphones. Authorities want to know whether the two companies agreed to prevent retailers outside of Apple’s official program from selling Beats and other Apple products.

Big tech companies are facing increased pressure from antitrust regulators in the US and Europe. The US Department of Justice may present its case against Google later this month. Apple is in a battle with Epic over its App Store rules, and the antitrust case against Amazon keeps getting stronger. It’s hard to say how effective any of these investigations will be at changing the industry’s behavior.

Source: Italy is investigating Apple, Google and Dropbox cloud storage services | Engadget

This is why monopolies are bad

Australia starts second fight with Google and Apple, this time over whether app stores leak data, gouge devs, steal ideas and warp markets

Australia, already embroiled in a nasty fight with Google and Facebook over its plan to make them pay for news links, has opened an inquiry into whether Apple’s and Google’s app stores offer transparent pricing and whether consumers’ data is used in worrying ways.

The issues paper [PDF] outlining the scope of the inquiry names only Apple and Google as of interest. The paper also mentions the recent Apple/Epic spat over developer fees to access the app store and proposes to ponder sideloading as a means of bypassing curated stores.

The Australian Competition and Consumer Commission, which will conduct the inquiry, has set out the following matters it wishes to probe:

  1. The ability and incentive for Apple and Google to link or bundle their other goods and services with their app marketplaces, and any effect this has on consumers and businesses.
  2. How Apple and Google’s various roles as the key suppliers of app marketplaces, but also as app developers, operators of the mobile licensing operating system and device manufacturers affect the ability of third party app providers to compete, including the impact of app marketplace fee structures on rivals’ costs.
  3. Terms, conditions and fees (including in-app purchases) imposed on businesses to place apps on app marketplaces.
  4. The effect of app marketplace fee structures on innovation.
  5. How app marketplaces determine whether an app is allowed on their marketplace, and the effect of this on app providers, developers and consumers;
  6. How where an app is ranked in an app marketplace is determined.
  7. The collection and use of consumer data by app marketplaces, and whether consumers are sufficiently informed about and have control over the extent of data that is collected.
  8. Whether processes put in place by app marketplaces to protect consumers from harmful apps are working.

The document also reveals an intention to probe whether app store operators “identify which product development ideas are successful and emulate these ideas in their own apps” and seeks “views on the data sharing arrangements between apps and app marketplaces, and any views on the potential for app marketplaces to use data to identify, and respond to, potential competitors to the marketplace’s own apps.”

The Commission has created a survey for consumers and another for developers. The latter asks for comment on “adequacy of communications from the app store during the review process” and the experience of appealing decisions. Which should make for some tasty reading once the inquiry reports in March 2021.

The ACCC lists “legislative reform to address systemic issues” as one possible outcome from the inquiry. Which would be tastier still, given the furor over Australia’s current proposed laws.

Source: Australia starts second fight with Google, this time over whether app stores leak data, gouge devs, steal ideas and warp markets • The Register

I spoke of this in Zagreb at Dors/Cluc 2019 – it’s interesting to see how this is being picked up all over the world

7 years later, US court deems NSA bulk phone-call snooping illegal, possibly unconstitutional, and probably pointless anyway

The United States Court of Appeals for the Ninth Circuit has ruled [PDF] that the National Security Agency’s phone-call slurping was indeed naughty, seven years after former contractor Edward Snowden blew the whistle on the tawdry affair.

It’s been a long time coming, and while some might view the decision as a slap for officials that defended the practice, the three-judge panel said the part played by the NSA programme wasn’t sufficient to undermine the convictions of four individuals for conspiring to send funds to Somalia in support of a terrorist group.

Snowden made public the existence of the NSA data collection programmes in June 2013, and by June 2015 US Congress had passed the USA FREEDOM Act, “which effectively ended the NSA’s bulk telephony metadata collection program,” according to the panel.

The panel took a long, hard look at the metadata collection programme, which slurped the telephony of millions of Americans (as well as at least one of the defendants) and concluded that not only had the Fourth Amendment of the constitution likely been violated, it certainly flouted section 1861 of the Foreign Intelligence Surveillance Act (FISA), which deals with access to business records in foreign intelligence and international terrorism investigations.

“On the merits,” the ruling said, “the panel held that the metadata collection exceeded the scope of Congress’s authorization in 50 U.S.C. § 1861, which required the government to make a showing of relevance to a particular authorized investigation before collecting the records, and that the program therefore violated that section of FISA.”

So, both illegal and quite possibly unconstitutional.

It isn’t a good look for the intelligence services. The panel was able to study the classified records and noted that “the metadata did not and was not necessary to support the requisite probable cause showing for the FISA Subchapter I warrant application in this case.”

The panel went on to administer a light slapping to those insisting that the metadata programme was an essential element in the case. The evidence, such as it was, “did not taint the evidence introduced by the government at trial,” the panel observed before going on to say: “To the extent the public statements of government officials created a contrary impression, that impression is inconsistent with the contents of the classified record.”

Thus not only illegal and possibly unconstitutional, but also not particularly helpful in this instance, no matter what officials might have insisted.

While the American Civil Liberties Union (ACLU) declared the ruling “a victory for our privacy rights”, the process could have a while to run yet, including a trip to America’s Supreme Court.

Source: US court deems NSA bulk phone-call snooping illegal, possibly unconstitutional, and probably pointless anyway • The Register

After Facebook Balks, Apple Delays “Privacy” (i.e. only Apple spies on you) Feature

In June, Apple unveiled plans for an iOS 14 privacy update that forces developers to gather users’ consent before tracking their activities across third-party apps and websites. Needless to say, giving users more control over how their information is gathered and trafficked is expected to bruise advertisers—especially Facebook, which uses that information to narrow its targeting functions.

As the initial autumn deadline closed in, Facebook protested last week that the change could render Facebook’s Audience Network—its ad service offered to third-party apps—“so ineffective on iOS 14 that it may not make sense to offer it on iOS 14 in the future.” The company claimed that blocking personalization is expected to cut Audience Network revenue by half or more, and that the move would hurt the over 19,000 developers who work with Facebook, many of which are “small businesses that depend on ads to support their livelihood.”

Apple’s messaging to users, as illustrated in the latest promo images for iOS 14, doesn’t give surveillance a nice ring. It will tell you bluntly that such-and-such app “would like permission to track you across apps and websites owned by other companies.” Apple pointed out to Gizmodo that it still embraces in-app advertising and does not prohibit tracking. In fact, Facebook can still gather that data (using Apple’s advertiser ID), if it’s willing to ask iOS users to agree to be tracked (using that scary messaging.) But both Apple and Facebook know that the data collection business operates more smoothly when begging for forgiveness later rather than asking permission now. If not, companies wouldn’t have mastered the art of doublespeak and constructed labyrinthine settings menus.

Apple, on the other hand, will still be able to benefit from gathering your information in various ways without asking permission because Apple doesn’t necessarily need to share or gather your information with data brokers and outside companies—your data is already growing organically within Apple’s walled garden. For example, Apple might show you an ad for a weight loss app in the App Store based on the fact that you read an article from a lifestyle publication in the Apple News app—a function which is automatically enabled, and can be toggled off, under “Apple Advertising.” Similarly, Apple says that developers can use data gained from activity within their own apps through Apple’s vendor-specific identifier. (Apple says that the “tracking” prompt would still show up if Apple-created apps intend to share information beyond Apple.)

But it’s hard to imagine a competing vendor that would have access to such a sprawling network of native data, aside from Google, which has its own devices and browser and advertiser ID. And sticking the notification on Facebook polishes Apple’s self-fashioned reputation as a big tech company which values privacy. (It is not.)

[…]

Apple says that now apps won’t need to ask users permission to be tracked until 2021, “to give developers time to make necessary changes.” Apple will also require developers to submit details on the data their apps collect—including “sensitive information” such as race, sexual orientation, disability, and political affiliation—which will be published in the App Store later this year.

Source: After Facebook Balks, Apple Delays Privacy Feature

Private Intel Firm Buys Location Data to Track People to their ‘Doorstep’ sourced from innocuous seeming apps

A threat intelligence firm called HYAS, a private company that tries to prevent or investigate hacks against its clients, is buying location data harvested from ordinary apps installed on peoples’ phones around the world, and using it to unmask hackers. The company is a business, not a law enforcement agency, and claims to be able to track people to their “doorstep.”

The news highlights the complex supply chain and sale of location data, traveling from apps whose users are in some cases unaware that the software is selling their location, through to data brokers, and finally to end clients who use the data itself. The news also shows that while some location firms repeatedly reassure the public that their data is focused on the high level, aggregated, pseudonymous tracking of groups of people, some companies do buy and use location data from a largely unregulated market explicitly for the purpose of identifying specific individuals.

HYAS’ location data comes from X-Mode, a company that started with an app named “Drunk Mode,” designed to prevent college students from making drunk phone calls and has since pivoted to selling user data from a wide swath of apps. Apps that mention X-Mode in their privacy policies include Perfect365, a beauty app, and other innocuous looking apps such as an MP3 file converter.

“As a TI [threat intelligence] tool it’s incredible, but ethically it stinks,” a source in the threat intelligence industry who received a demo of HYAS’ product told Motherboard. Motherboard granted the source anonymity as they weren’t authorized by their company to speak to the press.

[…]

HYAS differs in that it provides a concrete example of a company deliberately sourcing mobile phone location data with the intention of identifying and pinpointing particular people and providing that service to its own clients. Independently of Motherboard, the office of Senator Ron Wyden, which has been investigating the location data market, also discovered HYAS was using mobile location data. A Wyden aide said they had spoken with HYAS about the use of the data. HYAS said the mobile location data is used to unmask people who may be using a Virtual Private Network (VPN) to hide their identity, according to the Wyden aide.

In a webinar uploaded to HYAS’ website, Todd Thiemann, VP of marketing at the company, describes how HYAS used location data to track a suspected hacker.

“We found out it was the city of Abuja, and on a city block in an apartment building that you can see down there below,” he says during the webinar. “We found the command and control domain used for the compromised employees, and used this threat actor’s login into the registrar, along with our geolocation granular mobile data to confirm right down to his house. We also got his first and last name, and verified his cellphone with a Nigerian mobile operator.”

[Image: A screenshot of a webinar given by HYAS, in which the company explains how it has used mobile application location data.]

On its website, HYAS claims to have some Fortune 25 companies, large tech firms, as well as law enforcement and intelligence agencies as clients.

[…]

Customers can include banks who want to get a heads-up on whether a freshly dumped cache of stolen credit card data belongs to them; a retailer trying to protect themselves from hackers; or a business checking if any of their employees’ login details are being traded by cybercriminals.

Some threat intelligence companies also sell services to government agencies, including the FBI, DHS, and Secret Service. The Department of Justice often acknowledges the work of particular threat intelligence companies in the department’s announcement of charges or indictments against hackers and other types of criminals.

But some other members of the threat intelligence industry criticized HYAS’ use of mobile app location data. The CEO of another threat intelligence firm told Motherboard that their company does not use the same sort of information that HYAS does.

The threat intelligence source who originally alerted Motherboard to HYAS recalled “being super shook at how they collected it,” referring to the location data.

A senior employee of a third threat intelligence firm said that location data is not hard to buy.

[…]

Motherboard found several location data companies that list HYAS in their privacy policies. One of those is X-Mode, a company that plants its own code into ordinary smartphone apps to then harvest location information. An X-Mode spokesperson told Motherboard in an email that the company’s data collecting code, or software development kit (SDK), is in over 400 apps and gathers information on 60 million global monthly users on average. X-Mode also develops some of its own apps which use location data, including parental monitoring app PlanC and fitness tracker Burn App.

“Whatever your need, the XDK Visualizer is here to show you that our signature SDK is too legit to quit (literally, it’s always on),” reads the description for another of X-Mode’s own apps, which visualizes the company’s data collection to attract clients.

“They’re like many location trackers but seem more aggressive to be honest,” Will Strafach, founder of the app Guardian, which alerts users to other apps accessing their location data, told Motherboard in an online chat. In January, X-Mode acquired the assets of Location Sciences, another location firm, expanding X-Mode’s dataset.

[…]

Motherboard then identified a number of apps whose own privacy policies mention X-Mode. They included Perfect365, a beauty-focused app that people can use to virtually try on different types of makeup with their device’s camera.

[…]

Various government agencies have bought access to location data from other companies. Last month, Motherboard found that U.S. Customs and Border Protection (CBP) paid $476,000 to a firm that sells phone location data. CBP has used the data to scan parts of the U.S. border, and the Internal Revenue Service (IRS) tried to use the same data to track criminal suspects but was unsuccessful.

Source: Private Intel Firm Buys Location Data to Track People to their ‘Doorstep’

COVID-19 tracing without an app? Google and Apple will ram it down your throat

Google and Apple have updated their COVID-19 contact-tracing tool to make it possible to notify users of potential exposures to the novel coronavirus without an app.

The new Exposure Notifications Express spec is baked into iOS 13.7, which emerged this week and will appear in an Android update due later this month.

This is not, repeat not, pervasive Bluetooth surveillance. The tool requires users to opt in, although public health authorities can use the tool to send notifications suggesting that residents do so.

Those who choose to participate agree to have their device use Bluetooth to search for other nearby opted-in devices, with an exchange of anonymised identifiers used to track encounters. If a user tests positive, and agrees to notify authorities, other users will be told that they are at risk and should act accordingly.
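
As a rough illustration of that design – and emphatically not the actual Apple/Google Exposure Notifications cryptography, which uses rotating daily keys, AES-derived identifiers and strict time windows – the matching idea can be sketched as follows. Every name and parameter below is made up for the example:

```typescript
// Simplified sketch of decentralised exposure matching: phones broadcast
// short-lived anonymous IDs derived from a local secret, remember the IDs
// they hear, and later check those against IDs re-derived from the published
// secrets of people who tested positive. Not the real protocol.
import { createHash, randomBytes } from "crypto";

// Derive a day's worth of short-lived anonymous identifiers from a secret.
function dailyIdentifiers(secret: Buffer, day: string, slots = 144): string[] {
  return Array.from({ length: slots }, (_, i) =>
    createHash("sha256").update(secret).update(`${day}:${i}`).digest("hex").slice(0, 32)
  );
}

const mySecret = randomBytes(32);            // never leaves the phone unless the user consents
const heardIdentifiers = new Set<string>();  // filled with IDs received over Bluetooth

// Example ID this phone would broadcast today.
console.log(dailyIdentifiers(mySecret, "2020-09-02")[0]);

// If someone tests positive and consents, their recent secrets are published.
// Each phone re-derives that person's identifiers locally and checks for overlap,
// so no central service ever learns who met whom.
function wasExposed(publishedSecret: Buffer, days: string[]): boolean {
  return days.some((day) =>
    dailyIdentifiers(publishedSecret, day).some((id) => heardIdentifiers.has(id))
  );
}

console.log(wasExposed(randomBytes(32), ["2020-09-01", "2020-09-02"]));
```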

The update is designed to let health authorities use Bluetooth-powered contact-tracing without having to build their own apps. It’s still non-trivial to deploy, as the system requires one server to verify test results and another to run both contact-tracing apps and the app-free service.

Apple has published a succinct explainer and Google has offered up code for a notifications server on GitHub.

A couple of dozen US states have signed up for the new tool but other jurisdictions – among them India, Singapore and Australia – are persisting with their own approaches on the basis that the Apple/Google tech makes it harder for their manual contact-tracers to access information.

Source: COVID-19 tracing without an app? There’s an iOS and Android update for that • The Register

Considering the work both companies do with China and other friendly states, it would not surprise me if the “user opt in” feature becomes an “all users opt in without their knowing because the state is the people and the state knows best” feature in some places.

US Border Patrol Says They Can Create Central Repository Of Traveler Emails, calendar, etc, Keep Them For 75 Years

The U.S. government has taken the opportunity during the global pandemic, when people aren’t traveling out of the country much, to roll out a new platform for storing information they believe they are entitled to take from people crossing the border. A new filing reveals how the U.S. Border Patrol will store data from traveler devices centrally, keeping it backed up and searchable for up to 75 years.

On July 30 the Department of Homeland Security published a privacy impact assessment detailing the electronic data that they may choose to collect from people crossing the border – and what happens to that data.

  • Border Patrol claims the right to search “laptops, thumb drives, cell phones, and other devices capable of storing electronic information”, and when they call it a ‘border search’ they can do this not just when you’re “crossing the U.S. border” in either direction (i.e. when you’re leaving, not just when you’re entering the country) but even “at the extended border”, which generally means within 100 miles of the border – an area where two-thirds of the U.S. population lives.
  • They needed an updated privacy impact assessment because of a new “enterprise-wide solution to manage and analyze certain types of information and metadata USBP collects from electronic devices” – data they actually keep on file.

Border Patrol will “acquire a mirror copy of the data on the device” they take from a traveler and store it locally. Before uploading it to their network they check to make sure there’s no porn on it (so they search your devices to find porn first). Then once they’ve determined it’s “clean” they transfer the data first to an encrypted thumb drive and then to the Border Patrol-side system called PLX.

Examples of what they plan to keep from travelers’ devices include e-mails; videos and pictures; texts and chat messages; financial accounts and transactions; location history; web browser bookmarks; tasks list; calendar; call logs; contacts. Information is stored for 75 years although if it’s not related to any crime it may be deleted after 20 years.

The government emphasizes they’ve been collecting this information; what’s changed is simply that they’ll be storing it in a central system where everything “will now be accessible to a larger number of USBP agents with no nexus” to suspected illegal activity. They promise, though, to restrict access and train staff not to do anything they aren’t supposed to. And they don’t see risk to privacy because they’ve published a notice (that I’m now writing about) telling you how your privacy may be violated.

Electronic device searches have been on the rise. Between October 2008 and June 2010, 6,500 devices were searched. In 2016 there were 10,000 device searches, and 30,200 in 2017.

It’s not clear though that these searches are all actually legal. In November 2019 a federal judge in Boston ruled that forensic searches of cell phones require at least reasonable suspicion “that the devices contain contraband.”

Source: US Border Patrol Says They Can Create Central Repository Of Traveler Emails, Keep Them For 75 Years – View from the Wing

235 Million Instagram, TikTok And YouTube User Profiles Exposed In Massive Data Leak

It was just such an unsecured database that the Comparitech researchers, led by Bob Diachenko, discovered on August 1, leaving the personal profile data of nearly 235 million Instagram, TikTok and YouTube users up for grabs.

The data was spread across several datasets, the most significant being two of just under 100 million records each, containing profile data apparently scraped from Instagram. The third-largest was a dataset of some 42 million TikTok users, followed by just under 4 million YouTube user profiles.

Comparitech says that, based on the samples it collected, one in five records contained either a telephone number or an email address. Every record also included at least some, and sometimes all, of the following information:

  • Profile name
  • Full real name
  • Profile photo
  • Account description

Statistics about follower engagement, including:

  • Number of followers
  • Engagement rate
  • Follower growth rate
  • Audience gender
  • Audience age
  • Audience location
  • Likes
  • Last post timestamp
  • Age
  • Gender

“The information would probably be most valuable to spammers and cybercriminals running phishing campaigns,” Paul Bischoff, Comparitech editor, says. “Even though the data is publicly accessible, the fact that it was leaked in aggregate as a well-structured database makes it much more valuable than each profile would be in isolation,” Bischoff adds. Indeed, Bischoff told me that it would be easy for a bot to use the database to post targeted spam comments on any Instagram profile matching criteria such as gender, age or number of followers.
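To illustrate Bischoff’s point about aggregation, here is a minimal sketch of how easily a well-structured dump can be filtered by targeting criteria; the field names and records below are invented, not taken from the leaked data.

    # Invented records mimicking the kind of fields described above.
    records = [
        {"profile_name": "user_a", "age": 24, "gender": "f",
         "followers": 15200, "email": "a@example.com"},
        {"profile_name": "user_b", "age": 41, "gender": "m",
         "followers": 320, "email": None},
    ]

    def match_targets(records, min_followers=0, gender=None, max_age=None):
        """Return profiles matching simple targeting criteria."""
        hits = []
        for r in records:
            if r["followers"] < min_followers:
                continue
            if gender is not None and r["gender"] != gender:
                continue
            if max_age is not None and r["age"] > max_age:
                continue
            hits.append(r)
        return hits

    # One pass over the aggregate picks out, say, young accounts with large followings.
    print(match_targets(records, min_followers=10000, gender="f", max_age=30))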

Tracing the source of the leaked data

So, where did all this data originate? The researchers suggest that the evidence, including dataset names, pointed to a company called Deep Social. However, Deep Social was banned by both Facebook and Instagram in 2018 after scraping user profile data. The company was wound down sometime after this.

A Facebook company spokesperson told me that “scraping people’s information from Instagram is a clear violation of our policies. We revoked Deep Social’s access to our platform in June 2018 and sent a legal notice prohibiting any further data collection.”

Once the researchers found the database and the clues to its origin, “we sent an alert to Deep Social, assuming the data belonged to them,” Bischoff says. The administrators of Deep Social then forwarded the disclosure to a Hong Kong-registered social media influencer data-marketing company called Social Data. “Social Data shut down the database about three hours after our initial email,” Bischoff says.

[…]

Source: 235 Million Instagram, TikTok And YouTube User Profiles Exposed In Massive Data Leak

Securus sued for ‘recording attorney-client jail calls, handing them to cops’ – months after settling similar lawsuit and charging more than 100x normal price for the calls. Hey, monopolies!

Jail phone telco Securus provided recordings of protected attorney-client conversations to cops and prosecutors, it is claimed, just three months after it settled a near-identical lawsuit.

The corporate giant controls all telecommunications between the outside world and prisoners in American jails that contract with it. It charges far above market rate, often more than 100 times, while doing so.

It has now been sued by three defense lawyers in Maine, who accuse the corporation of recording hundreds of conversations between them and their clients – something that is illegal in the US state. It then supplied those recordings to jail administrators and officers of the law, the attorneys allege.

Though police officers can request copies of convicts’ calls to investigate crimes, the cops aren’t supposed to get attorney-client-privileged conversations. In fact, these chats shouldn’t be recorded in the first place. Yet, it is claimed, Securus not only made and retained copies of these sensitive calls, it handed them to investigators and prosecutors.

“Securus failed to screen out attorney-client privileged calls, and then illegally intercepted these calls and distributed them to jail administrators who are often law enforcers,” the lawsuit [PDF] alleged. “In some cases the recordings have been shared with district attorneys.”

The lawsuit claims that over 800 calls covering 150 inmates and 30 law firms have been illegally recorded in the past 12 months, and it provides a (redacted) spreadsheet of all relevant calls.

[…]

Amazingly, this is not the first time Securus has been accused of this sort of behavior. Just three months ago, in May this year, the company settled a similar class-action lawsuit, this time covering jails in California.

That time, two former prisoners and a criminal defense attorney sued Securus after it recorded more than 14,000 legally protected conversations between inmates and their legal eagles. Those recordings only came to light after someone hacked the corp’s network and found some 70 million stored conversations, which were subsequently leaked to journalists.

[…]

Securus has repeatedly come under fire for similar ethical and technological failings. It was at the center of a huge row after it was revealed to be selling the locations of people’s phones to police through a web portal.

The telecoms giant was also criticized for charging huge rates for video calls, between $5.95 and $7.99 for a 20-minute call, at a jail where the warden banned in-person visits but still required relatives to travel to the jail and sit in a trailer in the prison’s parking lot to talk to their loved ones through a screen.

Securus is privately held so it doesn’t make its financial figures public. A leak in 2014 revealed that it made a $115m profit on $405m in revenue for that year.

Source: Securus sued for ‘recording attorney-client jail calls, handing them to cops’ – months after settling similar lawsuit • The Register

US Secret Service Bought Access to Babel Street’s Locate X Spy Tool for warrantless surveillance

Babel Street is a shadowy organization that offers a product called Locate X that is reportedly used to gather anonymized location data from a host of popular apps that users have unwittingly installed on their phones. When we say “unwittingly,” we mean that not everyone is aware that random innocuous apps are often bundling and anonymizing their data to be sold off to the highest bidder.

Back in March, Protocol reported that U.S. Customs and Border Protection had a contract to use Locate X and that sources inside the secretive company described the system’s capabilities as allowing a user “to draw a digital fence around an address or area, pinpoint mobile devices that were within that area, and see where else those devices have traveled, going back months.”
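As a hedged sketch of what such a “digital fence” query might look like conceptually, the snippet below filters a set of anonymized location pings to those within a radius of a point and then pulls every other ping from the matching devices. The data layout is assumed for illustration and has nothing to do with Babel Street’s actual implementation.

    from math import radians, sin, cos, asin, sqrt

    def haversine_km(lat1, lon1, lat2, lon2):
        """Great-circle distance between two points, in kilometres."""
        lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
        a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
        return 2 * 6371 * asin(sqrt(a))

    def devices_in_fence(pings, center_lat, center_lon, radius_km):
        """Device IDs that appeared inside the fence at any time."""
        return {p["device_id"] for p in pings
                if haversine_km(p["lat"], p["lon"], center_lat, center_lon) <= radius_km}

    def travel_history(pings, device_ids):
        """Every other ping, anywhere, for devices once seen inside the fence."""
        return [p for p in pings if p["device_id"] in device_ids]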

Protocol’s sources also said that the Secret Service had used the Locate X system in the course of investigating a large credit card skimming operation. On Monday, Motherboard confirmed that use when it published an internal Secret Service document it acquired through a Freedom of Information Act (FOIA) request. (You can view the full document here.)

The document covers a relationship between the Secret Service and Babel Street from September 28, 2017, to September 27, 2018. In the past, the Secret Service has reportedly used a separate social media surveillance product from Babel Street, and the newly released document totals fees paid, after the addition of the Locate X license, at $1,999,394.

[…]

Based on Fourth Amendment protections, law enforcement typically has to get a warrant or court order to obtain Americans’ location data. In 2018, the Supreme Court ruled that cops still need a warrant to gather cellphone location data from network providers. And while law enforcement can obtain a warrant when it seeks to view location data from a specific region of interest at a specific time, the Locate X system saves government agencies the trouble of going through judicial review with a next-best-thing approach.

The data brokerage industry benefits from the confusion that the public has about what information is collected and shared by various private companies that are perfectly within their legal rights. You can debate whether it’s acceptable for private companies to sell this data to each other for the purpose of making profits. But when this kind of sale is made to the U.S. government, it’s hard to argue that these practices aren’t, at least, violating the spirit of our constitutional rights.

Source: Secret Service Bought Access to Babel Street’s Locate X Spy Tool

New Toyotas will upload data to AWS to help create custom insurance premiums based on driver behaviour, and send your data to others too

Toyota already operates a “Mobility Services Platform” that it says helps it to “develop, deploy, and manage the next generation of data-driven mobility services for driver and passenger safety, security, comfort, and convenience”.

That data comes from a device called the “Data Communication Module” (DCM) that Toyota fits into many models in Japan, the USA and China.

Toyota reckons the data could turn into “new contextual services such as car share, rideshare, full-service lease, and new corporate and consumer services such as proactive vehicle maintenance notifications and driving behavior-based insurance.”

Toyota’s connected car vision

The company has touted that vision since at least 2016, but precious little evidence of it turning into products is available.

Which may be why Toyota has signed with AWS for not just cloud tech but also professional services.

The two companies say their joint efforts “will help build a foundation for streamlined and secure data sharing throughout the company and accelerate its move toward CASE (Connected, Autonomous/Automated, Shared and Electric) mobility technologies.”

Neither party has specified just which bits of the AWS cloud Toyota will take for a spin but it seems sensible to suggest the auto-maker is going to need lots of storage and analytics capabilities, making AWS S3 and Kinesis likely candidates for a test drive.
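Neither company has confirmed the architecture, so purely as a speculative sketch under that assumption, streaming driving telemetry into Kinesis with boto3 might look something like this; the stream name and record fields are invented.

    import json
    import boto3

    kinesis = boto3.client("kinesis", region_name="us-east-1")

    def send_telemetry(vehicle_id, sample):
        """Push one driving-behaviour sample onto a Kinesis stream."""
        kinesis.put_record(
            StreamName="dcm-driving-telemetry",   # hypothetical stream name
            Data=json.dumps({"vehicle_id": vehicle_id, **sample}).encode("utf-8"),
            PartitionKey=vehicle_id,              # keeps one vehicle's records ordered
        )

    # Example (needs AWS credentials and an existing stream):
    # send_telemetry("VIN123", {"speed_kph": 62, "hard_braking": False, "ts": 1598486400})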

Whatever Toyota uses, prepare for privacy ponderings because while cheaper car insurance sounds lovely, having an insurer source driving data from a manufacturer has plenty of potential pitfalls.

Source: Oh what a feeling: New Toyotas will upload data to AWS to help create custom insurance premiums based on driver behaviour • The Register

No, this isn’t a good thing and I hope there’s an opt out

Privacy Shield no longer valid: Joint Press Statement from U.S. Secretary of Commerce Wilbur Ross and European Commissioner for Justice Didier Reynders

The U.S. Department of Commerce and the European Commission have initiated discussions to evaluate the potential for an enhanced EU-U.S. Privacy Shield framework to comply with the July 16 judgment of the Court of Justice of the European Union in the Schrems II case. This judgment declared that this framework is no longer a valid mechanism to transfer personal data from the European Union to the United States.

The European Union and the United States recognize the vital importance of data protection and the significance of cross-border data transfers to our citizens and economies. We share a commitment to privacy and the rule of law, and to further deepening our economic relationship, and have collaborated on these matters for several decades.

Source: Joint Press Statement from U.S. Secretary of Commerce Wilbur Ross and European Commissioner for Justice Didier Reynders | U.S. Department of Commerce

Lawmakers Ask California DMV How It Makes $50 Million a Year Selling Drivers’ Data

A group of nearly a dozen lawmakers led by member of Congress Anna Eshoo wrote to the California Department of Motor Vehicles (DMV) on Wednesday looking for answers on how and why the organization sells the personal data of residents. The letter comes after Motherboard revealed last year that the DMV was making $50 million annually from selling drivers’ information.

The news highlights that selling personal data is not limited to private companies; some government entities follow similar practices too.

“What information is being sold, to whom it is sold, and what guardrails are associated with the sale remain unclear,” the letter, signed by congress members including Ted Lieu, Barbara Lee, and Mike Thompson, as well as California Assembly members Kevin Mullin and Mark Stone, reads.

Specifically, the letter asks what types of organizations the DMV has disclosed drivers’ data to in the past three years. Motherboard has previously reported on how other DMVs around the country sold such information to private investigators, including those hired to spy on suspected cheating spouses. In an earlier email to Motherboard, the California DMV said data requesters may include insurance companies, vehicle manufacturers, and prospective employers.

The information sold by DMVs generally includes names, physical addresses, and car registration information. Multiple other DMVs previously confirmed they have cut off access for some clients after those clients abused the data.

On Wednesday, the California DMV said in an emailed statement, “The DMV does not sell driver information for marketing purposes or to generate revenue outside of the cost of administering its requester program—which only provides certain driver and vehicle related information as statutorily required.”

“The DMV takes its obligation to protect personal information very seriously. Information is only released according to California law, and the DMV continues to review its release practices to ensure information is only released to authorized persons/entities and only for authorized purposes. For example, if a car manufacturer is required to send a recall notice to thousands of owners of a particular model of car, the DMV may provide the car manufacturer with information on California owners of this particular model through this program,” the statement added.

After Motherboard’s earlier investigation into the sale of DMV data to private investigators, senators criticized the practice; Bernie Sanders in particular said that DMVs should not profit from selling such data.

“In today’s ever-increasing digital world, our private information is too often stolen, abused, used for profit or grossly mishandled,” the new letter from lawmakers reads. “It’s critical that the custodians of the personal information of Americans—from corporations to government agencies—be held to high standards of data protection in order to restore the right of privacy in our country.”

Source: Lawmakers Ask California DMV How It Makes $50 Million a Year Selling Drivers’ Data

Private equity wants to own your DNA – Blackstone buys Ancestry at $250,- per person

The nation’s largest private equity firm is interested in buying your DNA data. The going rate: $261 per person. That appears to be what Blackstone, the $63 billion private equity giant, is willing to pay for genetic data controlled by one of the major companies gathering it from millions of customers.

Earlier this week, Blackstone announced it was paying $4.7 billion to acquire Ancestry.com, a pioneer in pop genetics that was launched in the 1990s to help people find out more about their family heritage.

Ancestry’s customers get an at-home DNA kit that they send back to the company. Ancestry then adds that DNA information to its database and sends its users a report about their likely family history. The company will also match you to other family members in its system, including distant cousins you may or may not want to hear from. And for up to $400 a year, you can continue to search Ancestry’s database to add to your knowledge of your family tree.

Ancestry has some information, mostly collected from public databases, on hundreds of millions of individuals. But its most valuable information is that of the people who have taken its DNA tests, which totals 18 million. And at Blackstone’s $4.7 billion purchase price that translates to just over $250 each.

[…]

Source: Private equity wants to own your DNA – CBS News

Whoops, our bad, we just may have ‘accidentally’ left Google Home devices recording your every word, sound, sorry

Your Google Home speaker may have been quietly recording sounds around your house without your permission or authorization, it was revealed this week.

The Chocolate Factory admitted it had accidentally turned on a feature that allowed its voice-controlled AI-based assistant to activate and snoop on its surroundings. Normally, the device only starts actively listening in and making a note of what it hears after it has heard wake words, such as “Ok, Google” or “Hey, Google,” for privacy reasons. Prior to waking, it’s constantly listening out for those words, but is not supposed to keep a record of what it hears.

Yet punters noticed their Google Homes had been recording random sounds, without any wake word uttered, when they started receiving notifications on their phones showing the device had heard things like a smoke alarm beeping or glass breaking in their homes – all without their approval.

Google said the feature had been accidentally turned on during a recent software update, and it has now been switched off, Protocol reported. It may be that this feature is or was intended to be used for home security at some point: imagine the assistant waking up whenever it hears a break in, for instance. Google just bought a $450m, or 6.6 per cent, stake in anti-burglary giant ADT.
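An entirely hypothetical sketch of the gating behaviour described above, not Google’s actual code: audio is discarded unless a wake word is heard, while the accidentally enabled sound-event path records a clip and notifies the phone without one.

    WAKE_WORDS = ("ok google", "hey google")
    ALERT_SOUNDS = {"smoke_alarm", "glass_break"}
    sound_event_feature_enabled = True  # the switch reportedly flipped on by the update

    def handle_audio(transcript_guess, detected_sound=None):
        """Decide whether a chunk of audio is kept or thrown away."""
        if any(w in transcript_guess.lower() for w in WAKE_WORDS):
            return "record: wake-word session"                 # intended behaviour
        if sound_event_feature_enabled and detected_sound in ALERT_SOUNDS:
            return "record: sound-event clip + notify phone"   # what users noticed
        return "discard"                                       # default pre-wake state

    print(handle_audio("hey google, set a timer"))
    print(handle_audio("", detected_sound="glass_break"))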

Source: Whoops, our bad, we just may have ‘accidentally’ left Google Home devices recording your every word, sound, sorry • The Register