Infrared cameras detect people and other objects by the heat they emit. Now, researchers have discovered the uncanny ability of a material to hide a target by masking its telltale heat properties.
The effect works across a range of temperatures that one day could include those of humans and vehicles, making it a potential asset for stealth technologies, the researchers say.
What makes the material special is its quantum nature—properties that are unexplainable by classical physics. The study, published today in the Proceedings of the National Academy of Sciences, is one step closer to unlocking the quantum material’s full potential.
The work was conducted by scientists and engineers at the University of Wisconsin-Madison, Harvard University, Purdue University, the Massachusetts Institute of Technology and Brookhaven National Laboratory.
Fooling infrared cameras is not new. Over the past few years, researchers have developed other materials made of graphene and black silicon that toy with electromagnetic radiation, also hiding objects from cameras.
But how the quantum material in this study tricks an infrared camera is unique: it decouples an object’s temperature from its thermal light radiation, which is counterintuitive given how fundamental physics says materials should behave. The decoupling allows information about an object’s temperature to be hidden from an infrared camera.
The discovery does not violate any laws of physics, but suggests that these laws might be more flexible than conventionally thought.
Quantum phenomena tend to come with surprises. Several properties of the material, samarium nickel oxide, have been a mystery since its discovery a few decades ago.
Shriram Ramanathan, a professor of materials engineering at Purdue, has investigated samarium nickel oxide for the past 10 years. Earlier this year, Ramanathan’s lab co-discovered that when oxygen is removed from its molecular structure, the material counterintuitively becomes a good insulator of electrical current rather than an unstable conductor.
Additionally, samarium nickel oxide is one of a few materials that can switch from an insulating phase to a conducting phase at high temperatures. University of Wisconsin-Madison researcher Mikhail Kats suspected that materials with this property might be capable of decoupling temperature and thermal radiation.
“There is a promise of engineering thermal radiation to control heat transfer and make it either easier or harder to identify and probe objects via infrared imaging,” said Kats, an associate professor of electrical and computer engineering.
Ramanathan’s lab created films of samarium nickel oxide on sapphire substrates to be compared with reference materials. Kats’ group measured spectroscopic emission and captured infrared images of each material as it was heated and cooled. Unlike other materials, samarium nickel oxide barely appeared hotter when it was heated up and maintained this effect between 105 and 135 degrees Celsius.
“Typically, when you heat or cool a material, the electrical resistance changes slowly. But for samarium nickel oxide, resistance changes in an unconventional manner from an insulating to a conducting state, which keeps its thermal light emission properties nearly the same for a certain temperature range,” Ramanathan said.
Because its thermal light emission doesn’t change as its temperature changes, the two quantities are decoupled over that 30-degree range.
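To see why this is so strange, consider a back-of-the-envelope sketch (not the paper’s model) using the Stefan–Boltzmann law: a surface radiates power proportional to its emissivity times the fourth power of its absolute temperature. For total radiated power to stay flat while temperature climbs from 105 to 135 degrees Celsius, the material’s effective emissivity would have to drop by exactly the factor that T⁴ grows.

```python
# Back-of-the-envelope sketch (not the paper's model): a gray body
# radiates P = emissivity * sigma * T^4 per unit area. For the emitted
# power to stay constant from 105 C to 135 C, effective emissivity
# must fall by the same factor that T^4 rises.

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiated_power(emissivity: float, temp_c: float) -> float:
    """Thermal power radiated per square meter of surface."""
    t_kelvin = temp_c + 273.15
    return emissivity * SIGMA * t_kelvin ** 4

# How much brighter a normal, fixed-emissivity surface gets:
growth = radiated_power(1.0, 135.0) / radiated_power(1.0, 105.0)

# Emissivity drop needed to cancel that growth exactly:
compensation = 1.0 / growth

print(f"T^4 growth over 105-135 C: {growth:.3f}")
print(f"required emissivity drop:  {compensation:.3f}")
```

The T⁴ term grows by about 36% over that window, so the material’s emissivity would have to fall to roughly 74% of its starting value to compensate, which is the kind of behavior the insulator-to-conductor transition apparently provides.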
According to Kats, this study paves the way not only for concealing information from infrared cameras, but also for making new types of optics and even improving infrared cameras themselves.
“We are looking forward to exploring this material and related nickel oxides for infrared camera components such as tunable filters, optical limiters that protect sensors, and new sensitive light detectors,” Kats said.
More information: Temperature-independent thermal radiation, Proceedings of the National Academy of Sciences (2019). DOI: 10.1073/pnas.1911244116, https://www.pnas.org/content/early/2019/12/16/1911244116, https://arxiv.org/abs/1902.00252
Next time you feel the need to justify to a family member, friend, or random acquaintance why you drive an old shitbox instead of a much more comfortable, modern vehicle, here’s another reason for you to trot out: your old shitbox, unlike every modern car, is not spying on you.
That’s the takeaway from a Washington Post investigation that hacked into a 2017 Chevy Volt to see what data the car hoovers up. The answer is: yikes.
Among the trove of data points were unique identifiers for my and Doug’s [the car’s owner] phones, and a detailed log of phone calls from the previous week. There was a long list of contacts, right down to people’s address, emails and even photos.
…
In our Chevy, we probably glimpsed just a fraction of what GM knows. We didn’t see what was uploaded to GM’s computers, because we couldn’t access the live OnStar cellular connection.
And it’s not just Chevy:
Mason has hacked into Fords that record locations once every few minutes, even when you don’t use the navigation system. He’s seen German cars with 300-gigabyte hard drives — five times as much as a basic iPhone 11. The Tesla Model 3 can collect video snippets from the car’s many cameras. Coming next: face data, used to personalize the vehicle and track driver attention.
Perhaps most troublingly, GM wouldn’t even share with the car’s owner what data about him it collected and shared.
And for what? Why are automakers collecting all this information about you? The short answer is they have no idea but are experimenting with the dumbest possible uses for it:
Automakers haven’t had a data reckoning yet, but they’re due for one. GM ran an experiment in which it tracked the radio music tastes of 90,000 volunteer drivers to look for patterns with where they traveled. According to the Detroit Free Press, GM told marketers that the data might help them persuade a country music fan who normally stopped at Tim Horton’s to go to McDonald’s instead.
That’s right, it wants to collect as much information about you as possible so it can take money from fast-food restaurants to target people who like a certain type of music, which is definitely, definitely a real indicator of what type of fast food restaurant you go to.
You should check out the entire investigation, as there are a lot of other fascinating bits in there, like what can be learned about a used infotainment system bought on eBay.
One point the article doesn’t mention, but that I think is important, is how badly this bodes for the electric future, since pretty much by definition every electric car must have at least some form of a computer. Unfortunately, making cars is hard and expensive so it’s unlikely a new privacy-focused electric automaker will pop up any time soon. I mean, hell, we barely even have privacy-focused phones.
Privacy or environmental friendliness: choose one. The future, it is trash.
If you were one of the millions of people who signed up with Unrollme to cut down on the emails from outfits you once bought a product from, we have some bad news for you: it has been storing and selling your data.
On Tuesday, America’s Federal Trade Commission finalized a settlement [PDF] with the New York City company, noting that it had deceived netizens when it promised not to “touch” people’s emails when they gave it permission to unsubscribe from, block, or otherwise get rid of marketing mailings they didn’t want.
It did touch them. In fact, it grabbed copies of e-receipts sent to customers after they’d bought something – often including someone’s name and physical address – and provided them to its parent company, Slice Technologies. Slice then used the information to compile reports that it sold to the very businesses people were trying to escape from.
Huge numbers of people signed up with Unrollme as a quick and easy way to cut down on the endless emails consumers get sent when they either buy something on the web, or provide their email address in-store or online. It can be time-consuming and tedious to click “unsubscribe” on emails as they come into your inbox, so Unrollme combined them in a single daily report with the ability to easily remove emails. This required granting Unrollme access to your inbox.
As the adage goes, if a product is free, you are the product. And so it was with Unrollme, which scooped up all that delicious data from people’s emails and handed it to Slice, where it was stored and compiled into market-research analytics products that were then sold.
And before you get all told-you-so and free-market about it, consider this: Unrollme knew that a significant number of potential customers would drop out of the sign-up process as soon as they were informed that the company would require access to their email account, and so it wooed them by making a series of comforting statements about how it wouldn’t actually do anything with that access.
Examples?
Here’s one: “You need to authorize us to access your emails. Don’t worry, this is just to watch for those pesky newsletters, we’ll never touch your personal stuff.”
As reporters raced this summer to bring new details of Ring’s law enforcement contracts to light, the home security company, acquired last year by Amazon for a whopping $1 billion, strove to underscore the privacy it had pledged to provide users.
Even as its creeping objective of ensuring an ever-expanding network of home security devices eventually becomes indispensable to daily police work, Ring promised its customers would always have a choice in “what information, if any, they share with law enforcement.” While it quietly toiled to minimize what police officials could reveal about Ring’s police partnerships to the public, it vigorously reinforced its obligation to the privacy of its customers—and to the users of its crime-alert app, Neighbors.
However, a Gizmodo investigation, which began last month and ultimately revealed the potential locations of up to tens of thousands of Ring cameras, has cast new doubt on the effectiveness of the company’s privacy safeguards. It further offers one of the most “striking” and “disturbing” glimpses yet, privacy experts said, of Amazon’s privately run, omni-surveillance shroud that’s enveloping U.S. cities.
[…]
Gizmodo has acquired data over the past month connected to nearly 65,800 individual posts shared by users of the Neighbors app. The posts, which reach back 500 days from the point of collection, offer extraordinary insight into the proliferation of Ring video surveillance across American neighborhoods and raise important questions about the privacy trade-offs of a consumer-driven network of surveillance cameras controlled by one of the world’s most powerful corporations.
And not just for those whose faces have been recorded.
Examining the network traffic of the Neighbors app produced unexpected data, including hidden geographic coordinates attached to each post—latitude and longitude with up to six decimal places of precision, enough to pinpoint a spot on the ground to within a few inches.
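A quick sanity check shows just how precise six decimal places of latitude is: one degree of latitude spans roughly 1/360 of Earth’s meridional circumference, so the sixth decimal place resolves about 11 centimeters.

```python
# Sanity check on coordinate precision: one degree of latitude spans
# roughly 1/360 of Earth's polar circumference (~40,008 km), so the
# sixth decimal place of a latitude value resolves about 11 cm.

EARTH_CIRCUMFERENCE_M = 40_008_000  # meridional circumference, meters

meters_per_degree = EARTH_CIRCUMFERENCE_M / 360   # ~111,133 m
sixth_decimal_m = meters_per_degree * 1e-6        # ~0.111 m

print(f"1 degree of latitude ~ {meters_per_degree:,.0f} m")
print(f"6th decimal place    ~ {sixth_decimal_m * 100:.1f} cm")
```

In other words, coordinates at that precision don’t just identify a neighborhood or a block; they identify a doorstep.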
[…]
Guariglia and other surveillance experts told Gizmodo that the ubiquity of the devices gives rise to fears that pedestrians are being recorded strolling in and out of “sensitive buildings,” including certain medical clinics, law offices, and foreign consulates. “I think this is my big concern,” he said, seeing the maps.
Indeed, Gizmodo located cameras in unnerving proximity to such sensitive buildings, including a clinic offering abortion services and a legal office that handles immigration and refugee cases.
It is possible to acquire Neighbors posts from anywhere in the country, in near-real-time, and sort them in any number of ways. Nearly 4,000 posts, for example, reference children, teens, or young adults; two purportedly involve people having sex; eight mention Immigration and Customs Enforcement; and more than 3,600 mention dogs, cats, coyotes, turkeys, and turtles.
While the race of individuals recorded is implicitly suggested in a variety of ways, Gizmodo found 519 explicit references to blackness and 319 to whiteness. A Ring spokesperson said the Neighbors content moderators strive to eliminate unessential references to skin color. Moderators are told to remove posts, they said, in which the sole identifier of a subject is that they’re “black” or “white.”
Ring’s guidelines instruct users: “Personal attributes like race, ethnicity, nationality, religion, sexual orientation, immigration status, sex, gender, age, disability, socioeconomic and veteran status, should never be factors when posting about an unknown person. This also means not referring to a person you are describing solely by their race or calling attention to other personal attributes not relevant to the matter being reported.”
“There’s no question, if most people were followed around 24/7 by a police officer or a private investigator it would bother them and they would complain and seek a restraining order,” said Jay Stanley, senior policy analyst at the American Civil Liberties Union. “If the same is being done technologically, silently and invisibly, that’s basically the functional equivalent.”
[…]
Companies like Ring have long argued—as Google did when it published millions of people’s faces on Street View in 2007—that pervasive street surveillance reveals, in essence, no more than what people have already made public; that there’s no difference between blanketing public spaces in internet-connected cameras and the human experience of walking or driving down the street.
But not everyone agrees.
“Persistence matters,” said Stanley, while acknowledging the ACLU’s long history of defending public photography. “I can go out and take a picture of you walking down the sidewalk on Main Street and publish it on the front of tomorrow’s newspaper,” he said. “That said, when you automate things, it makes it faster, cheaper, easier, and more widespread.”
Stanley and others devoted to studying the impacts of public surveillance envision a future in which Americans’ very perception of reality has become tainted by a kind of omnipresent observer effect. Children will grow up, it’s feared, equating the act of being outside with being recorded. The question is whether existing in this observed state will fundamentally alter the way people naturally behave in public spaces—and if so, how?
“It brings a pervasiveness and systematization that has significant potential effects on what it means to be a human being walking around your community,” Stanley said. “Effects we’ve never before experienced as a species, in all of our history.”
The Ring data has given Gizmodo the means to consider scenarios, no longer purely hypothetical, which exemplify what daily life is like under Amazon’s all-seeing eye. In the nation’s capital, for instance, walking the shortest route from one public charter school to a soccer field less than a mile away, 6th-12th graders are recorded by no fewer than 13 Ring cameras.
Gizmodo found that dozens of users in the same Washington, DC, area have used Neighbors to share videos of children. Thirty-six such posts describe mostly run-of-the-mill mischief—kids with “no values” ripping up parking tape, riding on their “dort-bikes” [sic] and taking “selfies.”
Ring’s guidelines state that users are supposed to respect “the privacy of others,” and not upload footage of “individuals or activities where a reasonable person would expect privacy.” Users are left to interpret this directive themselves, though Ring’s content moderators are supposedly actively combing through the posts and users can flag “inappropriate” posts for review.
Ángel Díaz, an attorney at the Brennan Center for Justice focusing on technology and policing, said the “sheer size and scope” of the data Ring amasses is what separates it from other forms of public photography.
[…]
Guariglia, who’s been researching police surveillance for a decade and holds a PhD in the subject, said he believes the hidden coordinates invalidate Ring’s claim that only users decide “what information, if any,” gets shared with police—whether they’ve yet to acquire it or not.
“I’ve never really bought that argument,” he said, adding that if they truly wanted, the police could “very easily figure out where all the Ring cameras are.”
The Guardian reported in August that Ring once shared maps with police depicting the locations of active Ring cameras. CNET reported last week, citing public documents, that police partnered with Ring had once been given access to “heat maps” that reflected the area where cameras were generally concentrated.
The privacy researcher who originally obtained the heat maps, Shreyas Gandlur, discovered that if police zoomed in far enough, circles appeared around individual cameras. Ring, however, denied that the maps accurately portrayed customers’ locations, saying they displayed only “approximate device density”; it also instructed police not to share them publicly.
Two browsers have yanked Avast and AVG online security extensions from their web stores after a report revealed that they were unnecessarily sucking up a ton of data about users’ browsing history.
Wladimir Palant, the creator of Adblock Plus, initially surfaced the issue—which extends to Avast Online Security and Avast SafePrice as well as the Avast-owned AVG Online Security and AVG SafePrice extensions—in a blog post back in October, but this week flagged it to the browser makers. In response, both Mozilla and Opera pulled the extensions from their stores. As of Wednesday, however, the extensions curiously remained in Google’s Chrome Web Store.
Using dev tools to examine network traffic, Palant was able to determine that the extensions were collecting an alarming amount of data about users’ browsing history and activity, including URLs, where you navigated from, whether the page was visited in the past, the version of browser you’re using, country code, and, if the Avast Antivirus is installed, the OS version of your device, among other data. Palant argued the data collection far exceeded what was necessary for the extensions to perform their basic jobs.
Customers in China who buy SIM cards or register new mobile-phone services must have their faces scanned under a new law that came into effect yesterday. China’s government says the new rule, which was passed into law back in September, will “protect the legitimate rights and interest of citizens in cyberspace.”
A controversial step: It can be seen as part of an ongoing push by China’s government to make sure that people use services on the internet under their real names, thus helping to reduce fraud and boost cybersecurity. On the other hand, it also looks like part of a drive to make sure every member of the population can be surveilled.
How do Chinese people feel about it? It’s hard to say for sure, given how strictly the press and social media are regulated, but there are hints of growing unease over the use of facial recognition technology within the country. From the outside, there has been a lot of concern over the role the technology will play in the controversial social credit system, and how it’s been used to suppress Uighur Muslims in the western region of Xinjiang.
Homeland Security wants to expand facial recognition checks for travelers arriving in and departing from the U.S. to include citizens, who had previously been exempt from the mandatory checks.
In a filing, the department has proposed that all travelers, not just foreign nationals or visitors, will have to complete a facial recognition check before they are allowed to enter or leave the country.
Facial recognition for departing flights has increased in recent years as part of Homeland Security’s efforts to catch visitors and travelers who overstay their visas. The department, whose responsibility is to protect the border and control immigration, has a deadline of 2021 to roll out facial recognition scanners to the 20 largest airports in the United States, despite facing a rash of technical challenges.
But although there may not always be a clear way to opt-out of facial recognition at the airport, U.S. citizens and lawful permanent residents — also known as green card holders — have been exempt from these checks, the existing rules say.
Now, the proposed rule change to include citizens has drawn ire from one of the largest civil liberties groups in the country.
“Time and again, the government told the public and members of Congress that U.S. citizens would not be required to submit to this intrusive surveillance technology as a condition of traveling,” said Jay Stanley, a senior policy analyst at the American Civil Liberties Union.
“This new notice suggests that the government is reneging on what was already an insufficient promise,” he said.
“Travelers, including U.S. citizens, should not have to submit to invasive biometric scans simply as a condition of exercising their constitutional right to travel. The government’s insistence on hurtling forward with a large-scale deployment of this powerful surveillance technology raises profound privacy concerns,” he said.
Citing a data breach of close to 100,000 license plate and traveler images in June, as well as concerns about a lack of sufficient safeguards to protect the data, Stanley said the government “cannot be trusted” with this technology and that lawmakers should intervene.
Developers working on open-source ad-blocker uBlock Origin have uncovered a mechanism for tracking web browsers around the internet that defies today’s blocking techniques.
A method to block this so-called unblockable tracker has been developed by the team, though it only works in Firefox, leaving Chrome and possibly other browsers susceptible. This fix is now available to uBlock Origin users.
The tracker relies on DNS queries to get past browser defenses, so some form of domain-name look-up filtering could thwart this snooping. As far as netizens armed with just their browser and a regular old content-blocker plugin are concerned, this tracker can sneak by unnoticed. It can potentially be used by advertising and analytics networks to fingerprint netizens as they browse the web, silently building up profiles of their interests and keeping count of the pages they visit.
And, interestingly enough, it’s seemingly a result of an arms race between browser makers and ad-tech outfits as they battle over first and third-party cookies.
[…]
Many marketers, keen on maintaining their tracking and data collection capabilities, have turned to a technique called DNS delegation or DNS aliasing. It involves having a website publisher delegate a subdomain that the third-party analytics provider can use and aliasing it to an external server using a CNAME DNS record. The website and its external trackers thus seem to the browser to be coming from the same domain and are allowed to operate.
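The detection logic this forces on blockers is simple in principle: a request that looks first-party (a subdomain of the site you’re on) but whose CNAME resolves to a different registrable domain should be treated as third-party. Here is a minimal sketch of that idea (not uBlock Origin’s actual code); the resolved CNAME chain is passed in as data, since in a browser the extension gets it from a DNS-resolution API. Note that real eTLD+1 extraction requires the Public Suffix List; the last-two-labels heuristic below is a simplification.

```python
# Minimal sketch of CNAME-cloaking detection (not uBlock Origin's
# actual code): a subdomain of the page's own site that is a CNAME
# for a different registrable domain gets treated as third-party.
# Caveat: proper eTLD+1 extraction needs the Public Suffix List;
# taking the last two DNS labels is a simplification.

def registrable_domain(hostname: str) -> str:
    """Naive eTLD+1: the last two DNS labels."""
    return ".".join(hostname.rstrip(".").split(".")[-2:])

def is_disguised_third_party(page_host: str, request_host: str,
                             cname_target: "str | None") -> bool:
    """True if a seemingly same-site request is really a CNAME to another site."""
    if registrable_domain(request_host) != registrable_domain(page_host):
        return True  # plainly third-party already
    if cname_target is None:
        return False  # genuine first-party, no aliasing involved
    return registrable_domain(cname_target) != registrable_domain(page_host)

# A tracker hiding behind the publisher's own subdomain (hypothetical names):
print(is_disguised_third_party(
    "news.example", "metrics.news.example", "collect.tracker-cdn.net"))
# A real first-party subdomain whose CNAME stays within the same site:
print(is_disguised_third_party(
    "news.example", "img.news.example", "cdn.news.example"))
```

The first call flags the disguised tracker; the second lets an ordinary same-site CDN alias through. The hard part, as the article goes on to explain, is that a blocker can only apply this test if the browser exposes the CNAME chain at all.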
As Eulerian explains on its website, “The collection taking place under the name of the advertiser, and not under a third party, neither the ad blockers nor the browsers, interrupt the calls of tags.”
But wait, there’s more
Another marketing analytics biz, Wizaly, also advocates this technique to bypass Apple’s ITP 2.2 privacy protections.
As does Adobe, which explains on its website that one of the advantages of CNAME records for data collection is they “[allow] you to track visitors between a main landing domain and other domains in browsers that do not accept third-party cookies.”
In a conversation with The Register, Aeris said Criteo, an ad retargeting biz, appears to have recently deployed the technique for its customers, which suggests it will become more pervasive. Aeris added that DNS delegation clearly violates Europe’s GDPR, which “clearly states that ‘user-centric tracking’ requires consent, especially in the case of a third-party service usage.”
A recent statement from the Hamburg Commissioner for Data Protection and Freedom of Information in Germany notes that Google Analytics and similar services can only be used with consent.
“This exploit has been around for a long time, but is particularly useful now because if you can pretend to be a first-party cookie, then you avoid getting blocked by ad blockers, and the major browsers – Chrome, Safari, and Firefox,” said Augustine Fou, a cybersecurity and ad fraud researcher who advises companies about online marketing, in an email to The Register.
“This is an exploit, not an ‘oopsies,’ because it is a hidden and deliberate action to make a third-party cookie appear to be first-party to skirt privacy regulations and consumer choice. This is yet another example of the ‘badtech industrial complex’ protecting its river of gold.”
[…]
Two days ago, uBlock Origin developer Raymond Hill deployed a fix for Firefox users in uBlock Origin v1.24.1b0. Firefox supports an API to resolve the hostname of a DNS record, which can unmask CNAME shenanigans, thereby allowing developers to craft blocking behavior accordingly.
“uBO is now equipped to deal with third-party disguised as first-party as far as Firefox’s browser.dns allows it,” Hill wrote, adding that he assumes this can’t be fixed in Chrome at the moment because Chrome doesn’t have an equivalent DNS resolution API.
Aeris said, “For Chrome, there is no DNS API available, and so no easy way to detect this,” adding that Chrome under Manifest v3, a pending revision of Google’s extension platform, will break uBO. Hill, uBO’s creator, recently confirmed to The Register that’s still the case.
Even if Chrome were to implement a DNS resolution API, Google has made it clear it wants to maintain the ability to track people on the web and place cookies, for the sake of its ad business.
Apple’s answer to marketer angst over being denied analytic data by Safari has been to propose a privacy-preserving ad click attribution scheme that allows 64 different ad campaign identifiers – so marketers can see which worked.
Google’s alternative proposal, part of its “Privacy Sandbox” initiative, calls for an identifier field capable of storing 64 bits of data – room for vastly more than Apple’s 64 distinct values.
As the Electronic Frontier Foundation has pointed out, this enables a range of numbers up to 18 quintillion, allowing advertisers to create unique IDs for every ad impression they serve, information that could then be associated with individual users.
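The gap between the two proposals is easy to state in plain arithmetic: 64 distinct campaign identifiers fit in 6 bits, while a 64-bit field holds 2⁶⁴ distinct values, enough to give every single ad impression its own unique ID.

```python
# The gap between the two proposals, in plain arithmetic:
# Apple's scheme permits 64 distinct campaign identifiers (6 bits),
# while a 64-bit identifier field permits 2**64 distinct values.

safari_ids = 64                             # distinct IDs under Apple's proposal
bits_needed = safari_ids.bit_length() - 1   # 6 bits encode 64 values
google_ids = 2 ** 64                        # distinct values in a 64-bit field

print(f"Apple:  {safari_ids} IDs ({bits_needed} bits)")
print(f"Google: {google_ids:,} IDs")
print(f"ratio:  {google_ids // safari_ids:,}x")
```

That 2⁶⁴ figure is the roughly 18 quintillion the EFF refers to: not a count of campaigns, but a per-impression serial number in all but name.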
More than 600 police forces across the country have entered into partnerships with Ring, the camera giant, allowing them to quickly request and download video captured by Ring’s motion-detecting, internet-connected cameras inside and around Americans’ homes.
The company says the videos can be a critical tool in helping law enforcement investigate crimes such as trespassing, burglary and package theft. But some lawmakers and privacy advocates say the systems could also empower more widespread police surveillance, fuel racial profiling and spark new neighborhood fears.
In September, following a report about Ring’s police partnerships in The Washington Post, Sen. Edward Markey, D-Mass., wrote to Amazon asking for details about how it protected the privacy and civil liberties of people caught on camera. Since that report, the number of law enforcement agencies working with Ring has increased nearly 50%.
In two responses from Amazon’s vice president of public policy, Brian Huseman, the company said it placed few restrictions on how police used or shared the videos offered up by homeowners. (Amazon CEO Jeff Bezos also owns The Washington Post.)
Police in those communities can use Ring software to request up to 12 hours of video from anyone within half a square mile of a suspected crime scene, covering a 45-day time span, Huseman said. Police are required to include a case number for the crime they are investigating, but not any other details or evidence related to the crime or their request.
Markey said in a statement that Ring’s policies showed the company had failed to enact basic safeguards to protect Americans’ privacy.
“Connected doorbells are well on their way to becoming a mainstay of American households, and the lack of privacy and civil rights protections for innocent residents is nothing short of chilling,” he said.
“If you’re an adult walking your dog or a child playing on the sidewalk, you shouldn’t have to worry that Ring’s products are amassing footage of you and that law enforcement may hold that footage indefinitely or share that footage with any third parties.”
Ring, which Amazon bought last year for more than $800 million, did not immediately respond to requests for comment.
We are making plans to adopt DNS over HTTPS (or DoH) in the Windows DNS client. As a platform, Windows Core Networking seeks to enable users to use whatever protocols they need, so we’re open to having other options such as DNS over TLS (DoT) in the future. For now, we’re prioritizing DoH support as the most likely to provide immediate value to everyone. For example, DoH allows us to reuse our existing HTTPS infrastructure.
For our first milestone, we’ll start with a simple change: use DoH for DNS servers Windows is already configured to use. There are now several public DNS servers that support DoH, and if a Windows user or device admin configures one of them today, Windows will just use classic DNS (without encryption) to that server. However, since these servers and their DoH configurations are well known, Windows can automatically upgrade to DoH while using the same server.
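“Reusing existing HTTPS infrastructure” is fairly literal: under RFC 8484, a DoH query is just an ordinary wire-format DNS message carried in an HTTPS request. As an illustration (not Windows’ implementation; the resolver URL is one public example), here is how the GET form of a DoH request is constructed, with the query base64url-encoded and its padding stripped:

```python
import base64
import struct

def build_dns_query(name: str, qtype: int = 1) -> bytes:
    """Wire-format DNS query for `name` (qtype 1 = A record)."""
    # Header: ID=0 (RFC 8484 recommends it for cache friendliness),
    # flags=0x0100 (recursion desired), one question, no other records.
    header = struct.pack("!HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.rstrip(".").split(".")
    ) + b"\x00"
    question = qname + struct.pack("!HH", qtype, 1)  # QCLASS 1 = IN
    return header + question

def doh_get_url(resolver_url: str, name: str) -> str:
    """RFC 8484 GET form: base64url-encoded query with '=' padding stripped."""
    encoded = base64.urlsafe_b64encode(build_dns_query(name)).rstrip(b"=")
    return f"{resolver_url}?dns={encoded.decode('ascii')}"

# Example against a well-known public DoH resolver; fetch the URL with
# any HTTPS client, sending "Accept: application/dns-message".
url = doh_get_url("https://cloudflare-dns.com/dns-query", "example.com")
print(url)
```

Because the transport is plain HTTPS, the query rides the same TLS stack, caches, and proxies as any web request, which is exactly the deployment advantage Microsoft is pointing at.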
A majority of Americans believe their online and offline activities are being tracked and monitored by companies and the government with some regularity. It is such a common condition of modern life that roughly six-in-ten U.S. adults say they do not think it is possible to go through daily life without having data collected about them by companies or the government.
[…]
Large shares of U.S. adults are not convinced they benefit from this system of widespread data gathering. Some 81% of the public say that the potential risks they face because of data collection by companies outweigh the benefits, and 66% say the same about government data collection. At the same time, a majority of Americans report being concerned about the way their data is being used by companies (79%) or the government (64%). Most also feel they have little or no control over how these entities use their personal information.
[…]
Fully 97% of Americans say they are ever asked to approve privacy policies, yet only about one-in-five adults overall say they always (9%) or often (13%) read a company’s privacy policy before agreeing to it. Some 38% of all adults maintain they sometimes read such policies, but 36% say they never read a company’s privacy policy before agreeing to it.
[…]
Among adults who say they ever read privacy policies before agreeing to their terms and conditions, only a minority – 22% – say they read them all the way through.
There is also a general lack of understanding about data privacy laws among the general public: 63% of Americans say they understand very little or nothing at all about the laws and regulations that are currently in place to protect their data privacy.
Popular health websites are sharing private, personal medical data with big tech companies, according to an investigation by the Financial Times. The data, including medical diagnoses, symptoms, prescriptions, and menstrual and fertility information, are being sold to companies like Google, Amazon, Facebook, and Oracle and smaller data brokers and advertising technology firms, like Scorecard and OpenX.
The investigation: The FT analyzed 100 health websites, including WebMD, Healthline, health insurance group Bupa, and parenting site Babycentre, and found that 79% of them dropped cookies on visitors, allowing them to be tracked by third-party companies around the internet. This was done without consent, making the practice illegal under European Union regulations. By far the most common destination for the data was Google’s advertising arm DoubleClick, which showed up in 78% of the sites the FT tested.
Responses: The FT piece contains a list of all the comments from the many companies involved. Google, for example, said that it has “strict policies preventing advertisers from using such data to target ads.” Facebook said it was conducting an investigation and would “take action” against websites “in violation of our terms.” And Amazon said: “We do not use the information from publisher websites to inform advertising audience segments.”
A window into a broken industry: This sort of rampant rule-breaking has been a dirty secret in the advertising technology industry, which is worth $200 billion globally, ever since EU countries adopted the General Data Protection Regulation in May 2018. A recent inquiry by the UK’s data regulator found that the sector is rife with illegal practices, as in this case, where privacy policies did not adequately outline which data would be shared with third parties or what it would be used for. The onus is now on EU and UK authorities to act to put an end to these practices.
The social media giant said the number of government demands for user data increased by 16% to 128,617 demands during the first half of this year compared to the second half of last year.
That’s the highest number of government demands it has received in any reporting period since it published its first transparency report in 2013.
The U.S. government led the way with the most requests — 50,741 demands for user data, with some account or user data handed over to authorities in 88% of cases. Facebook said two-thirds of all the U.S. government’s requests came with a gag order, preventing the company from telling the user about the request for their data.
But Facebook said it was able to release details of 11 so-called national security letters (NSLs) for the first time after their gag provisions were lifted during the period. National security letters can compel companies to turn over non-content data at the request of the FBI. These letters are not approved by a judge, and often come with a gag order preventing their disclosure. But since the Freedom Act passed in 2015, companies have been allowed to request the lifting of those gag orders.
The report also said the social media giant had detected 67 disruptions of its services in 15 countries, compared to 53 disruptions in nine countries during the second half of last year.
Facebook also said it pulled 11.6 million pieces of content that violated its policies on child nudity and sexual exploitation of children, up from 5.8 million in the same period a year earlier.
The social media giant also included Instagram in its report for the first time, disclosing the removal of 1.68 million pieces of content during the second and third quarters of the year.
When you’re scrolling through Facebook’s app, the social network could be watching you back, concerned users have found. Multiple people have found and reported that their iPhone cameras were turned on in the background while they were looking at their feed.
The issue came to light through several posts on Twitter. Users noted that their cameras were activated behind Facebook’s app as they were watching videos or looking at photos on the social network.
When people clicked a video to view it full screen and then returned it to normal, a bug shifted Facebook’s mobile layout slightly to the right. In the open space on the left, the phone’s camera could be seen activated in the background.
This was documented in multiple cases, with the earliest incident on Nov. 2.
It’s since been tweeted a couple other times, and CNET has also been able to replicate the issue.
The Wall Street Journal reported Monday that the tech giant partnered with Ascension, a non-profit and Catholic health systems company, on the program code-named “Project Nightingale.” According to the Journal, Google began its initiative with Ascension last year, and it involves everything from diagnoses and lab results to birth dates, patient names, and other personal health data—all of it reportedly handed over to Google without first notifying patients or doctors. The Journal said this amounts to data on millions of Americans spanning 21 states.
“By working in partnership with leading healthcare systems like Ascension, we hope to transform the delivery of healthcare through the power of the cloud, data analytics, machine learning, and modern productivity tools—ultimately improving outcomes, reducing costs, and saving lives,” Tariq Shaukat, president of Google Cloud, said in a statement.
Beyond the alarming reality that a tech company can collect data about people without their knowledge for its own uses, the Journal noted the practice is legal under the Health Insurance Portability and Accountability Act (HIPAA). When reached for comment, representatives for both companies pointed Gizmodo to a press release about the relationship—which the Journal stated was published after its report—that states: “All work related to Ascension’s engagement with Google is HIPAA compliant and underpinned by a robust data security and protection effort and adherence to Ascension’s strict requirements for data handling.”
Still, the Journal report raises concerns about whether the data handling is indeed as secure as both companies appear to think it is. Citing a source familiar with the matter as well as related documents, the paper said at least 150 employees at Google have access to a significant portion of the health data Ascension handed over on millions of people.
Google hasn’t exactly proven itself to be infallible when it comes to protecting user data. Remember when Google+ users had their data exposed and Google failed to alert them in order to shield its own ass? Or when a Google contractor leaked more than a thousand Assistant recordings, and the company defended itself by claiming that most of its audio snippets aren’t reviewed by humans? Not exactly the kind of stuff you want to read about a company that may have your medical history on hand.
The agreement gives DeepMind access to a wide range of healthcare data on the 1.6 million patients who pass through three London hospitals run by the Royal Free NHS Trust – Barnet, Chase Farm and the Royal Free – each year. This will include information about people who are HIV-positive, for instance, as well as details of drug overdoses and abortions. The agreement also includes access to patient data from the last five years.
DeepMind announced in February that it was working with the NHS, saying it was building an app called Streams to help hospital staff monitor patients with kidney disease. But the agreement suggests that it has plans for a lot more.
This is the first we’ve heard of DeepMind getting access to historical medical records, says Sam Smith, who runs health data privacy group MedConfidential. “This is not just about kidney function. They’re getting the full data.”
The agreement clearly states that Google cannot use the data in any other part of its business. The data itself will be stored in the UK by a third party contracted by Google, not in DeepMind’s offices. DeepMind is also obliged to delete its copy of the data when the agreement expires at the end of September 2017.
All data needed
Google says that since there is no separate dataset for people with kidney conditions, it needs access to all of the data in order to run Streams effectively. In a statement, the Royal Free NHS Trust says that it “provides DeepMind with NHS patient data in accordance with strict information governance rules and for the purpose of direct clinical care only.”
The US Department of Homeland Security (DHS) expects to have face, fingerprint, and iris scans of at least 259 million people in its biometrics database by 2022, according to a recent presentation from the agency’s Office of Procurement Operations reviewed by Quartz.
That’s about 40 million more than the agency’s 2017 projections, which estimated 220 million unique identities by 2022, according to previous figures cited by the Electronic Frontier Foundation (EFF), a San Francisco-based privacy rights nonprofit.
A slide deck, shared with attendees at an Oct. 30 DHS industry day, includes a breakdown of what its systems currently contain, as well as an estimate of what the next few years will bring. The agency is transitioning from a legacy system called IDENT to a cloud-based system (hosted by Amazon Web Services) known as Homeland Advanced Recognition Technology, or HART. The biometrics collection maintained by DHS is the world’s second-largest, behind only India’s countrywide biometric ID network in size. The traveler data kept by DHS is shared with other US agencies, state and local law enforcement, as well as foreign governments.
The first two stages of the HART system are being developed by US defense contractor Northrop Grumman, which won the $95 million contract in February 2018. DHS wasn’t immediately available to comment on its plans for its database.
[…]
Last month’s DHS presentation describes IDENT as an “operational biometric system for rapid identification and verification of subjects using fingerprints, iris, and face modalities.” The new HART database, it says, “builds upon the foundational functionality within IDENT,” to include voice data, DNA profiles, “scars, marks, and tattoos,” and the as-yet undefined “other biometric modalities as required.” EFF researchers caution some of the data will be “highly subjective,” such as information gleaned during “officer encounters” and analysis of people’s “relationship patterns.”
EFF worries that such tracking “will chill and deter people from exercising their First Amendment protected rights to speak, assemble, and associate,” since such specific data points could be used to identify “political affiliations, religious activities, and familial and friendly relationships.”
[…]
EFF researchers said in a 2018 blog post that facial-recognition software, like what the DHS is using, is “frequently…inaccurate and unreliable.” DHS’s own tests found the systems “falsely rejected as many as 1 in 25 travelers,” according to EFF, which calls out potential foreign partners in countries such as the UK, where false-positives can reportedly reach as high as 98%. Women and people of color are misidentified at rates significantly higher than those for white people and men, and darker skin tones increase one’s chances of being improperly flagged.
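A 98% false-positive rate sounds impossible until you account for base rates: when almost nobody in the screened population is actually on a watchlist, even a system with seemingly strong accuracy produces overwhelmingly false alerts. The sketch below works through the arithmetic with entirely hypothetical numbers (these are not DHS, EFF, or UK figures):

```python
# Illustrative base-rate arithmetic for a face-recognition watchlist.
# All numbers below are hypothetical, chosen only to show the effect.

def watchlist_outcomes(travelers, on_watchlist, true_positive_rate, false_positive_rate):
    """Return (true_alerts, false_alerts) for a screened population."""
    innocents = travelers - on_watchlist
    true_alerts = on_watchlist * true_positive_rate
    false_alerts = innocents * false_positive_rate
    return true_alerts, false_alerts

# 1,000,000 travelers, only 100 of whom are actually on the watchlist.
# Even a system with 99% detection and a 1% false-positive rate yields
# roughly 10,000 false alerts for every ~99 genuine ones.
true_alerts, false_alerts = watchlist_outcomes(1_000_000, 100, 0.99, 0.01)
share_false = false_alerts / (true_alerts + false_alerts)
print(f"{true_alerts:.0f} true alerts, {false_alerts:.0f} false alerts")
print(f"{share_false:.0%} of all alerts are false")
```

Under these assumptions about 99% of all alerts are false, which is the same order of magnitude as the UK figure quoted above: the rarer genuine matches are, the more the false positives dominate.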
“DHS is also partnering with airlines and other third parties to collect face images from travelers entering and leaving the US,” the EFF said. “When combined with data from other government agencies, these troubling collection practices will allow DHS to build a database large enough to identify and track all people in public places, without their knowledge—not just in places the agency oversees, like airports, but anywhere there are cameras.”
Facebook on Tuesday disclosed that as many as 100 software developers may have improperly accessed user data, including the names and profile pictures of people in specific groups on the social network.
The company recently discovered that some apps retained access to this type of user data despite changes made to its service in April 2018 to prevent this, Facebook said in a blog post. The company said it has removed this access and reached out to 100 developer partners who may have accessed the information. Facebook said that at least 11 developer partners accessed this type of data in the last 60 days.
“Although we’ve seen no evidence of abuse, we will ask them to delete any member data they may have retained and we will conduct audits to confirm that it has been deleted,” the company said in the blog post.
The company did not say how many users were affected.
Facebook has been restricting software developer access to its user data following reports in March 2018 that political consulting firm Cambridge Analytica had improperly accessed the data of 87 million Facebook users, potentially to influence the outcome of the 2016 U.S. presidential election.
Click “Activity controls” from the left-hand sidebar.
Scroll down to the data type you wish to manage, then select “Manage Activity.”
On this next page, click on “Choose how long to keep” under the calendar icon.
Select the auto-deletion time you wish (three or 18 months), or you can choose to delete your data manually.
Click “Next” to save your changes.
Repeat these steps for each of the types of data you want to be auto-deleted. For your Location History in particular, you’ll need to click on “Today” in the upper-left corner first, and then click on the gear icon in the lower-right corner of your screen. Then, select “Automatically delete Location History,” and pick a time.
The vast majority of technology, media and telecom (TMT) companies want to monetise customer data, but are concerned about regulations such as Europe’s GDPR, according to research from law firm Simmons & Simmons.
The outfit surveyed 350 global business leaders in the TMT sector to understand their approach to data commercialisation. It found that 78 per cent of companies have some form of data commercialisation in place but only 20 per cent have an overarching plan for its use.
Alex Brown, global head of TMT Sector at Simmons & Simmons, observed that the firm’s clients are increasingly seeking advice on the legal ways they can monetise data. He said that can mean internal use, such as drawing on insights into customer behaviour to improve services, or selling anonymised data to third parties.
One example of data monetisation within the sector is Telefónica’s Smart Steps business, which uses “fully anonymised and aggregated mobile network data to measure and compare the number of people visiting an area at any time”.
That information is then sold on to businesses to provide insight into their customer base.
Brown said: “All mobile network operators know your location because the phone is talking to the network, so through that they know a lot about people’s movement. That aggregated data could be used by town planners, transport networks, or retailers to work out the best place to site a new store.”
However, he added: “There is a bit of a data paralysis at the moment. GDPR and what we’ve seen recently in terms of enforcement – albeit related to breaches – and the Google fine in France… has definitely dampened some innovation.”
Earlier this year France’s data protection watchdog fined Google €50m for breaching European Union online privacy rules, the biggest penalty levied against a US tech giant. It said Google lacked transparency and clarity in the way it informs users about its handling of personal data and failed to properly obtain their consent for personalised ads.
But Brown pointed out that as long as privacy policies are properly laid out and the data is fully anonymised, companies wanting to make money off data should not fall foul of GDPR.
A confidential Sidewalk Labs document from 2016 lays out the founding vision of the Google-affiliated development company, which included having the power to levy its own property taxes, track and predict people’s movements and control some public services.
The document, which The Globe and Mail has seen, also describes how people living in a Sidewalk community would interact with and have access to the space around them – an experience based, in part, on how much data they’re willing to share, and which could ultimately be used to reward people for “good behaviour.”
Known internally as the “yellow book,” the document was designed as a pitch book for the company, and predates Sidewalk’s relationship and formal agreements with Toronto by more than a year. Peppered with references to Disney theme parks and noted futurist Buckminster Fuller, it says Sidewalk intended to “overcome cynicism about the future.”
But the 437-page book documents how much private control of city services and city life Google parent company Alphabet Inc.’s leadership envisioned when it created the company,
[…]
“The ideas contained in this 2016 internal paper represent the result of a wide-ranging brainstorming process very early in the company’s history,” Sidewalk spokesperson Keerthana Rang said. “Many, if not most, of the ideas it contains were never under consideration for Toronto or discussed with Waterfront Toronto and governments. The ideas that we are actually proposing – which we believe will achieve a new model of inclusive urban growth that makes housing more affordable for families, creates new jobs for residents, and sets a new standard for a healthier planet – can all be found at sidewalktoronto.ca.”
[…]
To carry out its vision and planned services, the book states Sidewalk wanted to control its area much like Disney World does in Florida, where in the 1960s it “persuaded the legislature of the need for extraordinary exceptions.” This could include granting Sidewalk taxation powers. “Sidewalk will require tax and financing authority to finance and provide services, including the ability to impose, capture and reinvest property taxes,” the book said. The company would also create and control its own public services, including charter schools, special transit systems and a private road infrastructure.
Sidewalk’s early data-driven vision also extended to public safety and criminal justice.
The book mentions both the data-collection opportunities for police forces (Sidewalk notes it would ask for local policing powers similar to those granted to universities) and the possibility of “an alternative approach to jail,” using data from so-called “root-cause assessment tools.” This would guide officials in determining a response when someone is arrested, such as sending someone to a substance abuse centre. The overall criminal justice system and policing of serious crimes and emergencies would be “likely to remain within the purview of the host government’s police department,” however.
Data collection plays a central role throughout the book. Early on, the company notes that a Sidewalk neighbourhood would collect real-time position data “for all entities” – including people. The company would also collect a “historical record of where things have been” and “about where they are going.” Furthermore, unique data identifiers would be generated for “every person, business or object registered in the district,” helping devices communicate with each other.
There would be a quid pro quo to sharing more data with Sidewalk, however. The document describes a tiered level of services, where people willing to share data can access certain perks and privileges not available to others. Sidewalk visitors and residents would be “encouraged to add data about themselves and connect their accounts, either to take advantage of premium services like unlimited wireless connectivity or to make interactions in the district easier,” it says.
Shoshana Zuboff, the Harvard University professor emerita whose book The Age of Surveillance Capitalism investigates the way Alphabet and other big-tech companies are reshaping the world, called the document’s revelations “damning.” The community Alphabet sought to build when it launched Sidewalk Labs, she said, was like a “for-profit China” that would “use digital infrastructure to modify and direct social and political behaviour.”
While Sidewalk has since moved away from many of the details in its book, Prof. Zuboff contends that Alphabet tends to “say what needs be said to achieve commercial objectives, while specifically camouflaging their actual corporate strategy.”
[…]
Those choosing to remain anonymous would not be able to access all of the area’s services: Automated taxi services would not be available to anonymous users, and some merchants might be unable to accept cash, the book warns.
The document also describes reputation tools that would lead to a “new currency for community co-operation,” effectively establishing a social credit system. Sidewalk could use these tools to “hold people or businesses accountable” while rewarding good behaviour, such as by rewarding a business’s good customer service with an easier or cheaper renewal process on its licence.
This “accountability system based on personal identity” could also be used to make financial decisions.
“A borrower’s stellar record of past consumer behaviour could make a lender, for instance, more likely to back a risky transaction, perhaps with the interest rates influenced by digital reputation ratings,” it says.
The company wrote that it would own many of the sensors it deployed in the community, foreshadowing a battle over data control that has loomed over the Toronto project.
Facebook has ended its appeal against the UK Information Commissioner’s Office and will pay the outstanding £500,000 fine for breaches of data protection law relating to the Cambridge Analytica scandal.
Prior to today’s announcement, the social network had been appealing against the fine, alleging bias and requesting access to ICO documents related to the regulator’s decision making. The ICO, in turn, was appealing a decision that it should hand over these documents.
The issue for the watchdog was the misuse of UK citizens’ Facebook profile information, specifically the harvesting and subsequent sale of data scraped from their profiles to Cambridge Analytica, the controversial British consulting firm used by US prez Donald Trump’s election campaign.
The app that collected the data was “thisisyourdigitallife”, created by Cambridge developer Aleksandr Kogan. It hoovered up Facebook users’ profiles, dates of birth, current city, photos in which those users were tagged, pages they had liked, posts on their timeline, friends’ lists, email addresses and the content of Facebook messages. The data was then processed in order to create a personality profile of the user.
“Given the way our platform worked at the time,” Zuck has said, “this meant Kogan was able to access tens of millions of their friends’ data”. Facebook has always claimed it learned of the data misuse from news reports, though this has been disputed.
Both sides will now end the legal fight and Facebook will pay the ICO a fine but make no admission of liability or guilt. The money is not kept by the data protection watchdog but goes to the Treasury consolidated fund and both sides will pay their own costs. The ICO spent an eye-watering £2.5m on the Facebook probe.
VP of product Scott Williamson announced on 10 October that “to make GitLab better faster, we need more data on how users are using GitLab”.
GitLab is a web application that runs on Linux, with options for self-hosting or using the company’s cloud service. It is open source, with both free and licensed editions.
Williamson said that while nothing was changing with the free self-hosted Community Edition, the hosted and licensed products would all now “include additional JavaScript snippets (both open source and proprietary) that will interact with both GitLab and possibly third-party SaaS telemetry services (we will be using Pendo)”. The only opt-out was to be support for the Do Not Track browser mechanism.
GitLab customers and even some staff were not pleased. For example, Yorick Peterse, a GitLab staff developer, said telemetry should be opt-in and that the requisite update to the terms of service would break some API usage (because bots do not know how to accept terms of service), adding: “We have plenty of customers who would not be able to use GitLab if it starts tracking data for on-premises installations.”
There is more background in a GitLab issue that concerns adding the identity of the user to the Snowplow analytics service used by GitLab.
“This effectively changes our Snowplow integration from being an anonymous aggregated thing to a thing that tracks user interaction,” engineering manager Lukas Eipert said back in July. “Ethically, I have problems with this and legally this could have a big impact privacy wise (GDPR). I hereby declare my highest degree of objection to this change that I can humanly express.”
On the other hand, GitLab CFO Paul Machle said: “This should not be an opt in or an opt out. It is a condition of using our product. There is an acceptance of terms and the use of this data should be included in that.”
On 23 October, an email was sent to GitLab customers announcing the changes.
Yesterday, however, CEO Sid Sijbrandij put the plans on hold, saying: “Based on considerable feedback from our customers, users, and the broader community, we reversed course the next day and removed those changes before they went into effect. Further, GitLab will commit to not implementing telemetry in our products that sends usage data to a third-party product analytics service.” Sijbrandij also promised a review of what went wrong. “We will put together a new proposal for improving the user experience and share it for feedback,” he said.
Despite this embarrassing backtrack, the incident has demonstrated that GitLab does indeed have an open process, with more internal discussion on view than would be the case with most companies. Nevertheless, the fact that GitLab came so close to using personally identifiable tracking without specific opt-in has tarnished its efforts to appear more community-driven than alternatives like Microsoft-owned GitHub. ®
Google’s Senior Vice President of Devices & Services, Rick Osterloh, broke the news on the official Google blog, saying:
Over the years, Google has made progress with partners in this space with Wear OS and Google Fit, but we see an opportunity to invest even more in Wear OS as well as introduce Made by Google wearable devices into the market. Fitbit has been a true pioneer in the industry and has created engaging products, experiences and a vibrant community of users. By working closely with Fitbit’s team of experts, and bringing together the best AI, software and hardware, we can help spur innovation in wearables and build products to benefit even more people around the world.
Earlier this week, on October 28, a Reuters report surfaced indicating that Google was bidding to purchase Fitbit. It’s a big move, but it’s also one that makes good sense.
Google’s Wear OS wearable platform has been in something of a rut for the last few years. The company introduced the Android Wear to Wear OS rebrand in 2018 to revitalize its branding/image, but the hardware offerings have still been pretty ho-hum. Third-party watches like the Fossil Gen 5 have proven to be quite good, but without a proper “Made by Google” smartwatch and other major players, such as Samsung, ignoring the platform, it’s been left to just sort of exist.
The UK government could use facial recognition to verify the age of Brits online “so long as there is an appropriate concern for privacy,” junior minister for Digital, Culture, Media and Sport Matt Warman said.
The minister was responding to an urgent Parliamentary question directed to Culture Secretary Nicky Morgan about the future of Blighty’s online age-verification system, following her announcement this week that the controversial project had been dropped. He indicated the government is still keen to shield kids from adult material online, one way or another.
“In many ways, this is a technology problem that requires a technology solution,” Warman told the House of Commons on Thursday.
“People have talked about whether facial recognition could be used to verify age, so long as there is an appropriate concern for privacy. All of these are things I hope we will be able to wrap up in the new approach, because they will deliver better results for consumers – child or adult alike.”
The government also managed to spend £2.2m on the aforementioned-and-now-shelved proposal to introduce age-verification checks on netizens viewing online pornography, Warman admitted in his response.
For years I’ve gone back and forth over the practice of obscuring license plates on photos on the internet. License plates are already publicly-viewable things, so what’s the point in obscuring them, right? Well, now I think there actually is a good reason to obscure your license plates in photos because it appears that Google and Facebook are actually reading the plates in photos, and then making the actual license plate alphanumeric sequence searchable. I tested it. It works.
Starting with Google, the way this works is to search for the license plate number using Google Images. That’s it.
In my testing, I started with my own cars that I know have had images of their license plates in Jalopnik articles. For my Nissan Pao, a search of my license plate number brings up an image of my car, from one of my articles, as the first result:
It’s worth noting that the image search results aren’t even trying to differentiate the search term as a license plate; the number sequence has just been tagged to the photo automatically after whatever hidden Google OCR system reads the license plate. This can mean that someone searching a similar sequence of characters could likely end up with a result for your car if enough of those characters match your license plate.
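The mechanics described above can be sketched as a toy search index: an OCR pass tags each photo with the plate text it read, and a query is matched against those tags, including near-matches. The plates, filenames, and matching logic below are invented for illustration; this is not Google's or Facebook's actual pipeline:

```python
# Toy sketch of how OCR-extracted license plate text might be tagged to
# photos and matched against search queries. Plates and filenames are
# made up; real image-search systems are far more sophisticated.

from difflib import SequenceMatcher

# Imagine an OCR pass has already tagged each photo with the plate it read.
photo_tags = {
    "pao_at_the_beach.jpg": "ABC1234",
    "project_car_engine.jpg": "XYZ9876",
    "street_parking.jpg": "ABC1294",  # one character off from ABC1234
}

def search_plates(query, tags, threshold=0.8):
    """Return (photo, plate, score) tuples whose tag is similar to the query."""
    hits = []
    for photo, plate in tags.items():
        score = SequenceMatcher(None, query, plate).ratio()
        if score >= threshold:
            hits.append((photo, plate, score))
    # Best matches first.
    return sorted(hits, key=lambda h: -h[2])

# An exact query finds the tagged car, and a near-match plate surfaces too.
for photo, plate, score in search_plates("ABC1234", photo_tags):
    print(f"{photo}: {plate} (similarity {score:.2f})")
```

Note how the near-match `ABC1294` also crosses the similarity threshold: this is the behavior described above, where searching a similar character sequence can return a result for someone else's car.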
[…]
I just checked a test I did on Facebook earlier today to see if they’re reading and tagging license plates, and, yep, it appears they are:
So, people can type your license plate into Facebook and, if it’s visible in any of your photos, it seems like it’ll show up! Great for you budding stalkers out there!
The takeaway here is that you should just assume your license plate is known and tagged to pictures of your car. Even if you obscure your plate in every image you yourself post, there’s no way to know what images your car and its license plate may be in the background of, meaning if it’s not searchable yet, it likely will be.
I suppose the positive side is that if you see a hit and run or someone’s blocking you in, it’s a lot easier to find out who’s being the jerk. On the negative side, it’s just a reminder that privacy in so many ways is eroding away, and there’s damn little we can do about it.
Today, when you use Wizards Unite or Pokémon Go or any of Niantic’s other apps, your every move is getting documented and stored—up to 13 times a minute, according to the results of a Kotaku investigation. Even players who know that the apps record their location data are usually astonished once they look at just how much they’ve told Niantic about their lives through their footsteps.
For years, users of these technologists’ products—from Google Street View to Pokémon Go—have been questioning how far they’re going with users’ information and whether those users are adequately educated on what they’re giving up and with whom it’s shared. In the process, those technologists have made mistakes, both major and minor, with regards to user privacy. As Niantic summits the world of augmented reality, it’s engineering the future of that big-money field, too. Should what Niantic does with its treasure trove of valuable data remain shrouded in the darkness particular to up-and-coming Silicon Valley darlings, that opacity might become so normalized that users lose any expectation of knowing how they’re being profited from.
Niantic publicly describes itself as a gaming company with an outsized passion for getting gamers outside. Its games, from Ingress to Pokémon Go to Wizards Unite, encourage players to navigate and interact with the real world around them, whether it be tree-lined suburbs, big cities, local landmarks, the Eiffel Tower, strip malls, or statues in the town square. Niantic’s ever-evolving gaming platform closely resembles Google Maps, in part because Niantic spawned from just that.
[…]
At 2019’s GDC, Hanke showed a video titled “Hyper-Reality,” by the media artist Keiichi Matsuda. It’s a dystopian look at a future in which the entire world is slathered with virtual overlays, an assault on the senses that everyone must view through an AR headset if they want to participate in modern society. In the video, the protagonist’s entire field of vision is a spread of neon notifications, apps, and advertisements, all viewed from a seat at the back of a city bus. Their hands swipe across a game they’re playing in augmented reality, while in the background an ad for Starbucks Coffee indicates they won a coupon for a free cup. Push notifications in their periphery indicate three new messages and directions for where to exit the bus. As the protagonist walks through the aisle, where digital “get off now!” signs indicate it’s their stop, and out onto the street, the physical world is annotated with virtual information. The more tasks they accomplish, the more points they receive. The whole world is now one big game. It showed a definitively dystopian vision of a world in which the barriers between IRL and URL have been fully collapsed.
Hanke said that the video made him feel “stressed and nervous.” Calling it a work of “critical design,” he noted that it was meant to question this dystopian future for AR, “a world where you’re tracked everywhere you go, where giant companies know everything about you, your identity is constantly at stake, and the world itself is noisy, and busy and plastered with distractions.”
But when a path appeared in front of the video’s protagonist showing them where to walk, Hanke’s response was: “That looks helpful.”
“Some people would say AR is a bad thing because we’ve seen this vision of how bad it can be,” Hanke said. “The point I want to make to you all is, it doesn’t have to be that way.” He showed an image of the Ferry Building, the 120-year-old piece of classical revival architecture in San Francisco where the company is currently headquartered. Just like in the video, it was overlaid with augmented reality windows showing the building’s history, a public transit schedule, and tabs for nearby restaurants. Hanke described a world where people can better navigate public transit and understand their surroundings because of digital mapping initiatives like Niantic. He talked about the possibility of hologram tour guides in San Francisco, and how they’d rely on a digital map to navigate their surroundings, and about designing shared experiences of Pokémon games in a Pokémon-augmented world.
[…]
Since its 2016 release, Pokémon Go has netted over $2.3 billion. In it, players collect items from PokeStops—also real-life locations and landmarks—so they can catch and collect Pokémon, which spawn around them. Almost immediately, Pokémon Go sparked its own privacy controversy, also blamed on a bug, which involved users giving Niantic a huge number of permissions: contacts, location, storage, camera and, for iPhone users, full Google account access, which was not integral to gameplay. Minnesota senator Al Franken penned a strongly worded letter to Niantic about it, expressing concern “about the extent to which Niantic may be unnecessarily collecting, using, and sharing a wide range of users’ personal information without their appropriate consent.” Niantic said that the “account creation process on iOS erroneously requests full access permission,” adding that Pokémon Go only received users’ IDs and email addresses.
[…]
Players give Wizards Unite permission to track their movement using a combination of GPS, Wi-Fi, and mobile cell tower triangulation. To understand the extent of this location data, Kotaku asked for data from European players who had all filed personal information requests to Niantic under the GDPR, the European digital privacy legislation designed to give EU citizens more control over their personal data. Niantic sent these players all the data it had on them, which the players then shared with Kotaku.
The files we received contained detailed information about the lives of these players: the number of calories they likely burned during a given session, the distance they traveled, the promotions they engaged with. Crucially, each request also contained a large file of timestamped location data, as latitudes and longitudes.
In total, Kotaku analyzed more than 25,000 location records voluntarily shared with us by 10 players of Niantic games. On average, we found that Niantic kept about three location records per minute of gameplay of Wizards Unite, nearly twice as many as it did with Pokémon Go. For one player, Niantic had at least one location record taken during nearly every hour of the day, suggesting that the game was collecting data and sharing it with Niantic even when the player was not playing.
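The kind of analysis described above is straightforward to reproduce from a GDPR export: bucket the timestamped records by minute and count each bucket. Below is a minimal sketch; the record layout (ISO timestamp, latitude, longitude) and all coordinates are invented for illustration and do not reflect Niantic’s actual export format.

```python
from datetime import datetime

# Hypothetical sample of exported records: (ISO timestamp, lat, lon).
# The real Niantic GDPR export uses a different layout.
records = [
    ("2019-08-01T18:00:05", 40.7128, -74.0060),
    ("2019-08-01T18:00:25", 40.7129, -74.0061),
    ("2019-08-01T18:00:45", 40.7130, -74.0062),
    ("2019-08-01T18:01:10", 40.7131, -74.0063),
    ("2019-08-01T18:01:40", 40.7132, -74.0064),
    ("2019-08-01T18:02:15", 40.7133, -74.0065),
]

def records_per_minute(records):
    """Bucket timestamped location records by minute and count each bucket."""
    counts = {}
    for ts, _lat, _lon in records:
        minute = datetime.fromisoformat(ts).replace(second=0, microsecond=0)
        counts[minute] = counts.get(minute, 0) + 1
    return counts

for minute, n in sorted(records_per_minute(records).items()):
    print(minute.isoformat(), n)
```

Averaging those per-minute counts over a play session gives the record-density figure cited above, and gaps or clusters in the buckets reveal when the app was phoning home outside of active play.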
When Kotaku first asked Niantic why Wizards Unite was collecting location data even while the game was not actively being played, its first response was that we must be mistaken, since the game, it said, did not collect data while backgrounded. After we provided Niantic with more information about that player, it got back to us a few days later to let us know that its engineering team “did identify a bug in the Android version of the client code that led it to continue to ping our servers intermittently when the app was still open but had been backgrounded.” The bug, Niantic said, has now been fixed.
Because the location data collected by Wizards Unite and sent to Niantic is so granular, sometimes up to 13 location records a minute, it is possible to discern individual patterns of user behavior as well as intimate details about a player’s life.
[…]
Niantic is far from the only company collecting this sort of data. Last year, the New York Times published an exposé on how over 75 companies receive pinpoint-accurate, anonymous location data from phone apps on over 200 million devices. Sometimes, these companies tracked users’ locations over 14,000 times a day. The result was always the same: Even though users had signed away their location data to these companies by agreeing to their user agreements, they generally had no idea that companies were taking such exhaustive notes on what kind of person they were, where they’d been, where they were likely to go next, and whether they’d buy something there.
That Niantic is yet another company that can infer this type of mundane personal information may not be, in itself, surprising. Credit card companies, email providers, cellular services, and a variety of data brokers all have access to your personal information in increasingly opaque ways. Remember when Target figured out that a high school girl was pregnant before her family did?
It’s important to note that the personal data that players requested from Niantic and voluntarily shared with Kotaku is, according to Niantic, not something that a third party could buy from them, or otherwise be allowed to see. “Niantic does not share individual player data with third party sponsored location partners,” a representative said, adding that it uses “additional mechanisms to process the data so that it cannot be connected to an individual.”
Niantic’s Kawai told Kotaku that the anonymized data that Niantic shares with third parties is only in the form of “aggregated stats,” such as “how many people have had access or went to those in-game locations and how many actions people take in those in-game locations, how many PokeStop spins to get items happened on that day and… what unique number of people went to that location.”
“We don’t go any further than that,” he said.
The idea that data can successfully be anonymized has long been a contentious one. In July, researchers at Imperial College London were able to accurately reidentify 99.98 percent of Americans in an “anonymized” dataset. And in 2018, a New York Times investigation found that, when provided raw anonymized location data, companies could identify individuals with or without their consent. In fact, according to experts, it can take just four timestamped location records to specifically identify an individual from a collection of latitudes and longitudes that they have visited.
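The four-points claim is easy to illustrate: because few people share the same set of time-and-place visits, a handful of casually observed points can single out one trace in an “anonymized” dataset. The sketch below uses an invented toy dataset with coordinates rounded to roughly 100 meters; all identifiers and points are hypothetical.

```python
# Hypothetical "anonymized" dataset: trace id -> set of (hour, lat, lon)
# visits. Everything here is invented for illustration.
traces = {
    "user_a": {(9, 40.71, -74.00), (12, 40.75, -73.98),
               (18, 40.71, -74.00), (20, 40.73, -73.99)},
    "user_b": {(9, 40.71, -74.00), (13, 40.76, -73.97),
               (18, 40.70, -74.01), (21, 40.74, -73.98)},
    "user_c": {(8, 40.72, -74.01), (12, 40.75, -73.98),
               (17, 40.71, -74.00), (20, 40.73, -73.99)},
}

def matching_traces(observations, traces):
    """Return ids of traces that contain every observed (time, place) point."""
    return [tid for tid, visits in traces.items() if observations <= visits]

# Four observed points -- say home in the morning, lunch, home in the
# evening, and a gym -- narrow three candidate traces down to one.
observed = {(9, 40.71, -74.00), (12, 40.75, -73.98),
            (18, 40.71, -74.00), (20, 40.73, -73.99)}
print(matching_traces(observed, traces))  # only "user_a" contains all four
```

Each extra point roughly divides the candidate pool, which is why stripping names from a location dataset does little to protect the people in it.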
[…]
Niantic makes a staggering amount of money off in-game microtransactions, a reported $1.8 billion in Pokémon Go’s first two years. It also makes money from sponsorships. By late 2017, there were over 35,000 sponsored PokeStops, which players visited over 500 million times. Hanke described foot traffic as the “holy grail of retail businesses” in a 2017 talk to the Mobile World Congress. Of those sponsored stops, 13,000 were Starbucks locations.
[…]
“We have always been transparent about this product and feel it is a much better experience for our players than the kind of video and text ads frequently deployed in other mobile games,” Hanke told Kotaku. He then shared a link to an Ad Age article announcing Pokémon Go’s sponsored locations and detailing its “cost per visit” business model.
Big-money tech companies rarely make money in just one or two ways, and often inconspicuously employ money-making strategies that may be less palatable to privacy-minded consumers. Mobile app companies are notorious for this. One 2017 Oxford study, for example, analyzed 1 million smartphone apps and determined that the median Google Play Store app can share users’ behavioral data with 10 third parties, while one in five can share it with over 20. “Freemium” mobile apps can earn big revenue from sharing data with advertisers—and it’s all completely opaque to users, as a Buzzfeed News report explained in 2018.
[Graph (Image: Kotaku): the number of location records captured per minute for one Harry Potter: Wizards Unite player, over the span of a few hours.]
Advertising market research company Emarketer projected that advertisers will spend $29 billion on location-targeted advertising, also referred to as “geoconquesting,” this year. Marketers target and tailor ads for app users in a specific location in real-time, segment a potential audience for an ad by location, learn about consumers based on where they were before they bought something, and connect online ads to offline purchases using location data—another manifestation of “ubiquitous computing.” One of the biggest location-targeted ad companies, GroundTruth, taps data from 120 million unique monthly users to drive people to businesses like Taco Bell, where it recently took credit for 170,000 visits after a location-targeted ad campaign.
[…]
Niantic said it is not in the business of selling user location data. But it will send its users to you. Wizards Unite recently partnered with Simon Malls, which owns over 200 shopping centers, to add “multiple sponsored Inns and Fortresses” at each location, “giving players more XP and more spell energy than at any other non-sponsored location in the U.S.”
[…]
If the goal is to unite the physical with the digital, insights gleaned from how long users loiter outside a Coach store and how long they might look at a Coach Instagram ad could be massively useful to these waning mall brands. Uniting these worlds for a field trip around Tokyo is one thing; uniting them to consolidate digital and physical ad profiles is another.
“This is a hot topic in mall operation—tracking the motion of people within a mall, what stores they’re going to, how long they’re staying,” said Ron Merriman, a theme park business strategist based in Shanghai (who, he noted after we contacted him for this story, happened to go to business school with Hanke). Merriman says that tracking users in malls, aquariums, and theme parks to optimize merchandising, user experiences, and ad targeting is becoming the norm where he lives in Asia. Retailers polled by Emarketer in late 2018 planned on investing more in proximity and location-based marketing than in other emerging, hot-topic technologies like AI.