Clearview has scraped every social media site it could, illegally and in violation of their terms of service. It now has all your pictures in a massive database (who knows how secure that is?) hooked up to a face-recognition AI, and it’s selling access to cops – and who knows who else.

What if a stranger could snap your picture on the sidewalk then use an app to quickly discover your name, address and other details? A startup called Clearview AI has made that possible, and its app is currently being used by hundreds of law enforcement agencies in the US, including the FBI, says a Saturday report in The New York Times.

The app, says the Times, works by comparing a photo to a database of more than 3 billion pictures that Clearview says it’s scraped off Facebook, Venmo, YouTube and other sites. It then serves up matches, along with links to the sites where those database photos originally appeared. A name might easily be unearthed, and from there other info could be dug up online.

The size of the Clearview database dwarfs others in use by law enforcement. The FBI’s own database, which taps passport and driver’s license photos, is one of the largest, with over 641 million images of US citizens.

The Clearview app isn’t currently available to the public, but the Times says police officers and Clearview investors think it will be in the future.

The startup said in a statement Tuesday that its “technology is intended only for use by law enforcement and security personnel. It is not intended for use by the general public.”

Source: Clearview app lets strangers find your name, info with snap of a photo, report says – CNET

Using the system involves uploading photos to Clearview AI’s servers, and it’s unclear how secure these are. Although Clearview AI says its customer-support employees will not look at uploaded photos, the company appeared to be aware that Kashmir Hill (the Times journalist who wrote the piece) was having police search for her face as part of her reporting:

While the company was dodging me, it was also monitoring me. At my request, a number of police officers had run my photo through the Clearview app. They soon received phone calls from company representatives asking if they were talking to the media — a sign that Clearview has the ability and, in this case, the appetite to monitor whom law enforcement is searching for.

The Times reports that the system appears to have gone viral with police departments, with over 600 already signed up. Although there’s been no independent verification of its accuracy, Hill says the system was able to identify photos of her even when she covered the lower half of her face, and that it managed to find photographs of her that she’d never seen before.

One expert quoted by the Times said that the amount of money involved in these systems means they need to be banned before their abuse becomes more widespread. “We’ve relied on industry efforts to self-police and not embrace such a risky technology, but now those dams are breaking because there is so much money on the table,” said Woodrow Hartzog, a professor of law and computer science at Northeastern University. “I don’t see a future where we harness the benefits of face recognition technology without the crippling abuse of the surveillance that comes with it. The only way to stop it is to ban it.”

Source: The Verge

So Clearview already has you, even though its scraping violates TOS. Here’s how to stop the next guy from getting you via Facebook – maybe.

It should come as little surprise that any content you offer to the web for public consumption has the potential to be scraped and misused by anyone clever enough to do it. And while that doesn’t make this weekend’s report from The New York Times any less damning, it’s a great reminder about how important it is to really go through the settings for your various social networks and limit how your content is, or can be, accessed by anyone.

I won’t get too deep into the Times’ report; it’s worth reading on its own, since it involves a company (Clearview AI) scraping more than three billion images from millions of websites, including Facebook, and creating a facial-recognition app that does a pretty solid job of identifying people using images from this massive database.

Even though Clearview’s scraping techniques technically violate the terms of service on a number of websites, that hasn’t stopped the company from acquiring images en masse. And it keeps whatever it finds, which means that turning all your online data private isn’t going to help if Clearview has already scanned and grabbed your photos.

Still, something is better than nothing. On Facebook, likely the largest stash of your images, you’re going to want to visit Settings > Privacy and look for the option described: “Do you want search engines outside of Facebook to link to your profile?”

Turn that off, and Clearview won’t be able to grab your images. That’s not the setting I would have expected to use, I confess, which makes me want to go through all of my social networks and rethink how the information I share with them flows out to the greater web.

Lock down your Facebook even more with these settings

Since we’re already here, it’s worth spending a few minutes wading through Facebook’s settings and making sure as much of your content is set to friends-only as possible. That includes changing “Who can see your future posts” to “Friends,” using the “Limit Past Posts” option to change everything you’ve previously posted to friends-only, and making sure that only you can see your friends list – to prevent any potential scraping and linking that some third party might attempt. Similarly, make sure only your friends (or friends of friends) can look you up via your email address or phone number. (You never know!)

You should then visit the Timeline and Tagging settings page and make a few more changes. That includes only allowing friends to see what other people post on your timeline, as well as posts you’re tagged in. And because I’m a bit sensitive about all the crap people tag me in on Facebook, I’d turn on the “Review” options, too. That won’t keep your account from being scraped, but it’s a great way to exert more control over your timeline.


Finally, even though it also doesn’t prevent companies from scraping your account, pull up the Public posts section of Facebook’s settings page and limit who is allowed to follow you (if you desire). You should also restrict who can comment on or like your public information, like posts or other details about your life you share openly on the service.


Once I fix Facebook, then what?

Here’s the annoying part. Were I you, I’d take an afternoon or evening and write out all the different places I typically share snippets of my life online. For most people, that’s probably a handful of social services: Facebook, Instagram, Twitter, YouTube, Flickr, et cetera.

Once you’ve created your list, I’d dig deep into the settings of each service and see what options you have, if any, for limiting the availability of your content. This might run contrary to how you use the service – if you’re trying to gain lots of Instagram followers, for example, locking your profile to “private” and requiring potential followers to request access might slow your attempts to become the next big Insta-star. However, it should also prevent anyone with a crafty scraping utility from mass-downloading your photos (and associating them with you, either through some fancy facial-recognition tech or by linking them to your account).

Source: Change These Facebook Settings to Protect Your Photos From Facial Recognition Software

BlackVue dashcams show anyone where you are in real time – and where you’ve been in the past

An app that is supposed to be a fun activity for dashcam users to broadcast their camera feeds and drives is actually allowing people to scrape and store the real-time location of drivers across the world.

BlackVue is a dashcam company with its own social network. With a small, internet-connected dashcam installed inside their vehicle, BlackVue users can receive alerts when their camera detects an unusual event such as someone colliding with their parked car. Customers can also allow others to tune into their camera’s feed, letting others “vicariously experience the excitement and pleasure of driving all over the world,” a message displayed inside the app reads.

Users are invited to upload footage of their BlackVue camera spotting people crashing into their cars or other mishaps with the #CaughtOnBlackVue hashtag. It’s kind of like Amazon’s Ring cameras, but for cars. BlackVue exhibited at CES earlier this month, and was previously featured on Innovations with Ed Begley Jr. on the History Channel.

But what BlackVue’s app doesn’t make clear is that it is possible to pull and store users’ GPS locations in real-time over days or even weeks. Motherboard was able to track the movements of some of BlackVue’s customers in the United States.

The news highlights privacy issues that some BlackVue customers and other dashcam users may not be aware of, and, more generally, the potential dangers of adding an internet- and GPS-enabled device to your vehicle. It also shows how developers may have one use case in mind for an app while people discover others: although BlackVue wanted to create an entertaining app where users could tap into each other’s feeds, it may not have realized that it would be trivially easy to track its customers’ movements in granular detail, at scale, and over time.

BlackVue is another example of how surveillance products nominally intended to protect a user can be designed in such a way that the user ends up being spied on, too.

“I don’t think people understand the risk,” Lee Heath, an information security professional and BlackVue user told Motherboard. “I knew about some of the cloud features which I wanted. You can have it automatically connect and upload when events happen. But I had no idea about the sharing” before receiving the device as a gift, he added.

Ordinarily, BlackVue lets anyone create an account and then view a map of cameras that are broadcasting their location and live feed. This broadcasting is not enabled by default, and users have to select the option to do so when setting up or configuring their own camera. Motherboard tuned into live feeds from users in Hong Kong, China, Russia, the U.K., Germany, and elsewhere. BlackVue spokesperson Jeremie Sinic told Motherboard in an email that the users on the map represent only a tiny fraction of BlackVue’s overall customers.

But the actual GPS data that drives the map is available and publicly accessible.

A screenshot of the location data of one BlackVue user that Motherboard tracked throughout New York. Motherboard has heavily obfuscated the data to protect the individual’s privacy. Image: Motherboard

By reverse-engineering the iOS version of the BlackVue app, Motherboard was able to write scripts that pull the GPS locations of BlackVue users over a week-long period and store the coordinates along with other information, like each user’s unique identifier. One script could collect the location data of every BlackVue user who had mapping enabled on the eastern half of the United States every two minutes. Motherboard collected data on dozens of customers.
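Motherboard didn’t publish its scripts, but the polling loop it describes is straightforward to sketch. Everything below is hypothetical: `fetch_map` stands in for the app’s undocumented map endpoint, and the field names are invented, since the real request details were withheld.

```python
import time
from collections import defaultdict

def track_users(fetch_map, bbox, polls, interval=0):
    """Poll a map endpoint `polls` times and accumulate each user's trail.

    `fetch_map(bbox)` is assumed to return the users currently broadcasting
    inside a bounding box, each as a dict with an id, coordinates, and a
    timestamp. The real script reportedly waited two minutes between polls
    (interval=120).
    """
    trails = defaultdict(list)  # user_id -> [(timestamp, lat, lon), ...]
    for _ in range(polls):
        for ping in fetch_map(bbox):
            trails[ping["id"]].append((ping["ts"], ping["lat"], ping["lon"]))
        time.sleep(interval)
    return trails
```

The point is how little machinery is needed: once an endpoint hands out user IDs with coordinates, turning momentary positions into long-term movement histories is a dozen lines of code.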

With that data, we were able to build a picture of several BlackVue users’ daily routines: one drove around Manhattan during the day, perhaps as a rideshare driver, before then leaving for Queens in the evening. Another BlackVue user regularly drove around Brooklyn, before parking on a specific block in Queens overnight. The user did this for several different nights, suggesting this may be where the owner lives or stores their vehicle. A third showed someone driving a truck all over South Carolina.

Some customers may use BlackVue as part of a fleet of vehicles; an employer wanting to keep tabs on their delivery trucks as they drive around, for instance. But BlackVue also markets its products to ordinary consumers who want to protect their cars.

A screenshot of Motherboard accessing someone’s public live feed as the user is driving in public away from their apparent home. Motherboard has redacted the user information to protect individual privacy. Image: Motherboard

BlackVue’s Sinic said that collecting GPS coordinates of multiple users over an extended period of time is not supposed to be possible.

“Our developers have updated the security measures following your report from yesterday that I forwarded,” Sinic said. After this, several of Motherboard’s web requests that previously provided user data stopped working.

In 2018 the company did make some privacy-related changes to its app, meaning users were not broadcasting their camera feeds by default.

“I think BlackVue has decent ideas as far as leaving off by default but allows people to put themselves at risk without understanding,” Heath, the BlackVue user, said.

Motherboard has deleted all of the data collected to preserve individuals’ privacy.

Source: This App Lets Us See Everywhere People Drive – VICE

Skype and Cortana audio listened in on by workers in China with ‘no security measures’

A Microsoft programme to transcribe and vet audio from Skype and Cortana, its voice assistant, ran for years with “no security measures”, according to a former contractor who says he reviewed thousands of potentially sensitive recordings on his personal laptop from his home in Beijing over the two years he worked for the company.

The recordings, covering both deliberate and accidental activations of the voice assistant as well as some Skype phone calls, were accessed by Microsoft workers simply through a web app running in Google’s Chrome browser, on their personal laptops, over the Chinese internet, according to the contractor.

Workers had no cybersecurity help to protect the data from criminal or state interference, and were even instructed to do the work using new Microsoft accounts all with the same password, for ease of management, the former contractor said. Employee vetting was practically nonexistent, he added.

“There were no security measures, I don’t even remember them doing proper KYC [know your customer] on me. I think they just took my Chinese bank account details,” he told the Guardian.

While the grader began by working in an office, he said the contractor that employed him “after a while allowed me to do it from home in Beijing. I judged British English (because I’m British), so I listened to people who had their Microsoft device set to British English, and I had access to all of this from my home laptop with a simple username and password login.” Both username and password were emailed to new contractors in plaintext, he said, with the former following a simple schema and the latter being the same for every employee who joined in any given year.

“They just give me a login over email and I will then have access to Cortana recordings. I could then hypothetically share this login with anyone,” the contractor said. “I heard all kinds of unusual conversations, including what could have been domestic violence. It sounds a bit crazy now, after educating myself on computer security, that they gave me the URL, a username and password sent over email.”

As well as the risks of a rogue employee saving user data themselves or accessing voice recordings on a compromised laptop, Microsoft’s decision to outsource some of the work vetting English recordings to companies based in Beijing raises the additional prospect of the Chinese state gaining access to recordings. “Living in China, working in China, you’re already compromised with nearly everything,” the contractor said. “I never really thought about it.”

Source: Skype audio graded by workers in China with ‘no security measures’ | Technology | The Guardian

CheckPeople: why is a 22GB database containing 56 million US folks’ aggregated personal details sitting on the open internet using a Chinese IP address?

A database containing the personal details of 56.25m US residents – from names and home addresses to phone numbers and ages – has been found on the public internet, served from a computer with a Chinese IP address, bizarrely enough.

The information silo appears to belong to Florida-based CheckPeople.com, which is a typical people-finder website: for a fee, you can enter someone’s name, and it will look up their current and past addresses, phone numbers, email addresses, names of relatives, and even criminal records in some cases, all presumably gathered from public records.

However, all of this information is not only sitting in one place for spammers, miscreants, and other netizens to download in bulk, but it’s being served from an IP address associated with Alibaba’s web hosting wing in Hangzhou, east China, for reasons unknown. It’s a perfect illustration that not only is this sort of personal information in circulation, but it’s also in the hands of foreign adversaries.

It just goes to show how haphazardly people’s privacy is treated these days.

A white-hat hacker operating under the handle Lynx discovered the trove online, and tipped off The Register. He told us he found the 22GB database exposed on the internet, including metadata that links the collection to CheckPeople.com. We have withheld further details of the security blunder for privacy protection reasons.

The repository’s contents are likely scraped from public records, though together provide rather detailed profiles on tens of millions of folks in America. Basically, CheckPeople.com has done the hard work of aggregating public personal records, and this exposed NoSQL database makes that info even easier to crawl and process.

Source: Why is a 22GB database containing 56 million US folks’ personal details sitting on the open internet using a Chinese IP address? Seriously, why? • The Register

Lawsuit against cinema for refusing cash – and thus slurping private data

Michiel Jonker from Arnhem has sued a cinema that, since moving location, refuses to accept cash at the register. All payments have to be made by PIN (debit card). Jonker argues that this forces visitors to let the cinema process their personal data.

He tried something similar in 2018, but that case was dismissed when the Dutch personal data authority decided that no one is required to accept cash as legal tender.

Jonker now argues that acceptance of cash should be required if the payment data can be used to profile his movie preferences afterwards.

Good luck to him. I agree that cash is legal tender, and the move to a cash-free society is a privacy nightmare and potentially disastrous – see Hong Kong, for example.

Source: Rechtszaak tegen weigering van contant geld door bioscoop – Emerce

Amazon fired four workers who secretly snooped on Ring doorbell camera footage

Amazon’s Ring home security camera biz says it has fired multiple employees caught covertly watching video feeds from customer devices.

The admission came in a letter [PDF] sent in response to questions raised by US Senators critical of Ring’s privacy practices.

Ring recounted how, on four separate occasions, workers were let go for overstepping their access privileges and poring over customer video files and other data inappropriately.

“Over the last four years, Ring has received four complaints or inquiries regarding a team member’s access to Ring video data,” the gizmo flinger wrote.

“Although each of the individuals involved in these incidents was authorized to view video data, the attempted access to that data exceeded what was necessary for their job functions.

“In each instance, once Ring was made aware of the alleged conduct, Ring promptly investigated the incident, and after determining that the individual violated company policy, terminated the individual.”

This comes as Amazon attempts to justify its internal policies, particularly employee access to user video information for support and research-and-development purposes.

Source: Ring of fired: Amazon axes four workers who secretly snooped on netizens’ surveillance camera footage • The Register

DHS Plan to Collect DNA From Migrant Detainees Will Begin Soon – because centralised databases with personally sensitive data in them are a great idea. Just ask the Jews how useful they were during WWII

The Trump administration’s plan to collect DNA evidence from migrants detained in U.S. Customs and Borders Protection (CBP) and Immigration and Customs Enforcement (ICE) facilities will commence soon in the form of a 90-day pilot program in Detroit and Southwest Texas, CNN reported on Monday.

News of the plan first emerged in October, when the Department of Homeland Security told reporters that it wanted to collect DNA from migrants to detect “fraudulent family units,” including refugees applying for asylum at U.S. ports of entry. ICE started using DNA tests to screen asylum seekers at the border last year over similar concerns, claiming that the tests were designed to fight human traffickers. The tests will apply to those detained both temporarily and for longer periods of time, covering nearly all people held by immigration officials.

DHS announced the pilot program in a privacy assessment posted to its website on Monday. Per CNN, the pilot is a legal necessity before the agency revokes rules enacted in 2010 that exempt “migrants who weren’t facing criminal charges or those pending deportation proceedings” from the DNA Fingerprint Act of 2005, which will apply the program nationally. The pilot will involve U.S. Border Patrol agents collecting DNA from individuals aged 14-79 who are arrested and processed, as well as customs officers collecting DNA from individuals subject to continued detention or further proceedings.

According to the privacy assessment, U.S. citizens and permanent residents “who are being arrested or facing criminal charges” may have DNA collected by CBP or ICE personnel. All collected DNA will be sent to the FBI and stored in its Combined DNA Index System (CODIS), a set of national genetic information databases that includes forensic data, missing persons, and convicts, where it would be held for as long as the government sees fit.

Those who refuse to submit to DNA testing could face class A misdemeanor charges, the DHS wrote.

DHS acknowledged that because it has to mail the DNA samples to the FBI for processing and comparison against CODIS entries, it is unlikely that agents will be able to use the DNA for “public safety or investigative purposes prior to either an individual’s removal to his or her home country, release into the interior of the United States, or transfer to another federal agency.” ACLU attorney Stephen Kang told the New York Times that DHS appeared to be creating a “DNA bank of immigrants that have come through custody for no clear reason,” raising “a lot of very serious, practical concerns, I think, and real questions about coercion.”

The Times noted that last year, Border Patrol law enforcement directorate chief Brian Hastings wrote that even after policies and procedures were implemented, Border Patrol agents remained “not currently trained on DNA collection measures, health and safety precautions, or the appropriate handling of DNA samples for processing.”

U.S. immigration authorities held a record number of children over the fiscal year that ended in September 2019, with some 76,020 minors without their parents present detained. According to ICE, over 41,000 people were in DHS custody at the end of 2019 (in mid-2019, the number shot to over 55,000).

“That kind of mass collection alters the purpose of DNA collection from one of criminal investigation basically to population surveillance, which is basically contrary to our basic notions of a free, trusting, autonomous society,” ACLU Speech, Privacy, and Technology Project staff attorney Vera Eidelman told the Times last year.

Source: DHS Plan to Collect DNA From Migrant Detainees Will Begin Soon

Twelve Million Phones, One Dataset (no, not your phone companies’), Zero Privacy – The New York Times

Every minute of every day, everywhere on the planet, dozens of companies — largely unregulated, little scrutinized — are logging the movements of tens of millions of people with mobile phones and storing the information in gigantic data files. The Times Privacy Project obtained one such file, by far the largest and most sensitive ever to be reviewed by journalists. It holds more than 50 billion location pings from the phones of more than 12 million Americans as they moved through several major cities, including Washington, New York, San Francisco and Los Angeles.

Each piece of information in this file represents the precise location of a single smartphone over a period of several months in 2016 and 2017. The data was provided to Times Opinion by sources who asked to remain anonymous because they were not authorized to share it and could face severe penalties for doing so. The sources of the information said they had grown alarmed about how it might be abused and urgently wanted to inform the public and lawmakers.


After spending months sifting through the data, tracking the movements of people across the country and speaking with dozens of data companies, technologists, lawyers and academics who study this field, we feel the same sense of alarm. In the cities that the data file covers, it tracks people from nearly every neighborhood and block, whether they live in mobile homes in Alexandria, Va., or luxury towers in Manhattan.

One search turned up more than a dozen people visiting the Playboy Mansion, some overnight. Without much effort we spotted visitors to the estates of Johnny Depp, Tiger Woods and Arnold Schwarzenegger, connecting the devices’ owners to the residences indefinitely.

If you lived in one of the cities the dataset covers and use apps that share your location — anything from weather apps to local news apps to coupon savers — you could be in there, too.

If you could see the full trove, you might never use your phone the same way again.

[Figure: A typical day at Grand Central Terminal in New York City. Satellite imagery: Microsoft]

The data reviewed by Times Opinion didn’t come from a telecom or giant tech company, nor did it come from a governmental surveillance operation. It originated from a location data company, one of dozens quietly collecting precise movements using software slipped onto mobile phone apps. You’ve probably never heard of most of the companies — and yet to anyone who has access to this data, your life is an open book. They can see the places you go every moment of the day, whom you meet with or spend the night with, where you pray, whether you visit a methadone clinic, a psychiatrist’s office or a massage parlor.

[…]

The companies that collect all this information on your movements justify their business on the basis of three claims: People consent to be tracked, the data is anonymous and the data is secure.

None of those claims hold up, based on the file we’ve obtained and our review of company practices.

Yes, the location data contains billions of data points with no identifiable information like names or email addresses. But it’s child’s play to connect real names to the dots that appear on the maps.

[…]

In most cases, ascertaining a home location and an office location was enough to identify a person. Consider your daily commute: Would any other smartphone travel directly between your house and your office every day?
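The re-identification logic the Times describes is simple enough to sketch: a device’s most common overnight location is probably home, and its most common working-hours location is probably the office. A toy version, assuming pings have already been reduced to (hour-of-day, place) pairs (the cutoff hours here are arbitrary choices, not anything from the article):

```python
from collections import Counter

def infer_home_work(pings):
    """Guess home and work from (hour_of_day, place) pings.

    home = most frequent location between 22:00 and 06:00,
    work = most frequent location between 09:00 and 17:00.
    Returns (home, work); either may be None if no pings fall in the window.
    """
    night = Counter(place for hour, place in pings if hour < 6 or hour >= 22)
    day = Counter(place for hour, place in pings if 9 <= hour < 17)
    home = night.most_common(1)[0][0] if night else None
    work = day.most_common(1)[0][0] if day else None
    return home, work
```

From that (home, work) pair, public records such as property and employer databases usually narrow the candidate pool to one person, which is exactly the attack the article demonstrates.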

Describing location data as anonymous is “a completely false claim” that has been debunked in multiple studies, Paul Ohm, a law professor and privacy researcher at the Georgetown University Law Center, told us. “Really precise, longitudinal geolocation information is absolutely impossible to anonymize.”

“D.N.A.,” he added, “is probably the only thing that’s harder to anonymize than precise geolocation information.”


Yet companies continue to claim that the data are anonymous. In marketing materials and at trade conferences, anonymity is a major selling point — key to allaying concerns over such invasive monitoring.

To evaluate the companies’ claims, we turned most of our attention to identifying people in positions of power. With the help of publicly available information, like home addresses, we easily identified and then tracked scores of notables. We followed military officials with security clearances as they drove home at night. We tracked law enforcement officers as they took their kids to school. We watched high-powered lawyers (and their guests) as they traveled from private jets to vacation properties. We did not name any of the people we identified without their permission.

The data set is large enough that it surely points to scandal and crime, but our purpose wasn’t to dig up dirt. We wanted to document the risk of underregulated surveillance.

Watching dots move across a map sometimes revealed hints of faltering marriages, evidence of drug addiction, records of visits to psychological facilities.

Connecting a sanitized ping to an actual human in time and place could feel like reading someone else’s diary.

[…]

The inauguration weekend yielded a trove of personal stories and experiences: elite attendees at presidential ceremonies, religious observers at church services, supporters assembling across the National Mall — all surveilled and recorded permanently in rigorous detail.

Protesters were tracked just as rigorously. After the pings of Trump supporters, basking in victory, vanished from the National Mall on Friday evening, they were replaced hours later by those of participants in the Women’s March, as a crowd of nearly half a million descended on the capital. Examining just a photo from the event, you might be hard-pressed to tie a face to a name. But in our data, pings at the protest connected to clear trails through the data, documenting the lives of protesters in the months before and after the protest, including where they lived and worked.

[…]

Inauguration Day weekend was marked by other protests — and riots. Hundreds of protesters, some in black hoods and masks, gathered north of the National Mall that Friday, eventually setting fire to a limousine near Franklin Square. The data documented those rioters, too. Filtering the data to that precise time and location led us to the doorsteps of some who were there. Police were present as well, many with faces obscured by riot gear. The data led us to the homes of at least two police officers who had been at the scene.

As revealing as our searches of Washington were, we were relying on just one slice of data, sourced from one company, focused on one city, covering less than one year. Location data companies collect orders of magnitude more information every day than the totality of what Times Opinion received.

Data firms also typically draw on other sources of information that we didn’t use. We lacked the mobile advertising IDs or other identifiers that advertisers often combine with demographic information like home ZIP codes, age, gender, even phone numbers and emails to create detailed audience profiles used in targeted advertising. When datasets are combined, privacy risks can be amplified. Whatever protections existed in the location dataset can crumble with the addition of only one or two other sources.
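The combining step itself is trivial, which is part of the danger. A toy illustration (all keys and field names here are invented): once two datasets share an identifier such as a mobile advertising ID, merging an “anonymous” location trail with a demographic profile is a dictionary lookup.

```python
def combine(location_trails, ad_profiles):
    """Merge a trail dataset with a profile dataset keyed by the same
    advertising ID. Neither input names anyone, but the merged dossier
    (trail + ZIP + age + contact info) is far more identifying than
    either dataset alone."""
    dossiers = {}
    for ad_id, trail in location_trails.items():
        dossiers[ad_id] = {"trail": trail, **ad_profiles.get(ad_id, {})}
    return dossiers
```

This is why protections built into any single dataset are fragile: they assume the dataset is the only one an attacker holds.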

There are dozens of companies profiting off such data daily across the world — by collecting it directly from smartphones, creating new technology to better capture the data or creating audience profiles for targeted advertising.

The full collection of companies can feel dizzying, as it’s constantly changing and seems impossible to pin down. Many use technical and nuanced language that may be confusing to average smartphone users.

While many of them have been involved in the business of tracking us for years, the companies themselves are unfamiliar to most Americans. (Companies can work with data derived from GPS sensors, Bluetooth beacons and other sources. Not all companies in the location data business collect, buy, sell or work with granular location data.)

[Graphic: A selection of companies working in the location data business. Sources: MightySignal, LUMA Partners and AppFigures.]

Location data companies generally downplay the risks of collecting such revealing information at scale. Many also say they’re not very concerned about potential regulation or software updates that could make it more difficult to collect location data.

[…]

Does it really matter that your information isn’t actually anonymous? Location data companies argue that your data is safe — that it poses no real risk because it’s stored on guarded servers. This assurance has been undermined by the parade of publicly reported data breaches — to say nothing of breaches that don’t make headlines. In truth, sensitive information can be easily transferred or leaked, as evidenced by this very story.

We’re constantly shedding data, for example, by surfing the internet or making credit card purchases. But location data is different. Our precise locations are used fleetingly in the moment for a targeted ad or notification, but then repurposed indefinitely for much more profitable ends, like tying your purchases to billboard ads you drove past on the freeway. Many apps that use your location, like weather services, work perfectly well without your precise location — but collecting your location feeds a lucrative secondary business of analyzing, licensing and transferring that information to third parties.

The data contains simple information like date, latitude and longitude, making it easy to inspect, download and transfer. Note: Values are randomized to protect sources and device owners.

For many Americans, the only real risk they face from having their information exposed would be embarrassment or inconvenience. But for others, like survivors of abuse, the risks could be substantial. And who can say what practices or relationships any given individual might want to keep private, to withhold from friends, family, employers or the government? We found hundreds of pings in mosques and churches, abortion clinics, queer spaces and other sensitive areas.

In one case, we observed a change in the regular movements of a Microsoft engineer. He made a visit one Tuesday afternoon to the main Seattle campus of a Microsoft competitor, Amazon. The following month, he started a new job at Amazon. It took minutes to identify him as Ben Broili, a manager now for Amazon Prime Air, a drone delivery service.

“I can’t say I’m surprised,” Mr. Broili told us in early December. “But knowing that you all can get ahold of it and comb through and place me to see where I work and live — that’s weird.” That we could so easily discern that Mr. Broili was out on a job interview raises some obvious questions, like: Could the internal location surveillance of executives and employees become standard corporate practice?

[…]

If this kind of location data makes it easy to keep tabs on employees, it makes it just as simple to stalk celebrities. Their private conduct — even in the dead of night, in residences and far from paparazzi — could come under even closer scrutiny.

Reporters hoping to evade other forms of surveillance by meeting in person with a source might want to rethink that practice. Every major newsroom covered by the data contained dozens of pings; we easily traced one Washington Post journalist through Arlington, Va.

In other cases, there were detours to hotels and late-night visits to the homes of prominent people. One person, plucked from the data in Los Angeles nearly at random, was found traveling to and from roadside motels multiple times, for visits of only a few hours each time.

While these pointillist pings don’t in themselves reveal a complete picture, a lot can be gleaned by examining the date, time and length of time at each point.

Large data companies like Foursquare — perhaps the most familiar name in the location data business — say they don’t sell detailed location data like the kind reviewed for this story but rather use it to inform analysis, such as measuring whether you entered a store after seeing an ad on your mobile phone.

But a number of companies do sell the detailed data. Buyers are typically data brokers and advertising companies. But some of them have little to do with consumer advertising, including financial institutions, geospatial analysis companies and real estate investment firms that can process and analyze such large quantities of information. They might pay more than $1 million for a tranche of data, according to a former location data company employee who agreed to speak anonymously.

Location data is also collected and shared alongside a mobile advertising ID, a supposedly anonymous identifier about 30 digits long that allows advertisers and other businesses to tie activity together across apps. The ID is also used to combine location trails with other information like your name, home address, email, phone number or even an identifier tied to your Wi-Fi network.
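The de-anonymizing combination described above is, mechanically, a one-line join once both datasets are in hand. An illustrative sketch, all values fabricated (the ID shown follows the UUID format Android uses for its advertising ID):

```python
# Illustrative only: how an "anonymous" mobile advertising ID becomes a name
# once a location trail is joined to a marketing profile keyed on the same ID.
location_trail = [
    {"ad_id": "38400000-8cf0-11bd-b23e-10b96e40000d", "lat": 47.6205, "lon": -122.3493},
    {"ad_id": "38400000-8cf0-11bd-b23e-10b96e40000d", "lat": 47.6150, "lon": -122.3380},
]
marketing_profile = {
    "38400000-8cf0-11bd-b23e-10b96e40000d": {
        "name": "Jane Doe", "zip": "98109", "email": "jane@example.com",
    },
}

# The join: one dict lookup per record de-anonymizes the entire trail.
identified = [
    {**ping, **marketing_profile.get(ping["ad_id"], {})}
    for ping in location_trail
]
print(identified[0]["name"])
```

This is why a "supposedly anonymous" identifier offers so little protection: any single party holding both tables can collapse the anonymity of every record at once.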

The data can change hands in almost real time, so fast that your location could be transferred from your smartphone to the app’s servers and exported to third parties in milliseconds. This is how, for example, you might see an ad for a new car some time after walking through a dealership.

That data can then be resold, copied, pirated and abused. There’s no way you can ever retrieve it.

Location data is about far more than consumers seeing a few more relevant ads. This information provides critical intelligence for big businesses. The Weather Channel app’s parent company, for example, analyzed users’ location data for hedge funds, according to a lawsuit filed in Los Angeles this year that was triggered by Times reporting. And Foursquare received much attention in 2016 after using its data trove to predict that after an E. coli crisis, Chipotle’s sales would drop by 30 percent in the coming months. Its same-store sales ultimately fell 29.7 percent.

Much of the concern over location data has focused on telecom giants like Verizon and AT&T, which have been selling location data to third parties for years. Last year, Motherboard, Vice’s technology website, found that once the data was sold, it was being shared to help bounty hunters find specific cellphones in real time. The resulting scandal forced the telecom giants to pledge they would stop selling location movements to data brokers.

Yet no law prohibits them from doing so.

[…]

If this information is so sensitive, why is it collected in the first place?

For brands, following someone’s precise movements is key to understanding the “customer journey” — every step of the process from seeing an ad to buying a product. It’s the Holy Grail of advertising, one marketer said, the complete picture that connects all of our interests and online activity with our real-world actions.

Once they have the complete customer journey, companies know a lot about what we want, what we buy and what made us buy it. Other groups have begun to find ways to use it too. Political campaigns could analyze the interests and demographics of rally attendees and use that information to shape their messages to try to manipulate particular groups. Governments around the world could have a new tool to identify protesters.

Pointillist location data also has some clear benefits to society. Researchers can use the raw data to provide key insights for transportation studies and government planners. The City Council of Portland, Ore., unanimously approved a deal to study traffic and transit by monitoring millions of cellphones. Unicef announced a plan to use aggregated mobile location data to study epidemics, natural disasters and demographics.

For individual consumers, the value of constant tracking is less tangible. And the lack of transparency from the advertising and tech industries raises still more concerns.

Does a coupon app need to sell second-by-second location data to other companies to be profitable? Does that really justify allowing companies to track millions and potentially expose our private lives?

Data companies say users consent to tracking when they agree to share their location. But those consent screens rarely make clear how the data is being packaged and sold. If companies were clearer about what they were doing with the data, would anyone agree to share it?

What about data collected years ago, before hacks and leaks made privacy a forefront issue? Should it still be used, or should it be deleted for good?

If it’s possible that data stored securely today can easily be hacked, leaked or stolen, is this kind of data worth that risk?

Is all of this surveillance and risk worth it merely so that we can be served slightly more relevant ads? Or so that hedge fund managers can get richer?

The companies profiting from our every move can’t be expected to voluntarily limit their practices. Congress has to step in to protect Americans’ needs as consumers and rights as citizens.

Until then, one thing is certain: We are living in the world’s most advanced surveillance system. This system wasn’t created deliberately. It was built through the interplay of technological advance and the profit motive. It was built to make money. The greatest trick technology companies ever played was persuading society to surveil itself.

Source: Opinion | Twelve Million Phones, One Dataset, Zero Privacy – The New York Times

Private equity buys Lastpass owner LogMeIn – will they start monetising your logins?

Remote access, collaboration and password manager provider LogMeIn has been sold to a private equity outfit for $4.3bn.

A consortium led by private equity firm Francisco Partners (along with Evergreen, the PE arm of tech activist investor Elliott Management), will pay $86.05 in cash for each LogMeIn share – a 25 per cent premium on prices before talk about the takeover surfaced in September.

LogMeIn’s board of directors is in favour of the buy. Chief executive Bill Wagner said the deal recognised the value of the firm and would provide for “both our core and growth assets”.

The sale should close in mid-2020, subject to the usual shareholder and regulatory hurdles. LogMeIn also has 45 days to look at alternative offers.

In 2018 LogMeIn made revenues of $1.2bn and profits of $446m.

The company runs a bunch of subsidiaries which offer collaboration software and web meetings products, virtual telephony services, remote technical support, and customer service bots as well as several identity and password manager products.

LogMeIn bought LastPass, which now claims 18.6 million users, for $110m in 2015. That purchase raised concerns about exactly how LastPass’s new owner would exploit the user data it held, and today’s news is unlikely to allay any of those fears.

The next year, LogMeIn merged with Citrix’s GoTo business, a year after its spinoff.

Source: Log us out: Private equity snaffles Lastpass owner LogMeIn • The Register

Camouflage made of quantum material could hide you from infrared cameras

Infrared cameras detect people and other objects by the heat they emit. Now, researchers have discovered the uncanny ability of a material to hide a target by masking its telltale heat properties.

The effect works for a range of temperatures that one day could include humans and vehicles, presenting a future asset to stealth technologies, the researchers say.

What makes the material special is its quantum nature—properties that are unexplainable by classical physics. The study, published today in the Proceedings of the National Academy of Sciences, is one step closer to unlocking the quantum material’s full potential.

The work was conducted by scientists and engineers at the University of Wisconsin-Madison, Harvard University, Purdue University, the Massachusetts Institute of Technology and Brookhaven National Laboratory.

Fooling infrared cameras is not new. Over the past few years, researchers have developed other materials made of graphene and black silicon that toy with infrared light, also hiding objects from cameras.

But how the quantum material in this study tricks an infrared camera is unique: it decouples an object’s temperature from its thermal light radiation, which is counterintuitive based on what is known about how materials behave according to fundamental physics laws. The decoupling allows information about an object’s temperature to be hidden from an infrared camera.

The discovery does not violate any laws of physics, but suggests that these laws might be more flexible than conventionally thought.

Quantum phenomena tend to come with surprises. Several properties of the material, samarium nickel oxide, have been a mystery since its discovery a few decades ago.

Shriram Ramanathan, a professor of materials engineering at Purdue, has investigated samarium nickel oxide for the past 10 years. Earlier this year, Ramanathan’s lab co-discovered that the material also has the counterintuitive ability to be a good insulator of electrical current in low-oxygen environments, rather than an unstable conductor, when oxygen is removed from its molecular structure.

Additionally, samarium nickel oxide is one of a few materials that can switch from an insulating phase to a conducting phase at high temperatures. University of Wisconsin-Madison researcher Mikhail Kats suspected that materials with this property might be capable of decoupling temperature and thermal radiation.

“There is a promise of engineering thermal radiation to control heat transfer and make it either easier or harder to identify and probe objects via infrared imaging,” said Kats, an associate professor of electrical and computer engineering.

Ramanathan’s lab created films of samarium nickel oxide on sapphire substrates to be compared with reference materials. Kats’ group measured spectroscopic emission and captured infrared images of each material as it was heated and cooled. Unlike other materials, samarium nickel oxide barely appeared hotter when it was heated up and maintained this effect between 105 and 135 degrees Celsius.

“Typically, when you heat or cool a material, the electrical resistance changes slowly. But for samarium nickel oxide, resistance changes in an unconventional manner from an insulating to a conducting state, which keeps its thermal light emission properties nearly the same for a certain temperature range,” Ramanathan said.

Because thermal light emission doesn’t change when temperature changes, that means the two are uncoupled over a 30-degree range.
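One way to see how this is possible without breaking any physics: by the Stefan-Boltzmann law, emitted power scales with emissivity times the fourth power of absolute temperature, so an emissivity that falls off as roughly the inverse fourth power of temperature across the transition leaves the emitted power, and hence the infrared image, essentially unchanged. A numerical sketch (the starting emissivity is hypothetical, not a value from the paper):

```python
# Rough illustration, not the paper's model: emitted power per unit area is
# P = eps(T) * sigma * T**4. If emissivity drops in proportion to T**-4 over
# the insulator-to-metal transition, P stays constant as the sample heats up.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

T1, T2 = 105 + 273.15, 135 + 273.15  # the reported 105-135 C range, in kelvin
eps1 = 0.80                          # hypothetical emissivity at T1

# Emissivity required at T2 for the emitted power to match:
eps2 = eps1 * (T1 / T2) ** 4
P1 = eps1 * SIGMA * T1 ** 4
P2 = eps2 * SIGMA * T2 ** 4
print(eps2, abs(P1 - P2) < 1e-6)  # compensating emissivity; powers match
```

An ordinary material's emissivity barely changes over 30 degrees, so its emitted power climbs with temperature; the insulator-to-metal transition is what gives samarium nickel oxide a steep enough emissivity drop to compensate.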

According to Kats, this study paves the way for not only concealing information from infrared cameras, but also for making new types of optics and even improving infrared cameras themselves.

“We are looking forward to exploring this material and related nickel oxides for infrared components such as tunable filters, optical limiters that protect sensors, and new sensitive light detectors,” Kats said.

More information: Temperature-independent thermal radiation, Proceedings of the National Academy of Sciences (2019). DOI: 10.1073/pnas.1911244116 , https://www.pnas.org/content/early/2019/12/16/1911244116 , https://arxiv.org/abs/1902.00252

Source: Camouflage made of quantum material could hide you from infrared cameras

Your Modern Car Is A Privacy Nightmare

Next time you feel the need to justify to a family member, friend, or random acquaintance why you drive an old shitbox instead of a much more comfortable, modern vehicle, here’s another reason for you to trot out: your old shitbox, unlike every modern car, is not spying on you.

That’s the takeaway from a Washington Post investigation that hacked into a 2017 Chevy Volt to see what data the car hoovers up. The answer is: yikes.

From the Post:

Among the trove of data points were unique identifiers for my and Doug’s [the car’s owner] phones, and a detailed log of phone calls from the previous week. There was a long list of contacts, right down to people’s address, emails and even photos.

In our Chevy, we probably glimpsed just a fraction of what GM knows. We didn’t see what was uploaded to GM’s computers, because we couldn’t access the live OnStar cellular connection.

And it’s not just Chevy:

Mason has hacked into Fords that record locations once every few minutes, even when you don’t use the navigation system. He’s seen German cars with 300-gigabyte hard drives — five times as much as a basic iPhone 11. The Tesla Model 3 can collect video snippets from the car’s many cameras. Coming next: face data, used to personalize the vehicle and track driver attention.

Perhaps most troublingly, GM wouldn’t even share with the car’s owner what data about him it collected and shared.

And for what? Why are automakers collecting all this information about you? The short answer is they have no idea but are experimenting with the dumbest possible uses for it:

Automakers haven’t had a data reckoning yet, but they’re due for one. GM ran an experiment in which it tracked the radio music tastes of 90,000 volunteer drivers to look for patterns with where they traveled. According to the Detroit Free Press, GM told marketers that the data might help them persuade a country music fan who normally stopped at Tim Horton’s to go to McDonald’s instead.

That’s right, it wants to collect as much information about you as possible so it can take money from fast-food restaurants to target people who like a certain type of music, which is definitely, definitely a real indicator of what type of fast food restaurant you go to.

You should check out the entire investigation, as there are a lot of other fascinating bits in there, like what can be learned about a used infotainment system bought on eBay.

One point the article doesn’t mention, but that I think is important, is how badly this bodes for the electric future, since pretty much by definition every electric car must have at least some form of a computer. Unfortunately, making cars is hard and expensive so it’s unlikely a new privacy-focused electric automaker will pop up any time soon. I mean, hell, we barely even have privacy-focused phones.

Privacy or environmentally friendly: choose one. The future, it is trash.

Source: Your Modern Car Is A Privacy Nightmare

Remember Unrollme, the biz that helped you automatically ditch unwanted emails? Yeah, it was selling your data, even though it said it wouldn’t

If you were one of the millions of people that signed up with Unrollme to cut down on the emails from outfits you once bought a product from, we have some bad news for you: it has been storing and selling your data.

On Tuesday, America’s Federal Trade Commission finalized a settlement [PDF] with the New York City company, noting that it had deceived netizens when it promised not to “touch” people’s emails when they gave it permission to unsubscribe from, block, or otherwise get rid of marketing mailings they didn’t want.

It did touch them. In fact, it grabbed copies of e-receipts sent to customers after they’d bought something – often including someone’s name and physical address – and provided them to its parent company, Slice Technologies. Slice then used the information to compile reports that it sold to the very businesses people were trying to escape from.

Huge numbers of people signed up with Unrollme as a quick and easy way to cut down on the endless emails consumers get sent when they either buy something on the web, or provide their email address in-store or online. It can be time-consuming and tedious to click “unsubscribe” on emails as they come into your inbox, so Unrollme combined them in a single daily report with the ability to easily remove emails. This required granting Unrollme access to your inbox.

As the adage goes, if a product is free, you are the product. And so it was with Unrollme, which scooped up all that delicious data from people’s emails and provided it to Slice, where it was stored and compiled into market research analytics products that were then sold.

And before you get all told-you-so and free-market about it, consider this: Unrollme knew that a significant number of potential customers would drop out of the sign-up process as soon as they were informed that the company would require access to their email account, and so it wooed them by making a series of comforting statements about how it wouldn’t actually do anything with that access.

Examples?

Here’s one: “You need to authorize us to access your emails. Don’t worry, this is just to watch for those pesky newsletters, we’ll never touch your personal stuff.”

Source: Remember Unrollme, the biz that helped you automatically ditch unwanted emails? Yeah, it was selling your data • The Register

Ring’s Neighbors Data Let Us Map Amazon’s Home Surveillance Network

As reporters raced this summer to bring new details of Ring’s law enforcement contracts to light, the home security company, acquired last year by Amazon for a whopping $1 billion, strove to underscore the privacy it had pledged to provide users.

Even as its creeping objective of ensuring an ever-expanding network of home security devices eventually becomes indispensable to daily police work, Ring promised its customers would always have a choice in “what information, if any, they share with law enforcement.” While it quietly toiled to minimize what police officials could reveal about Ring’s police partnerships to the public, it vigorously reinforced its obligation to the privacy of its customers—and to the users of its crime-alert app, Neighbors.

However, a Gizmodo investigation, which began last month and ultimately revealed the potential locations of up to tens of thousands of Ring cameras, has cast new doubt on the effectiveness of the company’s privacy safeguards. It further offers one of the most “striking” and “disturbing” glimpses yet, privacy experts said, of Amazon’s privately run, omni-surveillance shroud that’s enveloping U.S. cities.

[…]

Gizmodo has acquired data over the past month connected to nearly 65,800 individual posts shared by users of the Neighbors app. The posts, which reach back 500 days from the point of collection, offer extraordinary insight into the proliferation of Ring video surveillance across American neighborhoods and raise important questions about the privacy trade-offs of a consumer-driven network of surveillance cameras controlled by one of the world’s most powerful corporations.

And not just for those whose faces have been recorded.

Examining the network traffic of the Neighbors app produced unexpected data, including hidden geographic coordinates that are connected to each post — latitude and longitude with up to six decimal places of precision, accurate enough to pinpoint a patch of ground just inches across.
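A back-of-envelope check of what six decimal places of a coordinate resolve to on the ground, enough to single out not just a house but a doorstep (the latitude used here is illustrative):

```python
import math

# What six decimal places of latitude/longitude resolve to on the ground.
# One degree of latitude is ~111.32 km everywhere; a degree of longitude
# shrinks with the cosine of the latitude.
METERS_PER_DEG_LAT = 111_320.0

def resolution_m(decimals, latitude_deg):
    step = 10 ** -decimals  # smallest coordinate change at this precision
    dlat = step * METERS_PER_DEG_LAT
    dlon = step * METERS_PER_DEG_LAT * math.cos(math.radians(latitude_deg))
    return dlat, dlon

dlat, dlon = resolution_m(6, 38.9)  # roughly Washington, D.C.'s latitude
print(f"{dlat*100:.0f} cm x {dlon*100:.0f} cm")  # about 11 cm x 9 cm per step
```

In other words, the sixth decimal place moves a point by roughly four inches, which is why coordinates at that precision identify individual cameras rather than neighborhoods.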

[Map: the locations of 440,000 Ring cameras collected from over 1,800 counties in the U.S. Gizmodo found 5,016 unique Ring cameras while analyzing nine square miles of Los Angeles.]

[…]

Guariglia and other surveillance experts told Gizmodo that the ubiquity of the devices gives rise to fears that pedestrians are being recorded strolling in and out of “sensitive buildings,” including certain medical clinics, law offices, and foreign consulates. “I think this is my big concern,” he said, seeing the maps.

Accordingly, Gizmodo located cameras in unnerving proximity to such sensitive buildings, including a clinic offering abortion services and a legal office that handles immigration and refugee cases.

It is possible to acquire Neighbors posts from anywhere in the country, in near-real-time, and sort them in any number of ways. Nearly 4,000 posts, for example, reference children, teens, or young adults; two purportedly involve people having sex; eight mention Immigration and Customs Enforcement; and more than 3,600 mention dogs, cats, coyotes, turkeys, and turtles.

While the race of individuals recorded is implicitly suggested in a variety of ways, Gizmodo found 519 explicit references to blackness and 319 to whiteness. A Ring spokesperson said the Neighbors content moderators strive to eliminate unessential references to skin color. Moderators are told to remove posts, they said, in which the sole identifier of a subject is that they’re “black” or “white.”

Ring’s guidelines instruct users: “Personal attributes like race, ethnicity, nationality, religion, sexual orientation, immigration status, sex, gender, age, disability, socioeconomic and veteran status, should never be factors when posting about an unknown person. This also means not referring to a person you are describing solely by their race or calling attention to other personal attributes not relevant to the matter being reported.”

“There’s no question, if most people were followed around 24/7 by a police officer or a private investigator it would bother them and they would complain and seek a restraining order,” said Jay Stanley, senior policy analyst at the American Civil Liberties Union. “If the same is being done technologically, silently and invisibly, that’s basically the functional equivalent.”

[…]

Companies like Ring have long argued—as Google did when it published millions of people’s faces on Street View in 2007—that pervasive street surveillance reveals, in essence, no more than what people have already made public; that there’s no difference between blanketing public spaces in internet-connected cameras and the human experience of walking or driving down the street.

But not everyone agrees.

“Persistence matters,” said Stanley, while acknowledging the ACLU’s long history of defending public photography. “I can go out and take a picture of you walking down the sidewalk on Main Street and publish it on the front of tomorrow’s newspaper,” he said. “That said, when you automate things, it makes it faster, cheaper, easier, and more widespread.”

Stanley and others devoted to studying the impacts of public surveillance envision a future in which Americans’ very perception of reality has become tainted by a kind of omnipresent observer effect. Children will grow up, it’s feared, equating the act of being outside with being recorded. The question is whether existing in this observed state will fundamentally alter the way people naturally behave in public spaces—and if so, how?

“It brings a pervasiveness and systematization that has significant potential effects on what it means to be a human being walking around your community,” Stanley said. “Effects we’ve never before experienced as a species, in all of our history.”

The Ring data has given Gizmodo the means to consider scenarios, no longer purely hypothetical, which exemplify what daily life is like under Amazon’s all-seeing eye. In the nation’s capital, for instance, walking the shortest route from one public charter school to a soccer field less than a mile away, 6th-12th graders are recorded by no fewer than 13 Ring cameras.

Gizmodo found that dozens of users in the same Washington, DC, area have used Neighbors to share videos of children. Thirty-six such posts describe mostly run-of-the-mill mischief—kids with “no values” ripping up parking tape, riding on their “dort-bikes” [sic] and taking “selfies.”

Ring’s guidelines state that users are supposed to respect “the privacy of others,” and not upload footage of “individuals or activities where a reasonable person would expect privacy.” Users are left to interpret this directive themselves, though Ring’s content moderators are supposedly actively combing through the posts and users can flag “inappropriate” posts for review.

Ángel Díaz, an attorney at the Brennan Center for Justice focusing on technology and policing, said the “sheer size and scope” of the data Ring amasses is what separates it from other forms of public photography.

[…]

Guariglia, who’s been researching police surveillance for a decade and holds a PhD in the subject, said he believes the hidden coordinates invalidate Ring’s claim that only users decide “what information, if any,” gets shared with police—whether they’ve yet to acquire it or not.

“I’ve never really bought that argument,” he said, adding that if they truly wanted, the police could “very easily figure out where all the Ring cameras are.”

The Guardian reported in August that Ring once shared maps with police depicting the locations of active Ring cameras. CNET reported last week, citing public documents, that police partnered with Ring had once been given access to “heat maps” that reflected the area where cameras were generally concentrated.

The privacy researcher who originally obtained the heat maps, Shreyas Gandlur, discovered that if police zoomed in far enough, circles appeared around individual cameras. However, Ring denied that the maps accurately portrayed the locations of customers, saying they displayed only “approximate device density,” and instructed police not to share them publicly.

Source: Ring’s Neighbors Data Let Us Map Amazon’s Home Surveillance Network

Uninstall AVAST and AVG free anti-virus: they are massively slurping your data! Mozilla and Opera have removed them from their stores

Two browsers have yanked Avast and AVG online security extensions from their web stores after a report revealed that they were unnecessarily sucking up a ton of data about users’ browsing history.

Wladimir Palant, the creator behind Adblock Plus, initially surfaced the issue—which extends to Avast Online Security and Avast SafePrice as well as Avast-owned AVG Online Security and AVG SafePrice extensions—in a blog post back in October but this week flagged the issue to the companies themselves. In response, both Mozilla and Opera yanked the extensions from their stores. However, as of Wednesday, the extensions curiously remained in Google’s extensions store.

Using dev tools to examine network traffic, Palant was able to determine that the extensions were collecting an alarming amount of data about users’ browsing history and activity, including URLs, where you navigated from, whether the page was visited in the past, the version of browser you’re using, country code, and, if the Avast Antivirus is installed, the OS version of your device, among other data. Palant argued the data collection far exceeded what was necessary for the extensions to perform their basic jobs.
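To make the excess concrete, here is a hypothetical reconstruction (field names invented) of the kind of record the extensions were reportedly transmitting on every page visit, far more than the page's address, which is all a URL-reputation check needs:

```python
# Hypothetical per-pageview telemetry record, based on the categories of data
# Palant reported seeing in the extensions' network traffic. Names invented.
telemetry = {
    "uri": "https://example.com/account/settings?session=abc123",
    "referer": "https://mail.example.com/inbox",  # where you navigated from
    "visited": True,                # whether the page was visited before
    "browser_version": "71.0",
    "country_code": "US",
    "os_version": "Windows 10",     # reportedly sent when Avast Antivirus is installed
}

minimal_need = {"uri"}  # a reputation lookup could even hash just the domain
excess = set(telemetry) - minimal_need
print(sorted(excess))
```

Everything in `excess` is data a malicious-site check does not require, which is the core of Palant's argument that the collection far exceeded the extensions' stated purpose.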

Source: Avast and AVG Plugins Reportedly Doing Some Shady Data Collection

All new cell phone users in China must now have their face scanned, as do all US citizens entering or leaving the US (as well as all non-US citizens)

Customers in China who buy SIM cards or register new mobile-phone services must have their faces scanned under a new law that came into effect yesterday. China’s government says the new rule, which was passed into law back in September, will “protect the legitimate rights and interest of citizens in cyberspace.”

A controversial step: It can be seen as part of an ongoing push by China’s government to make sure that people use services on the internet under their real names, thus helping to reduce fraud and boost cybersecurity. On the other hand, it also looks like part of a drive to make sure every member of the population can be surveilled.

How do Chinese people feel about it? It’s hard to say for sure, given how strictly the press and social media are regulated, but there are hints of growing unease over the use of facial recognition technology within the country. From the outside, there has been a lot of concern over the role the technology will play in the controversial social credit system, and how it’s been used to suppress Uighur Muslims in the western region of Xinjiang.

Source: All new cell phone users in China must now have their face scanned – MIT Technology Review

Homeland Security wants to expand facial recognition checks for travelers arriving to and departing from the U.S. to also include citizens, who had previously been exempt from the mandatory checks.

In a filing, the department has proposed that all travelers — not just foreign nationals and visitors — will have to complete a facial recognition check before they are allowed to enter or leave the country.

Facial recognition for departing flights has increased in recent years as part of Homeland Security’s efforts to catch visitors and travelers who overstay their visas. The department, whose responsibility is to protect the border and control immigration, has a deadline of 2021 to roll out facial recognition scanners to the largest 20 airports in the United States, despite facing a rash of technical challenges.

But although there may not always be a clear way to opt-out of facial recognition at the airport, U.S. citizens and lawful permanent residents — also known as green card holders — have been exempt from these checks, the existing rules say.

Now, the proposed rule change to include citizens has drawn ire from one of the largest civil liberties groups in the country.

“Time and again, the government told the public and members of Congress that U.S. citizens would not be required to submit to this intrusive surveillance technology as a condition of traveling,” said Jay Stanley, a senior policy analyst at the American Civil Liberties Union.

“This new notice suggests that the government is reneging on what was already an insufficient promise,” he said.

“Travelers, including U.S. citizens, should not have to submit to invasive biometric scans simply as a condition of exercising their constitutional right to travel. The government’s insistence on hurtling forward with a large-scale deployment of this powerful surveillance technology raises profound privacy concerns,” he said.

Citing a data breach of close to 100,000 license plate and traveler images in June, as well as concerns about a lack of sufficient safeguards to protect the data, Stanley said the government “cannot be trusted” with this technology and that lawmakers should intervene.

Source: DHS wants to expand airport face recognition scans to include US citizens

Bad news: ‘Unblockable’ web trackers emerge. Good news: Firefox with uBlock Origin can stop it. Chrome, not so much

Developers working on open-source ad-blocker uBlock Origin have uncovered a mechanism for tracking web browsers around the internet that defies today’s blocking techniques.

A method to block this so-called unblockable tracker has been developed by the team, though it only works in Firefox, leaving Chrome and possibly other browsers susceptible. This fix is now available to uBlock Origin users.

The tracker relies on DNS queries to get past browser defenses, so some form of domain-name look-up filtering could thwart this snooping. As far as netizens armed with just their browser and a regular old content-blocker plugin are concerned, this tracker can sneak by unnoticed. It can potentially be used by advertising and analytics networks to fingerprint netizens as they browse the web, silently building up profiles of their interests and keeping count of the pages they visit.

And, interestingly enough, it’s seemingly a result of an arms race between browser makers and ad-tech outfits as they battle over first and third-party cookies.

[…]

Many marketers, keen on maintaining their tracking and data collection capabilities, have turned to a technique called DNS delegation or DNS aliasing. It involves having a website publisher delegate a subdomain that the third-party analytics provider can use and aliasing it to an external server using a CNAME DNS record. The website and its external trackers thus seem to the browser to be coming from the same domain and are allowed to operate.
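The delegation trick can be sketched in a few lines. The check below is a hypothetical illustration, not uBlock Origin's actual code, and the "registrable domain = last two labels" heuristic is deliberately naive (real blockers rely on the Public Suffix List, which handles suffixes like `.co.uk` correctly); all the hostnames are made up:

```python
# Sketch: detecting a CNAME-cloaked tracker. A request that shares the
# page's registrable domain looks first-party to the browser, but its
# CNAME record can point at a completely different organisation's server.

def registrable_domain(hostname: str) -> str:
    """Naive eTLD+1: keep the last two labels, e.g. 'metrics.example.com' -> 'example.com'."""
    labels = hostname.rstrip(".").split(".")
    return ".".join(labels[-2:])

def is_disguised_third_party(page_host: str, request_host: str, canonical_name: str) -> bool:
    """True when a request looks first-party (same registrable domain as the
    page) but its CNAME target lives under a different registrable domain."""
    same_site = registrable_domain(request_host) == registrable_domain(page_host)
    cname_elsewhere = registrable_domain(canonical_name) != registrable_domain(page_host)
    return same_site and cname_elsewhere

# Hypothetical example: a publisher delegates a subdomain to a tracker via CNAME.
print(is_disguised_third_party(
    "www.example.com",                 # page the user is visiting
    "metrics.example.com",             # "first-party" subdomain the tracker uses
    "collect.tracker-analytics.net",   # CNAME target: the third-party's server
))  # -> True
```

A content blocker that only compares the request hostname against the page's domain never sees the third column — which is exactly why resolving the CNAME (as Firefox's DNS API allows) is the missing piece.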

As Eulerian explains on its website, “The collection taking place under the name of the advertiser, and not under a third party, neither the ad blockers nor the browsers, interrupt the calls of tags.”

But wait, there’s more

Another marketing analytics biz, Wizaly, also advocates this technique to bypass Apple’s ITP 2.2 privacy protections.

As does Adobe, which explains on its website that one of the advantages of CNAME records for data collection is they “[allow] you to track visitors between a main landing domain and other domains in browsers that do not accept third-party cookies.”

In a conversation with The Register, privacy activist Aeris said Criteo, an ad retargeting biz, appears to have deployed the technique to its customers recently, which suggests it will become more pervasive. Aeris added that DNS delegation clearly violates Europe’s GDPR, which “clearly states that ‘user-centric tracking’ requires consent, especially in the case of a third-party service usage.”

A recent statement from the Hamburg Commissioner for Data Protection and Freedom of Information in Germany notes that Google Analytics and similar services can only be used with consent.

“This exploit has been around for a long time, but is particularly useful now because if you can pretend to be a first-party cookie, then you avoid getting blocked by ad blockers, and the major browsers – Chrome, Safari, and Firefox,” said Augustine Fou, a cybersecurity and ad fraud researcher who advises companies about online marketing, in an email to The Register.

“This is an exploit, not an ‘oopsies,’ because it is a hidden and deliberate action to make a third-party cookie appear to be first-party to skirt privacy regulations and consumer choice. This is yet another example of the ‘badtech industrial complex’ protecting its river of gold.”

[…]

Two days ago, uBlock Origin developer Raymond Hill deployed a fix for Firefox users in uBlock Origin v1.24.1b0. Firefox supports an API to resolve the hostname of a DNS record, which can unmask CNAME shenanigans, thereby allowing developers to craft blocking behavior accordingly.

“uBO is now equipped to deal with third-party disguised as first-party as far as Firefox’s browser.dns allows it,” Hill wrote, adding that he assumes this can’t be fixed in Chrome at the moment because Chrome doesn’t have an equivalent DNS resolution API.

Aeris said, “For Chrome, there is no DNS API available, and so no easy way to detect this,” adding that Chrome under Manifest v3, a pending revision of Google’s extension platform, will break uBO. Hill, uBO’s creator, recently confirmed to The Register that’s still the case.

Even if Chrome were to implement a DNS resolution API, Google has made it clear it wants to maintain the ability to track people on the web and place cookies, for the sake of its ad business.

Apple’s answer to marketer angst over being denied analytic data by Safari has been to propose a privacy-preserving ad click attribution scheme that allows 64 different ad campaign identifiers – so marketers can see which worked.

Google’s alternative proposal, part of its “Privacy Sandbox” initiative, calls for an identifier field capable of storing 64 bits of data – allowing vastly more distinct values than Apple’s 64 campaign identifiers.

As the Electronic Frontier Foundation has pointed out, this enables a range of numbers up to 18 quintillion, allowing advertisers to create unique IDs for every ad impression they serve, information that could then be associated with individual users.
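The scale difference the EFF is pointing at follows directly from the field width:

```python
# A 64-bit identifier field can take 2**64 distinct values.
distinct_ids = 2 ** 64
print(distinct_ids)           # 18446744073709551616
print(f"{distinct_ids:.1e}")  # 1.8e+19 — about 18 quintillion
# Apple's proposal, by contrast, allows only 64 campaign identifiers —
# far too few to tag every ad impression (or every user) with a unique ID.
```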

Source: Bad news: ‘Unblockable’ web trackers emerge. Good news: Firefox with uBlock Origin can stop it. Chrome, not so much • The Register

Police can keep Amazon Ring camera video forever, and share with whomever they’d like, company tells senator

More than 600 police forces across the country have entered into partnerships with the camera giant allowing them to quickly request and download video captured by Ring’s motion-detecting, internet-connected cameras inside and around Americans’ homes.

The company says the videos can be a critical tool in helping law enforcement investigate crimes such as trespassing, burglary and package theft. But some lawmakers and privacy advocates say the systems could also empower more widespread police surveillance, fuel racial profiling and spark new neighborhood fears.

In September, following a report about Ring’s police partnerships in The Washington Post, Sen. Edward Markey, D-Mass., wrote to Amazon asking for details about how it protected the privacy and civil liberties of people caught on camera. Since that report, the number of law enforcement agencies working with Ring has increased nearly 50%.

In two responses from Amazon’s vice president of public policy, Brian Huseman, the company said it placed few restrictions on how police used or shared the videos offered up by homeowners. (Amazon CEO Jeff Bezos also owns The Washington Post.)

Police in those communities can use Ring software to request up to 12 hours of video from anyone within half a square mile of a suspected crime scene, covering a 45-day time span, Huseman said. Police are required to include a case number for the crime they are investigating, but not any other details or evidence related to the crime or their request.

Markey said in a statement that Ring’s policies showed the company had failed to enact basic safeguards to protect Americans’ privacy.

“Connected doorbells are well on their way to becoming a mainstay of American households, and the lack of privacy and civil rights protections for innocent residents is nothing short of chilling,” he said.

“If you’re an adult walking your dog or a child playing on the sidewalk, you shouldn’t have to worry that Ring’s products are amassing footage of you and that law enforcement may hold that footage indefinitely or share that footage with any third parties.”

Ring, which Amazon bought last year for more than $800 million, did not immediately respond to requests for comment.

Source: Police can keep Ring camera video forever, and share with whomever they’d like, company tells senator – Stripes

Windows will go DNS over HTTPS – Take over your DNS queries, grab more of your browsing behaviour

we are making plans to adopt DNS over HTTPS (or DoH) in the Windows DNS client. As a platform, Windows Core Networking seeks to enable users to use whatever protocols they need, so we’re open to having other options such as DNS over TLS (DoT) in the future. For now, we’re prioritizing DoH support as the most likely to provide immediate value to everyone. For example, DoH allows us to reuse our existing HTTPS infrastructure.

For our first milestone, we’ll start with a simple change: use DoH for DNS servers Windows is already configured to use. There are now several public DNS servers that support DoH, and if a Windows user or device admin configures one of them today, Windows will just use classic DNS (without encryption) to that server. However, since these servers and their DoH configurations are well known, Windows can automatically upgrade to DoH while using the same server.
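What "automatically upgrading to DoH" means in practice is a transport change, not a query change: the same binary DNS message that would go to UDP port 53 is POSTed over HTTPS instead (RFC 8484). A minimal sketch — the endpoint named in the comment is Cloudflare's public resolver, used purely as a well-known illustration, and no network request is actually sent here:

```python
import struct

def build_dns_query(name: str, qtype: int = 1, query_id: int = 0) -> bytes:
    """Build a minimal DNS wire-format query (qtype 1 = A record, class IN)."""
    # Header: ID, flags (0x0100 = recursion desired), QDCOUNT=1, AN/NS/AR=0.
    header = struct.pack(">HHHHHH", query_id, 0x0100, 1, 0, 0, 0)
    # QNAME: each label prefixed by its length, terminated by a zero byte.
    qname = b"".join(bytes([len(label)]) + label.encode() for label in name.split("."))
    return header + qname + b"\x00" + struct.pack(">HH", qtype, 1)

query = build_dns_query("example.com")
# A DoH client would now send:
#   POST https://cloudflare-dns.com/dns-query
#   Content-Type: application/dns-message
#   <body = the bytes above>
print(len(query))  # 29 bytes: 12-byte header + 13-byte QNAME + 4 bytes QTYPE/QCLASS
```

The upside is that an on-path observer sees ordinary HTTPS traffic instead of plaintext DNS; the trade-off the commentary below raises is that whoever runs the DoH resolver now sees every lookup.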

Source: Windows will improve user privacy with DNS over HTTPS – Microsoft Tech Community – 1014229

There is a lot of discussion about this – MS is pitching it as a user-privacy tool, but really it’s a data grab by the tech giants.

House Antitrust Investigators Now Scrutinizing Google’s Plans to Add DNS Encryption to Chrome


Americans and Privacy: Concerned, Confused and Feeling Lack of Control Over Their Personal Information

A majority of Americans believe their online and offline activities are being tracked and monitored by companies and the government with some regularity. It is such a common condition of modern life that roughly six-in-ten U.S. adults say they do not think it is possible to go through daily life without having data collected about them by companies or the government.

[…]

large shares of U.S. adults are not convinced they benefit from this system of widespread data gathering. Some 81% of the public say that the potential risks they face because of data collection by companies outweigh the benefits, and 66% say the same about government data collection. At the same time, a majority of Americans report being concerned about the way their data is being used by companies (79%) or the government (64%). Most also feel they have little or no control over how these entities use their personal information.

[…]

Fully 97% of Americans say they are ever asked to approve privacy policies, yet only about one-in-five adults overall say they always (9%) or often (13%) read a company’s privacy policy before agreeing to it. Some 38% of all adults maintain they sometimes read such policies, but 36% say they never read a company’s privacy policy before agreeing to it.

[…]

Among adults who say they ever read privacy policies before agreeing to their terms and conditions, only a minority – 22% – say they read them all the way through.

There is also a general lack of understanding about data privacy laws among the general public: 63% of Americans say they understand very little or nothing at all about the laws and regulations that are currently in place to protect their data privacy.

Source: Americans and Privacy: Concerned, Confused and Feeling Lack of Control Over Their Personal Information | Pew Research Center

Health websites are sharing sensitive medical data with Google, Facebook, and Amazon

Popular health websites are sharing private, personal medical data with big tech companies, according to an investigation by the Financial Times. The data, including medical diagnoses, symptoms, prescriptions, and menstrual and fertility information, are being sold to companies like Google, Amazon, Facebook, and Oracle and smaller data brokers and advertising technology firms, like Scorecard and OpenX.

The investigation: The FT analyzed 100 health websites, including WebMD, Healthline, health insurance group Bupa, and parenting site Babycentre, and found that 79% of them dropped cookies on visitors, allowing them to be tracked by third-party companies around the internet. This was done without consent, making the practice illegal under European Union regulations. By far the most common destination for the data was Google’s advertising arm DoubleClick, which showed up in 78% of the sites the FT tested.

Responses: The FT piece contains a list of all the comments from the many companies involved. Google, for example, said that it has “strict policies preventing advertisers from using such data to target ads.” Facebook said it was conducting an investigation and would “take action” against websites “in violation of our terms.” And Amazon said: “We do not use the information from publisher websites to inform advertising audience segments.”

A window into a broken industry: This sort of rampant rule-breaking has been a dirty secret in the advertising technology industry, which is worth $200 billion globally, ever since EU countries adopted the General Data Protection Regulation in May 2018. A recent inquiry by the UK’s data regulator found that the sector is rife with illegal practices, as in this case where privacy policies did not adequately outline which data would be shared with third parties or what it would be used for. The onus is now on EU and UK authorities to act to put an end to them.

Source: Health websites are sharing sensitive medical data with Google, Facebook, and Amazon – MIT Technology Review

Facebook says government demands for user data are at a record high, most by US govt

The social media giant said the number of government demands for user data increased by 16% to 128,617 demands during the first half of this year compared to the second half of last year.

That’s the highest number of government demands it has received in any reporting period since it published its first transparency report in 2013.

The U.S. government led the way with the largest number of requests — 50,741 demands for user data, resulting in some account or user data being given to authorities in 88% of cases. Facebook said two-thirds of all the U.S. government’s requests came with a gag order, preventing the company from telling the user about the request for their data.

But Facebook said it was able to release details of 11 so-called national security letters (NSLs) for the first time after their gag provisions were lifted during the period. National security letters can compel companies to turn over non-content data at the request of the FBI. These letters are not approved by a judge, and often come with a gag order preventing their disclosure. But since the Freedom Act passed in 2015, companies have been allowed to request the lifting of those gag orders.

The report also said the social media giant had detected 67 disruptions of its services in 15 countries, compared to 53 disruptions in nine countries during the second half of last year.

And, the report said Facebook also pulled 11.6 million pieces of content, up from 5.8 million in the same period a year earlier, which Facebook said violated its policies on child nudity and sexual exploitation of children.

The social media giant also included Instagram in its report for the first time, noting the removal of 1.68 million pieces of content during the second and third quarters of the year.

Source: Facebook says government demands for user data are at a record high | TechCrunch

Facebook bug shows camera activated in background during app use – the bug being that you could see the camera being activated

When you’re scrolling through Facebook’s app, the social network could be watching you back, concerned users have found. Multiple people have found and reported that their iPhone cameras were turned on in the background while they were looking at their feed.

The issue came to light through several posts on Twitter. Users noted that their cameras were activated behind Facebook’s app as they were watching videos or looking at photos on the social network.

After people clicked on the video to full screen, returning it back to normal would create a bug in which Facebook’s mobile layout was slightly shifted to the right. With the open space on the left, you could now see the phone’s camera activated in the background.

This was documented in multiple cases, with the earliest incident on Nov. 2.

It’s since been tweeted a couple of other times, and CNET has also been able to replicate the issue.

Facebook didn’t immediately respond to a request for comment, but Guy Rosen, its vice president of integrity, tweeted Tuesday that this seems like a bug and the company’s looking into the matter.

Source: Facebook bug shows camera activated in background during app use – CNET

Google Reportedly Amassed Private Health Data on Millions of People Without Their Knowledge – a repeat of October 2019 and 2017 in the UK

The Wall Street Journal reported Monday that the tech giant partnered with Ascension, a non-profit and Catholic health systems company, on the program code-named “Project Nightingale.” According to the Journal, Google began its initiative with Ascension last year, and it involves everything from diagnoses, lab results, birth dates, patient names, and other personal health data—all of it reportedly handed over to Google without first notifying patients or doctors. The Journal said this amounts to data on millions of Americans spanning 21 states.

“By working in partnership with leading healthcare systems like Ascension, we hope to transform the delivery of healthcare through the power of the cloud, data analytics, machine learning, and modern productivity tools—ultimately improving outcomes, reducing costs, and saving lives,” Tariq Shaukat, president of Google Cloud, said in a statement.

Beyond the alarming reality that a tech company can collect data about people without their knowledge for its own uses, the Journal noted it’s legal under the Health Insurance Portability and Accountability Act (HIPAA). When reached for comment, representatives for both companies pointed Gizmodo to a press release about the relationship—which the Journal stated was published after its report—that states: “All work related to Ascension’s engagement with Google is HIPAA compliant and underpinned by a robust data security and protection effort and adherence to Ascension’s strict requirements for data handling.”

Still, the Journal report raises concerns about whether the data handling is indeed as secure as both companies appear to think it is. Citing a source familiar with the matter as well as related documents, the paper said at least 150 employees at Google have access to a significant portion of the health data Ascension handed over on millions of people.

Google hasn’t exactly proven itself to be infallible when it comes to protecting user data. Remember when Google+ users had their data exposed and Google did nothing to alert them in order to shield its own ass? Or when a Google contractor leaked more than a thousand Assistant recordings, and the company defended itself by claiming that most of its audio snippets aren’t reviewed by humans? Not exactly the kind of stuff you want to read about a company that may have your medical history on hand.

Source: Google Reportedly Amassed Private Health Data on Millions of People Without Their Knowledge

Google has been given the go-ahead to access five years’ worth of sensitive NHS patient data.

In a deal signed last month, the internet giant was handed hospital records of thousands of patients in England.

New documents show the data will include medical history, diagnoses, treatment dates and ethnic origin.

The news has raised concerns about the privacy of the data, which could now be harvested and commercialised.

It comes almost a year after Google absorbed the London-based AI lab DeepMind Health, a leading health technology developer.

DeepMind was bought by Google’s parent company Alphabet for £400 million ($520m) in 2014 and up until November 2018 had maintained independence.

But as of this year DeepMind transferred control of its health division to the parent company in California.

DeepMind had contracts to process medical records from three NHS trusts covering nine hospitals in England to develop its Streams mobile application.

From Google gets green light to access FIVE YEARS’ worth of sensitive patient data from NHS, sparking privacy fears

a data-sharing agreement between Google-owned artificial intelligence company DeepMind and the Royal Free NHS Trust – gives the clearest picture yet of what the company is doing and what sensitive data it now has access to.

The agreement gives DeepMind access to a wide range of healthcare data on the 1.6 million patients who pass through three London hospitals run by the Royal Free NHS Trust – Barnet, Chase Farm and the Royal Free – each year. This will include information about people who are HIV-positive, for instance, as well as details of drug overdoses and abortions. The agreement also includes access to patient data from the last five years.

DeepMind announced in February that it was working with the NHS, saying it was building an app called Streams to help hospital staff monitor patients with kidney disease. But the agreement suggests that it has plans for a lot more.

This is the first we’ve heard of DeepMind getting access to historical medical records, says Sam Smith, who runs health data privacy group MedConfidential. “This is not just about kidney function. They’re getting the full data.”

The agreement clearly states that Google cannot use the data in any other part of its business. The data itself will be stored in the UK by a third party contracted by Google, not in DeepMind’s offices. DeepMind is also obliged to delete its copy of the data when the agreement expires at the end of September 2017.

All data needed

Google says that since there is no separate dataset for people with kidney conditions, it needs access to all of the data in order to run Streams effectively. In a statement, the Royal Free NHS Trust says that it “provides DeepMind with NHS patient data in accordance with strict information governance rules and for the purpose of direct clinical care only.”

source: Revealed: Google AI has access to huge haul of NHS patient data (2017)

DHS expects to have detailed biometrics on 260 million people by 2022 – and will keep them in the cloud, where they will never be stolen or hacked *cough*

The US Department of Homeland Security (DHS) expects to have face, fingerprint, and iris scans of at least 259 million people in its biometrics database by 2022, according to a recent presentation from the agency’s Office of Procurement Operations reviewed by Quartz.

That’s about 40 million more than the agency’s 2017 projections, which estimated 220 million unique identities by 2022, according to previous figures cited by the Electronic Frontier Foundation (EFF), a San Francisco-based privacy rights nonprofit.

A slide deck, shared with attendees at an Oct. 30 DHS industry day, includes a breakdown of what its systems currently contain, as well as an estimate of what the next few years will bring. The agency is transitioning from a legacy system called IDENT to a cloud-based system (hosted by Amazon Web Services) known as Homeland Advanced Recognition Technology, or HART. The biometrics collection maintained by DHS is the world’s second-largest, behind only India’s countrywide biometric ID network in size. The traveler data kept by DHS is shared with other US agencies, state and local law enforcement, as well as foreign governments.

The first two stages of the HART system are being developed by US defense contractor Northrop Grumman, which won the $95 million contract in February 2018. DHS wasn’t immediately available to comment on its plans for its database.

[…]

Last month’s DHS presentation describes IDENT as an “operational biometric system for rapid identification and verification of subjects using fingerprints, iris, and face modalities.” The new HART database, it says, “builds upon the foundational functionality within IDENT,” to include voice data, DNA profiles, “scars, marks, and tattoos,” and the as-yet undefined “other biometric modalities as required.” EFF researchers caution some of the data will be “highly subjective,” such as information gleaned during “officer encounters” and analysis of people’s “relationship patterns.”

EFF worries that such tracking “will chill and deter people from exercising their First Amendment protected rights to speak, assemble, and associate,” since such specific data points could be used to identify “political affiliations, religious activities, and familial and friendly relationships.”

[…]

EFF researchers said in a 2018 blog post that facial-recognition software, like what the DHS is using, is “frequently…inaccurate and unreliable.” DHS’s own tests found the systems “falsely rejected as many as 1 in 25 travelers,” according to EFF, which calls out potential foreign partners in countries such as the UK, where false-positives can reportedly reach as high as 98%. Women and people of color are misidentified at rates significantly higher than whites and men, and darker skin tones increase one’s chances of being improperly flagged.

“DHS is also partnering with airlines and other third parties to collect face images from travelers entering and leaving the US,” the EFF said. “When combined with data from other government agencies, these troubling collection practices will allow DHS to build a database large enough to identify and track all people in public places, without their knowledge—not just in places the agency oversees, like airports, but anywhere there are cameras.”

Source: DHS expects to have biometrics on 260 million people by 2022 — Quartz