Chrome Is Scanning Files on Your Computer, and People Are Freaking Out

The browser you likely use to read this article scans practically all files on your Windows computer. And you probably had no idea until you read this. Don’t worry, you’re not the only one.

Last year, Google announced some upgrades to Chrome, by far the world’s most used browser—and the one security pros often recommend. The company promised to make internet surfing on Windows computers even “cleaner” and “safer,” adding what The Verge called “basic antivirus features.” What Google did was improve something called Chrome Cleanup Tool for Windows users, using software from cybersecurity and antivirus company ESET.

Tensions around the issue of digital privacy are understandably high following Facebook’s Cambridge Analytica scandal, but as far as we can tell there is no reason to worry here, and what Google is doing is above board.

In practice, Chrome on Windows looks through your computer in search of malware that targets the Chrome browser itself, using ESET’s antivirus engine. If it finds suspected malware, it sends metadata of the file where the malware is stored, and some system information, to Google. Then it asks you for permission to remove the suspected malicious file. (You can opt out of sending information to Google by deselecting the “Report details to Google” checkbox.)
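Google hasn’t published the exact payload format, but “metadata of the file” in this context typically means things like a cryptographic hash, size, and path rather than the file’s contents. A hypothetical sketch of what a scanner might collect (the field names are illustrative, not Google’s):

```python
import hashlib
import os

def file_metadata(path: str) -> dict:
    """Collect the kind of non-content metadata a scanner might report:
    a SHA-256 digest, the file size, and its location -- not the bytes
    themselves."""
    sha256 = hashlib.sha256()
    with open(path, "rb") as f:
        # Hash in chunks so large files don't need to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            sha256.update(chunk)
    return {
        "sha256": sha256.hexdigest(),
        "size_bytes": os.path.getsize(path),
        "path": os.path.abspath(path),
    }
```

The point of reporting a hash rather than the file is that a known-malware hash can be matched server-side without the file ever leaving your machine.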

A screenshot of the Chrome pop-up that appears if Chrome Cleanup Tool detects malware on your Windows computer.

Last week, Kelly Shortridge, who works at cybersecurity startup SecurityScorecard, noticed that Chrome was scanning files in the Documents folder of her Windows computer.

“In the current climate, it really shocked me that Google would so quietly roll out this feature without publicizing more detailed supporting documentation—even just to preemptively ease speculation,” Shortridge told me in an online chat. “Their intentions are clearly security-minded, but the lack of explicit consent and transparency seems to violate their own criteria of ‘user-friendly software’ that informs the policy for Chrome Cleanup [Tool].”

Her tweet got a lot of attention and caused other people in the infosec community—as well as average users such as me—to scratch their heads.

“Nobody likes surprises,” Haroon Meer, the founder at security consulting firm Thinkst, told me in an online chat. “When people fear a big brother, and tech behemoths going too far…a browser touching files it has no business to touch is going to set off alarm bells.”

Now, to be clear, this doesn’t mean Google can, for example, see photos you store on your Windows machine. According to Google, the goal of Chrome Cleanup Tool is to make sure malware doesn’t mess with Chrome on your computer by installing dangerous extensions, or putting ads where they’re not supposed to be.

As the head of Google Chrome security Justin Schuh explained on Twitter, the tool’s “sole purpose is to detect and remove unwanted software manipulating Chrome.” Moreover, he added, the tool only runs weekly, it only has normal user privileges (meaning it can’t go too deep into the system), is “sandboxed” (meaning its code is isolated from other programs), and users have to explicitly click on that box screenshotted above to remove the files and “cleanup.”

In other words, Chrome Cleanup Tool is less invasive than a regular “cloud” antivirus that scans your whole computer (including its more sensitive parts such as the kernel) and uploads some data to the antivirus company’s servers.

But as Johns Hopkins professor Matthew Green put it, most people “are just a little creeped out that Chrome started poking through their underwear drawer without asking.”

That’s the problem here: most users of an internet browser probably don’t expect it to scan and remove files on their computers.

Source: Chrome Is Scanning Files on Your Computer, and People Are Freaking Out – Motherboard

I really don’t think it is the job of the browser to scan your computer at all.

Grindr: Yeah, we shared your HIV status info with other companies – but we didn’t charge them! (oh and your GPS coords)

Hookup fixer Grindr is on the defensive after it shared sensitive information, including HIV status and physical location, of its app’s users with outside organizations.

The quickie booking facilitator on Monday admitted it passed, via HTTPS, people’s public profiles to third-party analytics companies to process on its behalf. That means, yes, the information was handed over in bulk, but, hey, at least it didn’t sell it!

“Grindr has never, nor will we ever sell personally identifiable user information – especially information regarding HIV status or last test date – to third parties or advertisers,” CTO Scott Chen said in a statement.

Rather than apologize, Grindr said its punters should have known better than to give it any details they didn’t want passed around to other companies. On the one hand, the data was scraped from the application’s public profiles, so, well, maybe people ought to calm down. It was all public anyway. On the other hand, perhaps people didn’t expect it to be handed over for analysis en masse.

“It’s important to remember that Grindr is a public forum,” Chen said. “We give users the option to post information about themselves including HIV status and last test date, and we make it clear in our privacy policy that if you choose to include this information in your profile, the information will also become public.”

This statement is in response to last week’s disclosure by security researchers on the ways the Grindr app shares user information with third-party advertisers and partners. Among the information found to be passed around by Grindr was the user’s HIV status, something Grindr allows members to list in their profiles.

The HIV status, along with last test date, sexual position preference, and GPS location were among the pieces of info Grindr shared via encrypted network connections with analytics companies Localytics and Apptimize.

The revelation drew sharp criticism of Grindr, with many slamming the upstart for sharing what many consider to be highly sensitive personal information with third-parties along with GPS coordinates.

Source: Grindr: Yeah, we shared your HIV status info with other companies – but we didn’t charge them! • The Register

Most of Facebook’s 2.2 billion users had their data scraped by external actors – because it was easy to do

At this point, the social media company is just going for broke, telling the public it should just assume that “most” of the 2.2 billion Facebook users have probably had their public data scraped by “malicious actors.”

[…]

Meanwhile, reports have focused on a variety of issues that have popped up in just the last 24 hours. It’s hard to focus on what matters—and frankly, all of it seems to matter, so in turn, it ends up feeling like none of it does. This is the Trump PR playbook, and Facebook is running it perfectly. It’s the media version of too big to fail, call it too big to matter. Let us suggest that you just zero in on one detail from yesterday’s blog post about new restrictions on data access on the platform.

Mike Schroepfer, Facebook’s chief technology officer, explained that prior to yesterday, “people could enter another person’s phone number or email address into Facebook search to help find them.” This function would help you cut through all the John Smiths and locate the page of your John Smith. He gave the example of Bangladesh, where the tool was used for 7 percent of all searches. Thing is, it was also useful to data-scrapers. Schroepfer wrote:

However, malicious actors have also abused these features to scrape public profile information by submitting phone numbers or email addresses they already have through search and account recovery. Given the scale and sophistication of the activity we’ve seen, we believe most people on Facebook could have had their public profile scraped in this way. So we have now disabled this feature. We’re also making changes to account recovery to reduce the risk of scraping as well.

The full meaning of that paragraph might not be readily apparent, but imagine you’re a hacker who bought a huge database of phone numbers on the dark web. Those numbers might have some use on their own, but they become way more useful for breaking into individual systems or committing fraud if you can attach more data to them. Facebook is saying that this kind of malicious actor would regularly take one of those numbers and use the platform to hunt down all publicly available data on its owner. This process, of course, could be automated and reap huge rewards with little effort. Suddenly, the hacker might have a user’s number, photos, marriage status, email address, birthday, location, pet names, and more—an excellent toolkit to do some damage.

In yesterday’s Q&A, Zuckerberg explained that Facebook did have some basic protections to prevent the sort of automation that makes this particularly convenient, but “we did see a number of folks who cycled through many thousands of IPs, hundreds of thousands of IP addresses to evade the rate-limiting system, and that wasn’t a problem we really had a solution to.” The ultimate solution was to shut the features down. As far as the impact goes, “I think the thing people should assume, given this is a feature that’s been available for a while—and a lot of people use it in the right way—but we’ve also seen some scraping, I would assume if you had that setting turned on, that someone at some point has accessed your public information in this way,” Zuckerberg said. Did you have that setting turned on? Ever? Given that Facebook says “most” accounts were affected, it’s safe to assume you did.
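Facebook hasn’t described its rate-limiting internals, but Zuckerberg’s account maps onto a standard per-IP counter — and shows exactly why it fails against an attacker rotating addresses. A minimal sketch (the threshold and the IPs are invented for illustration):

```python
from collections import defaultdict

class PerIPRateLimiter:
    """Allow at most `limit` lookups per source IP.
    An attacker who rotates through thousands of addresses stays under
    the per-IP limit while making an enormous number of total requests."""

    def __init__(self, limit: int = 100):
        self.limit = limit
        self.counts = defaultdict(int)

    def allow(self, ip: str) -> bool:
        self.counts[ip] += 1
        return self.counts[ip] <= self.limit

limiter = PerIPRateLimiter(limit=100)

# One IP hammering the search endpoint gets cut off at the limit...
single_ip = sum(limiter.allow("203.0.113.1") for _ in range(1000))

# ...but the same 1,000 lookups spread across rotating IPs all succeed,
# because no individual address ever crosses the threshold.
many_ips = sum(limiter.allow(f"198.51.100.{i % 256}") for i in range(1000))
```

This is why Facebook’s “ultimate solution” was to remove the lookup feature entirely rather than keep tuning the limiter.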

[…]

Mark Zuckerberg has known from the beginning that his creation was bad for privacy and security. Activists, the press, and tech experts have been saying it for years, but we the public either didn’t understand, didn’t care, or chose to ignore the warnings. That’s not totally the public’s fault. We’re only now seeing a big red example of what it means for one company, controlled by one man, to have control over seemingly limitless personal information. Even the NSA can’t keep its secret hacking tools on lockdown, so why would Facebook be able to protect your information? In many respects, it was just giving it away.

Source: Facebook Just Made a Shocking Admission, and We’re All Too Exhausted to Notice

Cambridge Analytica whistleblower: Facebook data could have come from more than 87 million users

Cambridge Analytica whistleblower Christopher Wylie says the data the firm gathered from Facebook could have come from more than 87 million users and could be stored in Russia.

The number of Facebook users whose personal information was accessed by Cambridge Analytica “could be higher, absolutely,” than the 87 million users acknowledged by Facebook, Wylie told NBC’s Chuck Todd during a “Meet the Press” segment Sunday.

Wylie added that his lawyer has been contacted by US authorities, including congressional investigators and the Department of Justice, and says he plans to cooperate with them.

“We’re just setting out dates that I can actually go and sit down and meet with the authorities,” he said.

The former Cambridge Analytica employee said that “a lot of people” had access to the data and referenced a “genuine risk” that the harvested data could be stored in Russia.

“It could be stored in various parts of the world, including Russia, given the fact that the professor who was managing the data harvesting process was going back and forth between the UK and to Russia,” Wylie said.

Aleksander Kogan, a Russian data scientist who gave lectures at St. Petersburg State University, gathered Facebook data from millions of Americans. He then sold it to Cambridge Analytica, which worked with President Donald Trump’s 2016 presidential campaign.

When asked if he thought Facebook was even able to calculate the number of users affected, Wylie stressed that data can be copied once it leaves a database.

“I know that Facebook is now starting to take steps to rectify that and start to find out who had access to it and where it could have gone, but ultimately it’s not watertight to say that, you know, we can ensure that all the data is gone forever,” he said.

Source: Cambridge Analytica whistleblower: Facebook data could have come from more than 87 million users – CNNPolitics

Yes, Cops Are Now Opening iPhones With Dead People’s Fingerprints

Separate sources close to local and federal police investigations in New York and Ohio, who asked to remain anonymous as they weren’t authorized to speak on record, said it was now relatively common for fingerprints of the deceased to be depressed on the scanner of Apple iPhones, devices which have been wrapped up in increasingly powerful encryption over recent years. For instance, the technique has been used in overdose cases, said one source. In such instances, the victim’s phone could contain information leading directly to the dealer.

And it’s entirely legal for police to use the technique, even if there might be some ethical quandaries to consider. Marina Medvin, owner of Medvin Law, said that once a person is deceased, they no longer have a privacy interest in their dead body. That means they no longer have standing in court to assert privacy rights.

Relatives or other interested parties have little chance of stopping cops using fingerprints or other body parts to access smartphones too. “Once you share information with someone, you lose control over how that information is protected and used. You cannot assert your privacy rights when your friend’s phone is searched and the police see the messages that you sent to your friend. Same goes for sharing information with the deceased – after you released information to the deceased, you have lost control of privacy,” Medvin added.

Police know it too. “We do not need a search warrant to get into a victim’s phone, unless it’s shared owned,” said Ohio police homicide detective Robert Cutshall, who worked on the Artan case. In previous cases detailed by Forbes police have required warrants to use the fingerprints of the living on their iPhones.

[…]

Police are now looking at how they might use Apple’s Face ID facial recognition technology, introduced on the iPhone X. And it could provide an easier path into iPhones than Touch ID.

Marc Rogers, researcher and head of information security at Cloudflare, told Forbes he’d been poking at Face ID in recent months and had discovered it didn’t appear to require the visage of a living person to work. Whilst Face ID is supposed to use your attention in combination with natural eye movement, so fake or non-moving eyes can’t unlock devices, Rogers found that the tech can be fooled simply using photos of open eyes. That was something also verified by Vietnamese researchers when they claimed to have bypassed Face ID with specially-created masks in November 2017, said Rogers.

Secondly, Rogers discovered this was possible from many angles and the phone only seemed to need to see one open eye to unlock. “In that sense it’s easier to unlock than Touch ID – all you need to do is show your target his or her phone and the moment they glance it unlocks,” he added. Apple declined to comment for this article.

Source: Yes, Cops Are Now Opening iPhones With Dead People’s Fingerprints

Great, Now Delta Air Lines Is Normalizing Casual Fingerprinting

Delta Air Lines announced Monday that it’s rolling out biometric entry at its line of airport lounges. With the press of two fingers, Delta members will be able to enter any of Delta’s 50 exclusive lounges for drinks, comfortably unaware of the encroaching dystopian biometric surveillance structure closing around travel.

Thanks to a partnership with Clear, a biometrics company offering a “frictionless travel experience,” privileged jet-setters can use their fingerprints to enter Delta Sky Clubs.

[…]

But this veneer of comfort masks the fact that biometrics are a form of surveillance hotly contested by privacy and civil liberties experts. For example, face recognition in airports is consistently less accurate on women and people of color, yet is asymmetrically applied against them as they travel. Clear uses finger and iris data, but Delta was the nation’s first airline to use face recognition to verify passports, again via automated self-service kiosks.

At a time when people should be more wary of biometrics, airports are carefully rebranding surveillance as a luxury item. But, as people become more comfortable with being poked, prodded, fingerprinted, and scanned as they travel, privacy is becoming a fast-evaporating luxury.

Source: Great, Now an Airline Is Normalizing Casual Fingerprinting

Please remember that you can’t change your biometrics (easily), so be wary of leaving them in some database secured who knows how and shared with who knows who.

Facebook Acknowledges It Has Been Keeping Records of Android Users’ Calls, Texts

Last week, a user found that Facebook had a record of the date, time, duration, and recipient of calls he had made over the past few years. A couple of days later, Ars Technica published an account of several others — all Android users — who found similar records. Now, Slate Magazine is reporting that Facebook has acknowledged that it was collecting and storing these logs, “attributing it to an opt-in feature for those using Messenger or Facebook Lite on an Android device.” The company did, however, deny that it was collecting call or text history without a user’s permission. From the report: “This helps you find and stay connected with the people you care about, and provides you with a better experience across Facebook,” the company said in a post Sunday. “People have to expressly agree to use this feature. We introduced this feature for Android users a couple of years ago. Contact importers are fairly common among social apps and services as a way to more easily find the people you want to connect with.”

Ars Technica disputed the claim that everyone knowingly opted in. Instead, Ars Technica’s Sean Gallagher reported that opt-in was the default setting and users were not separately alerted to it. Nor did Facebook ever say publicly that it was collecting that information. “Facebook says that the company keeps the data secure and does not sell it to third parties,” Gallagher wrote. “But the post doesn’t address why it would be necessary to retain not just the numbers of contacts from phone calls and SMS messages, but the date, time, and length of those calls for years.”

Source: Facebook Acknowledges It Has Been Keeping Records of Android Users’ Calls, Texts – Slashdot

New Slack Tool Lets Your Boss Potentially Access Far More of Your Data Than Before, without notification

According to Slack’s new guidelines, however, Compliance Exports will be replaced by “a self-service export tool” on April 20th. Previously, an employer had to request a data dump of all communications to get access to private channels and direct messages. This new tool should streamline things so they can archive all your shit-talk and time-wasting with colleagues on a regular basis. The tool not only makes it easy for an admin to access everything with a few clicks, it also enables automatic exports to be scheduled on a daily, weekly, or monthly basis. An employer still has to go through a request process to get the tool, but Slack declined to elaborate on what’s involved in that process.

What’s particularly concerning is that Compliance Exports were designed to notify users when they were enabled, and future exports only covered data generated after that notification. A spokesperson for Slack confirmed to Gizmodo that this won’t be the case going forward: the new tool will be able to export all of the data that your Slack settings previously retained. Before, if you were up on Slack policy, you could feel pretty comfortable that your private conversations were private unless you got that Compliance Exports notification, after which you’d want to make sure you didn’t discuss potentially sensitive topics in Slack. Now, anyone who was under the impression that they were relatively safe might have some cause to worry.

Source: New Slack Tool Lets Your Boss Potentially Access Far More of Your Data Than Before

How to Find Out Everything Facebook Knows About You

If you can’t bring yourself to delete your Facebook account entirely, you’re probably thinking about sharing a lot less private information on the site. The company actually makes it pretty easy to find out how much data it’s collected from you, but the results might be a little scary.

When software developer Dylan McKay went and downloaded all of his data from Facebook, he was shocked to find that the social network had timestamps on every phone call and SMS message he made in the past few years, even though he says he doesn’t use the app for calls or texts. It even created a log of every call between McKay and his partner’s mom.

To get your own data dump, head to your Facebook Settings and click on “Download a copy of your data” at the bottom of the page. Facebook needs a little time to compile all that information, but it should be ready in about 10 minutes based on my own experience. You’ll receive a notification sending you to a page where you can download the data—after re-entering your account password, of course.

The (likely huge) file downloads onto your computer as a ZIP. Once you extract it, open the new folder and click on the “index.html” to view the data in your browser.
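Assuming the archive looks as described above — a ZIP with an `index.html` at the top level (Facebook’s export layout has changed over time) — the extract-and-open step could be scripted like this:

```python
import webbrowser
import zipfile
from pathlib import Path

def open_facebook_archive(zip_path: str, dest: str = "facebook-data") -> Path:
    """Extract the downloaded ZIP and open its index.html in the
    default browser."""
    with zipfile.ZipFile(zip_path) as archive:
        archive.extractall(dest)
    index = Path(dest) / "index.html"
    # file:// URI so the browser renders the local page.
    webbrowser.open(index.resolve().as_uri())
    return index
```

From there you can browse the exported data exactly as described in the article.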

Be sure to check out the Contact Info tab for a list of everyone you’ve ever known and their phone numbers (creepy, Facebook). You can also scroll down to the bottom of the Friends tab to see what phase of your life Facebook thinks you’re in: I got “Starting Adult Life.”

Source: How to Find Out Everything Facebook Knows About You

US cops go all Minority Report: Google told to cough up info on anyone near a crime scene

Efforts to track down criminals in the US state of North Carolina have laid bare a dangerous gap in the law over the use of location data.

Raleigh police went to court at least three times last year and got a warrant requiring Google to share the details of any users that were close to crime scenes during specific times and dates.

The first crime was the murder of a cab driver in November 2016, the second an arson attack in March 2017 and the third, sexual battery, in August 2017 – suggesting that the police force is using the approach to discover potentially incriminating evidence for increasingly less serious crimes.

In each case, the cops used GPS coordinates to draw a rough rectangle around the areas of interest – covering nearly 20 acres in the murder case – and asked for the details of any users that entered those areas during time periods of between 60 and 90 minutes, e.g. between 1800 and 1930.
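A “geofence” query of this kind reduces to a bounding-box-plus-time-window filter over location history. A schematic sketch (the coordinates, time encoding, and names are invented, loosely mimicking the Raleigh warrants):

```python
from dataclasses import dataclass

@dataclass
class LocationPing:
    user_id: str
    lat: float
    lon: float
    timestamp: int  # minutes since midnight, for simplicity

def users_in_area(pings, lat_min, lat_max, lon_min, lon_max, t_start, t_end):
    """Return every user with at least one ping inside the rectangle
    during the time window -- e.g. 1800-1930 is minutes 1080 to 1170."""
    return {
        p.user_id
        for p in pings
        if lat_min <= p.lat <= lat_max
        and lon_min <= p.lon <= lon_max
        and t_start <= p.timestamp <= t_end
    }
```

Note that the filter is over everyone whose data Google holds, which is what makes this kind of warrant so much broader than a search targeting a named suspect.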

The warrants were granted by a judge complete with an order to prevent disclosure so Google was legally prevented from informing impacted users that their details had been shared with law enforcement. Google complied with the warrants.

It is worth noting that the data haul is not limited to users of Google hardware, i.e. phones running Android, but extends to any phone that ran Google apps – which encompasses everything from its driving app to its calendar, browser, predictive keyboard and so on.

Source: US cops go all Minority Report: Google told to cough up info on anyone near a crime scene • The Register

This sort of over-broad investigation seems like a real breach of privacy to me. That Google collects this information in a fashion that allows it to be so easily supplied is a real shocker.

Telegram Loses Bid to Block Russia From Encryption Keys

Telegram, the encrypted messaging app that’s prized by those seeking privacy, lost a bid before Russia’s Supreme Court to block security services from getting access to users’ data, giving President Vladimir Putin a victory in his effort to keep tabs on electronic communications.

Supreme Court Judge Alla Nazarova on Tuesday rejected Telegram’s appeal against the Federal Security Service, the successor to the KGB spy agency, which last year asked the company to share its encryption keys. Telegram declined to comply and was hit with a fine of $14,000. Communications regulator Roskomnadzor said Telegram now has 15 days to provide the encryption keys.

Telegram, which is in the middle of an initial coin offering of as much as $2.55 billion, plans to appeal the ruling in a process that may last into the summer, according to the company’s lawyer, Ramil Akhmetgaliev. Any decision to block the service would require a separate court ruling, the lawyer said.

“Threats to block Telegram unless it gives up private data of its users won’t bear fruit. Telegram will stand for freedom and privacy,” Pavel Durov, the company’s founder, said on his Twitter page.

Putin signed laws in 2016 on fighting terrorism, which included a requirement for messaging services to provide the authorities with means to decrypt user correspondence. Telegram challenged an auxiliary order by the Federal Security Service, claiming that the procedure doesn’t involve a court order and breaches constitutional rights for privacy, according to documents.

The security agency, known as the FSB, argued in court that obtaining the encryption keys doesn’t violate users’ privacy because the keys by themselves aren’t considered information of restricted access. Collecting data on particular suspects using the encryption would still require a court order, the agency said.

“The FSB’s argument that encryption keys can’t be considered private information defended by the Constitution is cunning,” Akhmetgaliev, Telegram’s lawyer, told reporters after the hearing. “It’s like saying, ‘I’ve got a password from your email, but I don’t control your email, I just have the possibility to control.’”

Source: Telegram Loses Bid to Block Russia From Encryption Keys – Bloomberg

Windows 10 S (for Surface) and Cortana force you to use Edge and Bing, and Windows Mail forces links to open in Edge

Windows 10 S, Microsoft’s new locked-down operating system that comes bundled with the Surface Laptop, won’t allow you to change the default Web browser away from Microsoft’s own Edge. Furthermore, Edge’s default search provider can’t be altered: Bing is all you get.

Curiously, you can download other browsers from the Windows Store, such as Opera Mini, but Windows 10 S won’t let you set them as the default browser: if you try to open an HTML file, or click a link in another app, it will always open in Edge, according to Microsoft’s official FAQ on the topic.

The FAQ uses very direct language: “Microsoft Edge is the default web browser on Microsoft 10 S. The default search provider in Microsoft Edge and Internet Explorer cannot be changed.” It isn’t clear if OEMs will be able to override this feature of Windows 10 S.

It’s worth noting at this juncture that Windows 10 S, much like its spiritual predecessor Windows RT, will only run apps that you download from the Windows Store—and currently, neither Firefox nor Chrome has been packaged up for the Windows Store. I can’t imagine that Google will be super-keen to bring Chrome to the Windows Store if Windows 10 S users can’t change the default browser.

Source: Windows 10 S forces you to use Edge and Bing | Ars Technica

Edge might be Windows 10’s built-in browser, but it definitely isn’t the most popular browser — NetMarketShare reported just under 4 percent usage share as of February 2018, well below Chrome’s 59 percent. And now, it looks like Microsoft may be trying to boost its share through software policies. The company is testing a Windows 10 preview release in the Skip Ahead ring which opens all Windows Mail web links in Edge, regardless of your app defaults. It provides the “best, most secure and consistent experience,” Microsoft argued.

The move isn’t coming completely out of the blue. Microsoft required Cortana users to rely on Bing search and open any web content in Edge, so this is arguably an extension of that policy.

Even so, the move is likely to irk at least some Windows 10 users. To start, its claims are highly subjective. Edge certainly isn’t immune to security exploits, and relying on it could actually create an inconsistent experience if you aren’t completely invested in Microsoft software. If you use Chrome on an Android phone, wouldn’t you want every link on your PC to open in Chrome so that they’re easier to retrieve when you’re on your handset? We can’t imagine that European antitrust regulators would be happy about Microsoft locking users into its own browser, either. We’ve asked Microsoft if it can comment on the concerns and will let you know if it has something to say.

Source: Microsoft tests forcing Windows Mail users to open links in Edge

Booking Flights: Our Data Flies with Us – the huge dataset described

Every time you book a flight, you generate personal data that is ripe for harvesting: information like the details on an ID card, your address, your passport information and your travel itinerary, as well as your frequent-flyer number, method of payment and travel preferences (dietary restrictions, mobility restrictions, etc.). All that data becomes part of a registry, in the form of a Passenger Name Record (PNR) – a generic name given to records created by aircraft operators or their authorised agents for each journey booked by or on behalf of any passenger.

When we book a flight or travel itinerary, the travel agent or booking website creates our PNR. Most airlines or travel agents choose to host their PNR databases on a specialised computer reservation system (CRS) or a Global Distribution System (GDS), which coordinates the information from all the travel agents and airlines worldwide, to avoid things like duplicated flight reservations. This means that CRSs/GDSs centralise and store vast amounts of data about travellers. Though we are focusing on air travel here, it is important to note that the PNR is not only flight-related. It can also include other services such as car rentals, hotel reservations and train trips.
[…]
A PNR isn’t necessarily created all at once. If we use the same agency or airline to book our flight and other services, like a hotel, the agency will use the same PNR. Therefore, information from many different sources will be gradually added to our PNR through different channels over time. That means the dataset is much larger than just the flight info: a PNR can contain data as important as our exact whereabouts at specific points in time.

What are the implications of all this for our privacy? The journalist and travel advocate Edward Hasbrouck has been researching and denouncing the PNR’s effects on privacy in the US for decades. In Europe, organisations like European Digital Rights (EDRi) have also criticised PNRs extensively through their advocacy and awareness campaigns. According to Hasbrouck:

PNR data reveals our associations, our activities, and our tastes and preferences. It shows where we went, when, with whom, for how long, and at whose expense. Through departmental and project billing codes, business travel PNR’s reveal confidential internal corporate and other organisation structures and lines of authority and show which people were involved in work together, even if they travelled separately. PNRs typically contain credit card numbers, telephone numbers, email addresses, and IP addresses, allowing them to be easily merged with financial and communications metadata

Your individual PNR also contains a section for free-text “remarks” that can be entered by the airline, the travel agency, a tour operator, a third-party call centre or the staff of the ground-handling contractor. Such texts might include sensitive and private information, like special meal requests and particular medical needs. This may seem innocuous, but information like special meal requests can indicate our religious or political affiliations, especially when it is cross-referenced with other details included in our PNR. Regardless of whether the profile assigned to us is accurate, the repercussions and implications of that profiling are concerning – especially in the absence of public awareness about them.
[…]
In the United States, PNRs are stored in the Automated Targeting System-Passenger (ATS-P), where they become part of an active database for up to five years (after the first six months, they are de-personalised and masked). After five years, the data is transferred to a dormant database for up to ten more years, where it remains available for counter-terrorism purposes for the full duration of its 15-year retention.

According to Edward Hasbrouck, PNRs cannot be deleted: once created, they are archived and retained in the Computerised Reservation Systems and/or Global Distribution Systems (CRS/GDS), and can still be viewed, even if we never bought a ticket and cancelled our reservations:

“CRS’s retain flown, archived, purged, and deleted PNR’s indefinitely. It doesn’t really matter whether governments store copies of entire PNR’s or only portions of them, whether they filter out certain especially “sensitive” data from their copies of PNR’s, or for how long they retain them. As long as a government agency has the record locator or the airline name, flight number, and date, they can retrieve the complete PNR from the CRS. That’s especially true for the U.S. government, since even PNR’s created by airlines, travel agencies, tour operators, or airline offices in other countries, for flights within and between other countries that don’t touch the USA, are routinely stored in CRS’s based in the USA.”
[…]
Under EU regulations, governments can retain PNR data for a maximum of five years, to allow law-enforcement officials to access it if necessary. The regulations state that after six months, the data is masked out or anonymised. But according to research by EDRi, records are not necessarily anonymised or encrypted, and, in fact, the data can be easily re-personalised.
[…]
PNR is a relatively old system, pre-dating the internet as we know it today. Airlines have built their own systems on top of this, allowing passengers to make adjustments to their reservations using a six-character booking confirmation number or PNR locator. But although the PNR system was originally designed to facilitate the sharing of information rather than the protection of it, in the current digital environment and with the cyber-threats facing our data online, this system needs to be updated to keep up with the existing risks. PNRs are information-rich files that are of interest not only to governments; they are also valuable to third parties – whether corporations or adversaries. Potential uses of the data could include anything from marketing research to hacks aimed at obtaining our personal information for financial scams or even doxxing or inflicting harm on activists.

According to Hasbrouck, the controls over who can access PNR data are insufficient, and there are no limitations on how CRS/GDS users (whether governments or travel agents) can access it. Furthermore, there are no records of when a CRS/GDS user has retrieved a PNR, from where they retrieved the record, or for what purpose. This means that any travel agent or any government can retrieve our PNR and access all the data it contains, no questions asked and without leaving a trace.
[…]
Photos of our tickets or luggage tags pose particular risks because of the sensitive information printed on them. In addition to our name and flight information, they also include our PNR locator, though sometimes only inside the barcode. Even if we cannot “see” information in the barcodes or sequences of letters and numbers on our tickets, other people may be able to derive meaning from them.
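To see why a photographed luggage tag is risky, it helps to know that the boarding-pass barcode follows IATA’s Bar Coded Boarding Pass (BCBP) layout: plain fixed-width text, with the PNR locator sitting right after the passenger name. The sketch below decodes the mandatory fields; the offsets follow the public BCBP description and the sample record is invented, so treat this as illustrative rather than a complete decoder.

```python
def parse_bcbp(payload: str) -> dict:
    """Pull the main fields out of an IATA BCBP mandatory block.

    Offsets follow the fixed-width layout of the mandatory items;
    optional/conditional items after them are ignored here.
    """
    return {
        "passenger_name": payload[2:22].strip(),
        "pnr_locator":    payload[23:30].strip(),  # the record locator itself
        "from_airport":   payload[30:33],
        "to_airport":     payload[33:36],
        "carrier":        payload[36:39].strip(),
        "flight_number":  payload[39:44].strip(),
    }

# A made-up single-leg record laid out in the BCBP format:
sample = "M1DESMARAIS/LUC       EABC123 YULFRAAC 0834 226F001A0025 100"
print(parse_bcbp(sample))
```

Anyone who scans the barcode in a posted photo gets exactly these fields, including the locator that unlocks the full PNR.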

Source: Booking Flights: Our Data Flies with Us – Our Data Our Selves

Palantir has secretly been using New Orleans to test its predictive policing technology, and was given access to vast amounts of private data without oversight thanks to a loophole

The program began in 2012 as a partnership between New Orleans Police and Palantir Technologies, a data-mining firm founded with seed money from the CIA’s venture capital firm. According to interviews and documents obtained by The Verge, the initiative was essentially a predictive policing program, similar to the “heat list” in Chicago that purports to predict which people are likely drivers or victims of violence.

The partnership has been extended three times, with the third extension scheduled to expire on February 21st, 2018. The city of New Orleans and Palantir have not responded to questions about the program’s current status.

Predictive policing technology has proven highly controversial wherever it is implemented, but in New Orleans, the program escaped public notice, partly because Palantir established it as a philanthropic relationship with the city through Mayor Mitch Landrieu’s signature NOLA For Life program. Thanks to its philanthropic status, as well as New Orleans’ “strong mayor” model of government, the agreement never passed through a public procurement process.

In fact, key city council members and attorneys contacted by The Verge had no idea that the city had any sort of relationship with Palantir, nor were they aware that Palantir used its program in New Orleans to market its services to another law enforcement agency for a multimillion-dollar contract.

Even within the law enforcement community, there are concerns about the potential civil liberties implications of the sort of individualized prediction Palantir developed in New Orleans, and whether it’s appropriate for the American criminal justice system.

“They’re creating a target list, but we’re not going after Al Qaeda in Syria,” said a former law enforcement official who has observed Palantir’s work first-hand as well as the company’s sales pitches for predictive policing. The former official spoke on condition of anonymity to freely discuss their concerns with data mining and predictive policing. “Palantir is a great example of an absolutely ridiculous amount of money spent on a tech tool that may have some application,” the former official said. “However, it’s not the right tool for local and state law enforcement.”

Six years ago, one of the world’s most secretive and powerful tech firms developed a contentious intelligence product in a city that has served as a neoliberal laboratory for everything from charter schools to radical housing reform since Hurricane Katrina. Because the program was never public, important questions about its basic functioning, risk for bias, and overall propriety were never answered.
[…]
Palantir’s prediction model in New Orleans used an intelligence technique called social network analysis (or SNA) to draw connections between people, places, cars, weapons, addresses, social media posts, and other indicia in previously siloed databases. Think of the analysis as a practical version of a Mark Lombardi painting that highlights connections between people, places, and events. After entering a query term — like a partial license plate, nickname, address, phone number, or social media handle or post — NOPD’s analyst would review the information scraped by Palantir’s software and determine which individuals are at the greatest risk of either committing violence or becoming a victim, based on their connection to known victims or assailants.
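The query-and-pivot workflow described above can be sketched with a toy graph. This is an illustrative sketch only, not Palantir’s actual model: the records, fields, and link rule are invented, and the real system draws on far more data sources.

```python
from collections import deque

# Toy records: each person is linked to attributes (phones, addresses,
# plates). Sharing an attribute creates an edge between two people.
records = {
    "person_a": {"phone:555-0101", "addr:12 Oak St"},
    "person_b": {"phone:555-0101", "plate:XYZ123"},  # shares a phone with A
    "person_c": {"plate:XYZ123"},                    # shares a car with B
    "person_d": {"addr:99 Elm St"},                  # no link to the others
}

def build_graph(records):
    """Connect any two people who share at least one attribute."""
    graph = {p: set() for p in records}
    people = list(records)
    for i, p in enumerate(people):
        for q in people[i + 1:]:
            if records[p] & records[q]:
                graph[p].add(q)
                graph[q].add(p)
    return graph

def hops_from(graph, source):
    """BFS: number of hops from `source` to every reachable person."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        cur = queue.popleft()
        for nxt in graph[cur]:
            if nxt not in dist:
                dist[nxt] = dist[cur] + 1
                queue.append(nxt)
    return dist

graph = build_graph(records)
# If person_a is a known victim, rank everyone else by network distance.
print(hops_from(graph, "person_a"))
```

In this toy example, person_b is one hop from the known victim and person_c two hops, while person_d never appears; an analyst would treat closer nodes as higher-risk, which is the core of the concern about guilt by association.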

The data on individuals came from information scraped from social media as well as NOPD criminal databases for ballistics, gangs, probation and parole information, jailhouse phone calls, calls for service, the central case management system (i.e., every case NOPD had on record), and the department’s repository of field interview cards. The latter database represents every documented encounter NOPD has with citizens, even those that don’t result in arrests. In 2010, The Times-Picayune revealed that Chief Serpas had mandated that the collection of field interview cards be used as a measure of officer and district performance, resulting in over 70,000 field interview cards filled out in 2011 and 2012. The practice resembled NYPD’s “stop and frisk” program and was instituted with the express purpose of gathering as much intelligence on New Orleanians as possible, regardless of whether or not they committed a crime.
[…]
NOPD then used the list of potential victims and perpetrators of violence generated by Palantir to target individuals for the city’s CeaseFire program. CeaseFire is a form of the decades-old carrot-and-stick strategy developed by David Kennedy, a professor at John Jay College in New York. In the program, law enforcement informs potential offenders with criminal records that they know of their past actions and will prosecute them to the fullest extent if they re-offend. If the subjects choose to cooperate, they are “called in” to a required meeting as part of their conditions of probation and parole and are offered job training, education, potential job placement, and health services. In New Orleans, the CeaseFire program is run under the broader umbrella of NOLA For Life, which is Mayor Landrieu’s pet project that he has funded through millions of dollars from private donors.

According to Serpas, the person who initially ran New Orleans’ social network analysis from 2013 through 2015 was Jeff Asher, a former intelligence agent who joined NOPD from the CIA. If someone had been shot, Serpas explained, Asher would use Palantir’s software to find people associated with them through field interviews or social media data. “This data analysis brings up names and connections between people on FIs [field interview cards], on traffic stops, on victims of reports, reporting victims of crimes together, whatever the case may be. That kind of information is valuable for anybody who’s doing an investigation,” Serpas said.
[…]
Of the 308 people who participated in call-ins from October 2012 through March 2017, seven completed vocational training, nine completed “paid work experience,” none finished a high school diploma or GED course, and 32 were employed at one time or another through referrals. Fifty participants were detained following their call-in, and two have since died.

By contrast, law enforcement vigorously pursued its end of the program. From November 2012, when the new Multi-Agency Gang Unit was founded, through March 2014, racketeering indictments escalated: 83 alleged gang members in eight gangs were indicted in the 16-month period, according to an internal Palantir presentation.
[…]
Call-ins declined precipitously after the first few years. According to city records, eight group call-ins took place from 2012 to 2014, but only three took place in the following three years. Robert Goodman, a New Orleans native who became a community activist after completing a prison sentence for murder, worked as a “responder” for the city’s CeaseFire program until August 2016, discouraging people from engaging in retaliatory violence. Over time, Goodman noticed more of an emphasis on the “stick” component of the program and more control over the non-punitive aspects of the program by city hall that he believes undermined the intervention work. “It’s supposed to be ran by people like us instead of the city trying to dictate to us how this thing should look,” he said. “As long as they’re not putting resources into the hoods, nothing will change. You’re just putting on Band-Aids.”

After the first two years of Palantir’s involvement with NOPD, the city saw a marked drop in murders and gun violence, but it was short-lived. Even former NOPD Chief Serpas believes that the preventative effect of calling in dozens of at-risk individuals — and indicting dozens of them — began to diminish.

“When we ended up with nearly nine or 10 indictments with close to 100 defendants for federal or state RICO violations of killing people in the community, I think we got a lot of people’s attention in that criminal environment,” Serpas said, referring to the racketeering indictments. “But over time, it must’ve wore off because before I left in August of ‘14, we could see that things were starting to slide.”

Nick Corsaro, the University of Cincinnati professor who helped build NOPD’s gang database, also worked on an evaluation of New Orleans’ CeaseFire strategy. He found that New Orleans’ overall decline in homicides coincided with the city’s implementation of the CeaseFire program, but the Central City neighborhoods targeted by the program “did not have statistically significant declines that corresponded with November 2012 onset date.”
[…]
The secrecy surrounding the NOPD program also raises questions about whether defendants have been given evidence they have a right to view. Sarah St. Vincent, a researcher at Human Rights Watch, recently published an 18-month investigation into parallel construction, or the practice of law enforcement concealing evidence gathered from surveillance activity. In an interview, St. Vincent said that law enforcement withholding intelligence gathering or analysis like New Orleans’ predictive policing work effectively kneecaps the checks and balances of the criminal justice system. At the Cato Institute’s 2017 Surveillance Conference in December, St. Vincent raised concerns about why information garnered from predictive policing systems was not appearing in criminal indictments or complaints.

“It’s the role of the judge to evaluate whether what the government did in this case was legal,” St. Vincent said of the New Orleans program. “I do think defense attorneys would be right to be concerned about the use of programs that might be inaccurate, discriminatory, or drawing from unconstitutional data.”

If Palantir’s partnership with New Orleans had been public, the issues of legality, transparency, and propriety could have been hashed out in a public forum during an informed discussion with legislators, law enforcement, the company, and the public. For six years, that never happened.

Source: Palantir has secretly been using New Orleans to test its predictive policing technology – The Verge

One of the big problems here is that there is no public knowledge of, and hardly any oversight over, the program. Nobody knows whether the system is being implemented fairly or cost-effectively (the costs are huge!), or even whether it works. It seemed to work for a while, but the effects appear to have dropped off after two years of operation, mainly because the city leaned ever harder on the “stick” to counter crime while getting rid of the “carrot”. The amount of private data handed to Palantir without any discussion or consent is worrying, to say the least.

Hey Microsoft, Stop Installing Apps On My PC Without Asking

I’m getting sick of Windows 10’s auto-installing apps. Apps like Facebook are now showing up out of nowhere, and even displaying notifications begging me to use them. I didn’t install the Facebook app, I didn’t give it permission to show notifications, and I’ve never even used it. So why is it bugging me?

Windows 10 has always been a little annoying about these apps, but it wasn’t always this bad. Microsoft went from “we pinned a few tiles, but the apps aren’t installed until you click them” to “the apps are now automatically installed on your PC” to “the automatically installed apps are now sending you notifications”. It’s ridiculous.
The “Microsoft Consumer Experience” Is Consumer-Hostile…

This is all thanks to the “Microsoft Consumer Experience” program, which can’t be disabled on normal Windows 10 Home or Professional systems. That’s why every Windows 10 computer you start using has these bonus apps. The exact apps preinstalled can vary, but I’ve never seen a Windows 10 PC without Candy Crush.

The Microsoft Consumer Experience is actually a background task that runs whenever you sign into a Windows 10 PC with a new user account for the first time. It kicks into gear and automatically downloads apps like Candy Crush Soda Saga, FarmVille 2: Country Escape, Facebook, TripAdvisor, and whatever else Microsoft feels like promoting.

You can uninstall the apps from your Start menu, and they shouldn’t come back on your user account on the same hardware. However, the apps will also come back whenever you sign into a new PC with the same Microsoft account, forcing you to remove them on each device you use. And, if someone signs into your same PC with their own Microsoft account, Microsoft will “helpfully” download those apps for their account as well. There’s no way to tell Microsoft “stop downloading these apps on my PC” or “I never want these apps on this Microsoft account”.
…and Microsoft Won’t Let Us Disable It

There is, technically, a way to disable this and stop Windows from installing these apps…but it’s only for Windows 10 Enterprise and Education users. Even if you spent $200 for a Windows 10 Professional license because you want to use your PC for business, Microsoft won’t let you stop the “Consumer Experience” on a professional PC.
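For what it’s worth, the switch Microsoft exposes is the “Turn off Microsoft consumer experiences” group policy, which maps to a single registry value. As the article notes, it is only honored on Enterprise and Education editions, so this fragment is a sketch for those SKUs only:

```reg
Windows Registry Editor Version 5.00

; Honored only on Windows 10 Enterprise / Education
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\CloudContent]
"DisableWindowsConsumerFeatures"=dword:00000001
```

On Home and Professional, Windows simply ignores the value, which is exactly the complaint here.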

Source: Hey Microsoft, Stop Installing Apps On My PC Without Asking

Together with Windows 10 sending private data to Redmond without permission, this is another reason I have left the world of MS operating systems. I now use Linux Mint.

119,000 Passports and Photo IDs of FedEx Customers Found on Unsecured Amazon Server

Thousands of FedEx customers were exposed after the company left scanned passports, drivers licenses, and other documentation on a publicly accessible Amazon S3 server.

The scanned IDs originated from countries all over the world, including the United States, Mexico, Canada, Australia, Saudi Arabia, Japan, China, and several European countries. The IDs were attached to forms that included several pieces of personal information, including names, home addresses, phone numbers, and zip codes.

The server, discovered by researchers at the Kromtech Security Center, was secured as of Tuesday.

According to Kromtech, the server belonged to Bongo International LLC, a company that aided customers in performing shipping calculations and currency conversions, among other services. Bongo was purchased by FedEx in 2014 and renamed FedEx Cross-Border International a little over a year later. The service was discontinued in April 2017.

Source: 119,000 Passports and Photo IDs of FedEx Customers Found on Unsecured Amazon Server

Uzi Nissan Spent 8 Years Fighting The Car Company With His Name. He Nearly Lost Everything To Win. The legal system doesn’t work very well if you have no money.

Nissan the car company never really cared who Uzi Nissan was. Then it decided he had something it wanted very much—the website www.nissan.com, which he created for his small retail computer business in 1994—and it sued him for $10 million. When the two Nissans went to war, Uzi Nissan prevailed in the end, but lost almost everything along the way.

If you visit nissan.com expecting a polished presentation of Nissan’s latest lineup, you’re in for quite a shock. What you land on is Uzi Nissan’s corner of the internet: a shrine to the years of his life spent fighting what is now the largest car company on the planet.

You’re greeted with a straight-out-of-the-’90s web design with 3D-effect link buttons, minimal advertising, crossed-out Nissan Motor badges and a Nissan Computer logo design that seems to resemble a stamped business card.
[…]
If you further postpone your quest to get a quote on an Altima or a Rogue Sport and spend time exploring the site, you find pages and pages of articles on the Nissan Motor vs. Nissan Computer lawsuit, taught in business schools and law schools as one of the most notable domain cases from the age of the dotcom bubble.

“The study there is that you should first have your domain before you decide your name of business, and in law school it’s just to show that sometimes even the little guy can win,” he said.
[…]
At the time, it didn’t seem like the start of an all-consuming legal battle, a David vs. Goliath fight that took nearly 10 years and cost the small business owner millions of dollars—to say nothing of the incalculable toll on his personal life.

Source: Uzi Nissan Spent 8 Years Fighting The Car Company With His Name. He Nearly Lost Everything To Win

The story is well told and shows how ridiculous it is that this guy, who clearly had prior ownership of the Nissan name and domain name, had to pony up nearly $3m and 8 years of his life to keep what is rightfully his. There is no punishment for the big guy throwing resources at the courts to waste another person’s time and money.

2017: Dutch Military Intelligence (MIVD) placed 348 taps, Internal Intelligence (AIVD) 3,205. No idea how many the police did, but wow, that’s a lot!

The MIVD tapped a total of 348 times last year. The AIVD placed 3,205 taps that year. Today, both services published their tap statistics for the period 2002 through 2017 on their websites.

Source: MIVD tapte vorig jaar 348 keer | Nieuwsbericht | Defensie.nl


And of course we have no idea how many of these taps led to arrests or action.

How to Disable Facebook’s Facial Recognition Feature

To turn off facial recognition on your computer, click on the down arrow at the top of any Facebook page and then select Settings. From there, click “Face Recognition” from the left column, and then click “Do you want Facebook to be able to recognize you in photos and videos?” Select Yes or No based on your personal preferences.

On mobile, click on the three dots below your profile pic labeled “More” then select “View Privacy Shortcuts” then “More Settings,” followed by “Facial Recognition.” Click on the “Do you want Facebook to be able to recognize you in photos and videos?” button and select “No” to disable the feature.
[…]
The setting isn’t available in all countries, and will only appear as an option in your profile if you’re at least 18 years old and have the feature available to you.

Source: How to Disable Facebook’s Facial Recognition Feature

The 600+ Companies PayPal Shares Your Data With – Schneier on Security

One of the effects of GDPR — the new EU General Data Protection Regulation — is that we’re all going to be learning a lot more about who collects our data and what they do with it. Consider PayPal, which just released a list of over 600 companies it shares customer data with. Here’s a good visualization of that data.

Is 600 companies unusual? Is it more than average? Less? We’ll soon know.

Source: The 600+ Companies PayPal Shares Your Data With – Schneier on Security

Madison Square Garden Has Used Face-Scanning Technology on Customers

Madison Square Garden has quietly used facial-recognition technology to bolster security and identify those entering the building, according to multiple people familiar with the arena’s security procedures.

The technology uses cameras to capture images of people, and then an algorithm compares the images to a database of photographs to help identify the person and, when used for security purposes, to determine if the person is considered a problem. The technology, which is sometimes used for marketing and promotions, has raised concerns over personal privacy and the security of any data that is stored by the system.

Source: Madison Square Garden Has Used Face-Scanning Technology on Customers

What is your personal info worth to criminals? There’s a dark web market price index for that

Your entire online identity could be worth little more than £800, according to brand new research into the illicit sale of stolen personal info on the dark web (or just $1,200 if you are in the United States, according to the US edition of the index). While it may be no surprise to learn that credit card details are the most traded, did you know that fraudsters are hacking Uber, Airbnb, Spotify and Netflix accounts and selling them for little more than £5 each?

Everything has a price on the dark web it seems. Paypal accounts with a healthy balance attract the highest prices (£280 on average). At the other end of the scale though, hacked Deliveroo or Tesco accounts sell for less than £5. Cybercriminals can easily spend more on their lunchtime sandwich than buying up stolen credentials for online shopping accounts like Argos (£3) and ASOS (£1.50).

The average person has dozens of accounts that form their online identity, all of which can be hacked and sold. Our team of security experts reviewed tens of thousands of listings on three of the most popular dark web markets, Dream, Point and Wall Street Market. These encrypted websites, which can only be reached using the Tor browser, allow criminals to anonymously sell stolen personal info, along with all sorts of other contraband, such as illicit drugs and weapons.

We focused on listings featuring stolen ID, hacked accounts and personal info relevant to the UK to create the Dark Web Market Price Index. We calculated average sale prices for each item and were shocked to see that £820 is all it would cost to buy up someone’s entire identity if they were to have all the listed items.

Source: Dark Web Market Price Index (Feb 2018 – UK Edition) | Top10VPN.com

Sandvine’s PacketLogic Devices Used to Deploy Government Spyware in Turkey and Redirect Egyptian Users to Affiliate Ads when they tried to download popular software

This report describes our investigation into the apparent use of Sandvine/Procera Networks Deep Packet Inspection (DPI) devices to deliver nation-state malware in Turkey and indirectly into Syria, and to covertly raise money through affiliate ads and cryptocurrency mining in Egypt.
Key Findings

- Through Internet scanning, we found deep packet inspection (DPI) middleboxes on Türk Telekom’s network. The middleboxes were being used to redirect hundreds of users in Turkey and Syria to nation-state spyware when those users attempted to download certain legitimate Windows applications.
- We found similar middleboxes at a Telecom Egypt demarcation point. On a number of occasions, the middleboxes were apparently being used to hijack Egyptian Internet users’ unencrypted web connections en masse, and redirect the users to revenue-generating content such as affiliate ads and browser cryptocurrency mining scripts.
- After an extensive investigation, we matched characteristics of the network injection in Turkey and Egypt to Sandvine PacketLogic devices. We developed a fingerprint for the injection we found in Turkey, Syria, and Egypt and matched our fingerprint to a second-hand PacketLogic device that we procured and measured in a lab setting.
- The apparent use of Sandvine devices to surreptitiously inject malicious and dubious redirects for users in Turkey, Syria, and Egypt raises significant human rights concerns.

1. Summary

This report describes how we used Internet scanning to uncover the apparent use of Sandvine/Procera Networks Deep Packet Inspection (DPI) devices (i.e. middleboxes) for malicious or dubious ends, likely by nation-states or ISPs in two countries.
1.1. Turkey

We found that a series of middleboxes on Türk Telekom’s network were being used to redirect hundreds of users attempting to download certain legitimate programs to versions of those programs bundled with spyware. The spyware we found bundled by operators was similar to that used in the StrongPity APT attacks. Before switching to the StrongPity spyware, the operators of the Turkey injection used the FinFisher “lawful intercept” spyware, which FinFisher asserts is sold only to government entities.

Targeted users in Turkey and Syria who downloaded Windows applications from official vendor websites including Avast Antivirus, CCleaner, Opera, and 7-Zip were silently redirected to malicious versions by way of injected HTTP redirects. This redirection was possible because official websites for these programs, even though they might have supported HTTPS, directed users to non-HTTPS downloads by default. Additionally, targeted users in Turkey and Syria who downloaded a wide range of applications from CBS Interactive’s Download.com (a platform featured by CNET to download software) were instead redirected to versions containing spyware. Download.com does not appear to support HTTPS despite purporting to offer “secure download” links.
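The injection relied on a plain-HTTP hop somewhere in the download path: any unencrypted link in the redirect chain can be rewritten in transit by an on-path middlebox. A minimal sketch of spotting such a downgrade, assuming you have already collected the redirect chain as a list of URLs (the hostnames here are invented):

```python
from urllib.parse import urlparse

def http_hops(chain):
    """Return the URLs in a redirect chain served over plain HTTP.

    Any such hop can be rewritten in transit by an on-path
    middlebox, which is the weakness the report describes.
    """
    return [url for url in chain if urlparse(url).scheme == "http"]

# An HTTPS vendor page that hands off to a plain-HTTP download mirror:
chain = [
    "https://www.example-vendor.com/download",
    "http://mirror.example-cdn.net/setup.exe",  # injectable hop
]
print(http_hops(chain))
```

An empty result means the whole chain stayed on HTTPS; anything returned is a point where a PacketLogic-style device could have swapped the payload.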

Source: BAD TRAFFIC: Sandvine’s PacketLogic Devices Used to Deploy Government Spyware in Turkey and Redirect Egyptian Users to Affiliate Ads?

MoviePass Is Tracking Your Location

According to Media Play News, MoviePass CEO Mitch Lowe had some interesting things to say during his Hollywood presentation that took place late last week, entitled “New Oil: How Will MoviePass Monetize It?” Most notably, he openly admitted that his app tracks people’s location, even when they’re not actively using the app:

“We get an enormous amount of information… We know all about you. We watch how you drive from home to the movies. We watch where you go afterwards.”

Lowe also commented on how they knew subscribers’ addresses, their demographics, and how they can track subs via the app and the phone’s GPS. This drew nervous laughter from the crowd—many of whom were MoviePass subscribers themselves—but Lowe assured them that this collection of tracking data fits into their long-term revenue plan. He explained that their vision is to “build a night at the movies,” with MoviePass eventually directing subscribers to places to eat before movies, and places to grab drinks afterward (all for a cut from the vendors).

We knew MoviePass was collecting data on us from the start—that’s how they plan to make their money—so how is this any different? Well, subscribers are claiming MoviePass didn’t clearly disclose such persistent location tracking in its privacy policy. In regard to location tracking, the privacy policy mentions a “single request” in a section titled “Check ins” that’s used when you’re selecting a theater and movie to watch. However, the section also mentions real-time location data “as a means to develop, improve and personalize the service.” It’s a vague statement that could mean just about anything, but it’s understandable if users didn’t assume it meant watching them wherever they went, even when they’re not using the app.

Source: MoviePass Is Tracking Your Location

Retina X ‘Stalkerware’ Shuts Down Apps ‘Indefinitely’ After Getting Hacked Again

A company that sells spyware to regular consumers is “immediately and indefinitely halting” all of its services, just a couple of weeks after a new damaging hack.

Retina-X Studios, which sells several products marketed to parents and employers to keep tabs on their children and employees—but also used by jealous partners to spy on their significant others—announced that it’s shutting down all its spyware apps on Tuesday with a message at the top of its website.

“Regrettably Retina-X Studios, which offers cutting edge technology that helps parents and employers gather important information on devices they own, has been the victim of sophisticated and repeated illegal hackings,” read the message, which was titled “important note” in all caps.

The company sells subscriptions to apps that allow the operator to access practically anything on a target’s phone or computer, such as text messages, emails, photos, and location information. Retina-X is just one of a slew of companies that sell such services, marketing them to everyday users—as opposed to law enforcement or intelligence agencies. Some critics call these apps “stalkerware.”

Source: ‘Stalkerware’ Seller Shuts Down Apps ‘Indefinitely’ After Getting Hacked Again – Motherboard