Most of Facebook's 2.2 billion users had their data scraped by external actors – because it was easy to do

At this point, the social media company is just going for broke, telling the public it should just assume that “most” of the 2.2 billion Facebook users have probably had their public data scraped by “malicious actors.”

[…]

Meanwhile, reports have focused on a variety of issues that have popped up in just the last 24 hours. It’s hard to focus on what matters—and frankly, all of it seems to matter, so in turn, it ends up feeling like none of it does. This is the Trump PR playbook, and Facebook is running it perfectly. It’s the media version of too big to fail: call it too big to matter. Let us suggest that you zero in on just one detail from yesterday’s blog post about new restrictions on data access on the platform.

Mike Schroepfer, Facebook’s chief technology officer, explained that prior to yesterday, “people could enter another person’s phone number or email address into Facebook search to help find them.” This function would help you cut through all the John Smiths and locate the page of your John Smith. He gave the example of Bangladesh where the tool was used for 7 percent of all searches. Thing is, it was also useful to data-scrapers. Schroepfer wrote:

However, malicious actors have also abused these features to scrape public profile information by submitting phone numbers or email addresses they already have through search and account recovery. Given the scale and sophistication of the activity we’ve seen, we believe most people on Facebook could have had their public profile scraped in this way. So we have now disabled this feature. We’re also making changes to account recovery to reduce the risk of scraping as well.

The full meaning of that paragraph might not be readily apparent, but imagine you’re a hacker who bought a huge database of phone numbers on the dark web. Those numbers might have some use on their own, but they become way more useful for breaking into individual systems or committing fraud if you can attach more data to them. Facebook is saying that this kind of malicious actor would regularly take one of those numbers and use the platform to hunt down all publicly available data on its owner. This process, of course, could be automated and reap huge rewards with little effort. Suddenly, the hacker might have a user’s number, photos, marital status, email address, birthday, location, pet names, and more—an excellent toolkit to do some damage.

In yesterday’s Q&A, Zuckerberg explained that Facebook did have some basic protections to prevent the sort of automation that makes this particularly convenient, but “we did see a number of folks who cycled through many thousands of IPs, hundreds of thousands of IP addresses to evade the rate-limiting system, and that wasn’t a problem we really had a solution to.” The ultimate solution was to shut the features down. As far as the impact goes, “I think the thing people should assume, given this is a feature that’s been available for a while—and a lot of people use it in the right way—but we’ve also seen some scraping, I would assume if you had that setting turned on, that someone at some point has accessed your public information in this way,” Zuckerberg said. Did you have that setting turned on? Ever? Given that Facebook says “most” accounts were affected, it’s safe to assume you did.
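To see why cycling through hundreds of thousands of IP addresses defeats that kind of defence, consider a minimal per-IP rate limiter. This is a hypothetical sketch, not Facebook's actual implementation; it only shows that a limit keyed on the source address says nothing about the total volume an attacker can reach by spreading lookups across many addresses.

```python
import time
from collections import defaultdict

class PerIPRateLimiter:
    """Allow at most `limit` requests per `window` seconds from each source IP."""

    def __init__(self, limit=100, window=3600):
        self.limit = limit
        self.window = window
        self.seen = defaultdict(list)  # ip -> timestamps of recent requests

    def allow(self, ip):
        now = time.time()
        recent = [t for t in self.seen[ip] if now - t < self.window]
        if len(recent) >= self.limit:
            self.seen[ip] = recent
            return False  # this one address has hit its quota
        recent.append(now)
        self.seen[ip] = recent
        return True

# A scraper behind a single IP is throttled after `limit` lookups per hour,
# but the same scraper rotating through 100,000 proxy addresses can make
# limit * 100,000 lookups per hour without ever tripping this check.
```

Any counter keyed only on the requesting address is blind to a proxy pool or botnet, which is presumably why Facebook concluded that switching the feature off was the only workable fix.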

[…]

Mark Zuckerberg has known from the beginning that his creation was bad for privacy and security. Activists, the press, and tech experts have been saying it for years, but we the public either didn’t understand, didn’t care, or chose to ignore the warnings. That’s not totally the public’s fault. We’re only now seeing a big red example of what it means for one company, controlled by one man, to have control over seemingly limitless personal information. Even the NSA can’t keep its secret hacking tools on lockdown, so why would Facebook be able to protect your information? In many respects, it was just giving it away.

Source: Facebook Just Made a Shocking Admission, and We’re All Too Exhausted to Notice

Cambridge Analytica whistleblower: Facebook data could have come from more than 87 million users

Cambridge Analytica whistleblower Christopher Wylie says the data the firm gathered from Facebook could have come from more than 87 million users and could be stored in Russia.
The number of Facebook users whose personal information was accessed by Cambridge Analytica “could be higher, absolutely,” than the 87 million users acknowledged by Facebook, Wylie told NBC’s Chuck Todd during a “Meet the Press” segment Sunday.
Wylie added that his lawyer has been contacted by US authorities, including congressional investigators and the Department of Justice, and says he plans to cooperate with them.
“We’re just setting out dates that I can actually go and sit down and meet with the authorities,” he said.
The former Cambridge Analytica employee said that “a lot of people” had access to the data and referenced a “genuine risk” that the harvested data could be stored in Russia.
“It could be stored in various parts of the world, including Russia, given the fact that the professor who was managing the data harvesting process was going back and forth between the UK and to Russia,” Wylie said.
Aleksander Kogan, a Russian data scientist who gave lectures at St. Petersburg State University, gathered Facebook data from millions of Americans. He then sold it to Cambridge Analytica, which worked with President Donald Trump’s 2016 presidential campaign.
When asked if he thought Facebook was even able to calculate the number of users affected, Wylie stressed that data can be copied once it leaves a database.
“I know that Facebook is now starting to take steps to rectify that and start to find out who had access to it and where it could have gone, but ultimately it’s not watertight to say that, you know, we can ensure that all the data is gone forever,” he said.

Source: Cambridge Analytica whistleblower: Facebook data could have come from more than 87 million users – CNNPolitics

Yes, Cops Are Now Opening iPhones With Dead People’s Fingerprints

Separate sources close to local and federal police investigations in New York and Ohio, who asked to remain anonymous as they weren’t authorized to speak on record, said it was now relatively common for fingerprints of the deceased to be depressed on the scanner of Apple iPhones, devices which have been wrapped up in increasingly powerful encryption over recent years. For instance, the technique has been used in overdose cases, said one source. In such instances, the victim’s phone could contain information leading directly to the dealer.

And it’s entirely legal for police to use the technique, even if there might be some ethical quandaries to consider. Marina Medvin, owner of Medvin Law, said that once a person is deceased, they no longer have a privacy interest in their dead body. That means they no longer have standing in court to assert privacy rights.

Relatives or other interested parties have little chance of stopping cops using fingerprints or other body parts to access smartphones too. “Once you share information with someone, you lose control over how that information is protected and used. You cannot assert your privacy rights when your friend’s phone is searched and the police see the messages that you sent to your friend. Same goes for sharing information with the deceased – after you released information to the deceased, you have lost control of privacy,” Medvin added.

Police know it too. “We do not need a search warrant to get into a victim’s phone, unless it’s shared owned,” said Ohio police homicide detective Robert Cutshall, who worked on the Artan case. In previous cases detailed by Forbes police have required warrants to use the fingerprints of the living on their iPhones.

[…]

Police are now looking at how they might use Apple’s Face ID facial recognition technology, introduced on the iPhone X. And it could provide an easier path into iPhones than Touch ID.

Marc Rogers, researcher and head of information security at Cloudflare, told Forbes he’d been poking at Face ID in recent months and had discovered it didn’t appear to require the visage of a living person to work. Whilst Face ID is supposed to use your attention in combination with natural eye movement, so fake or non-moving eyes can’t unlock devices, Rogers found that the tech can be fooled simply using photos of open eyes. That was something also verified by Vietnamese researchers when they claimed to have bypassed Face ID with specially-created masks in November 2017, said Rogers.

Secondly, Rogers discovered this was possible from many angles and the phone only seemed to need to see one open eye to unlock. “In that sense it’s easier to unlock than Touch ID – all you need to do is show your target his or her phone and the moment they glance it unlocks,” he added. Apple declined to comment for this article.

Source: Yes, Cops Are Now Opening iPhones With Dead People’s Fingerprints

Great, Now Delta Air Lines Is Normalizing Casual Fingerprinting

Delta Air Lines announced Monday that it’s rolling out biometric entry at its line of airport lounges. With the press of two fingers, Delta members will be able to enter any of Delta’s 50 exclusive lounges for drinks, comfortably unaware of the encroaching dystopian biometric surveillance structure closing around travel.

Thanks to a partnership with Clear, a biometrics company offering a “frictionless travel experience,” privileged jet-setters can use their fingerprints to enter Delta Sky Clubs.

[…]

But this veneer of comfort masks the fact that biometrics are a form of surveillance hotly contested by privacy and civil liberties experts. For example, face recognition in airports is consistently less accurate on women and people of color, yet it is asymmetrically applied against them as they travel. Clear uses finger and iris data, but Delta was the nation’s first airline to use face recognition to verify passports, again via automated self-service kiosks.

At a time when people should be more wary of biometrics, airports are carefully rebranding surveillance as a luxury item. But, as people become more comfortable with being poked, prodded, fingerprinted, and scanned as they travel, privacy is becoming a fast-evaporating luxury.

Source: Great, Now an Airline Is Normalizing Casual Fingerprinting

Please remember that you can’t change your biometrics (easily), so be wary of leaving them in some database that is secured who knows how and shared with who knows whom.

Facebook Acknowledges It Has Been Keeping Records of Android Users’ Calls, Texts

Last week, a user found that Facebook had a record of the date, time, duration, and recipient of calls he had made over the past few years. A couple of days later, Ars Technica published an account of several others — all Android users — who found similar records. Now, Slate Magazine is reporting that Facebook has acknowledged that it was collecting and storing these logs, “attributing it to an opt-in feature for those using Messenger or Facebook Lite on an Android device.” The company did, however, deny that it was collecting call or text history without a user’s permission. From the report: “This helps you find and stay connected with the people you care about, and provides you with a better experience across Facebook,” the company said in a post Sunday. “People have to expressly agree to use this feature. We introduced this feature for Android users a couple of years ago. Contact importers are fairly common among social apps and services as a way to more easily find the people you want to connect with.”

Ars Technica disputed the claim that everyone knowingly opted in. Instead, Ars Technica’s Sean Gallagher reported that opt-in was the default setting and that users were not separately alerted to it. Nor did Facebook ever say publicly that it was collecting that information. “Facebook says that the company keeps the data secure and does not sell it to third parties,” Gallagher wrote. “But the post doesn’t address why it would be necessary to retain not just the numbers of contacts from phone calls and SMS messages, but the date, time, and length of those calls for years.”

Source: Facebook Acknowledges It Has Been Keeping Records of Android Users’ Calls, Texts – Slashdot

New Slack Tool Lets Your Boss Potentially Access Far More of Your Data Than Before, Without Notification

According to Slack’s new guidelines, however, Compliance Exports will be replaced by “a self-service export tool” on April 20th. Previously, an employer had to request a data dump of all communications to get access to private channels and direct messages. This new tool should streamline things so they can archive all your shit-talk and time-wasting with colleagues on a regular basis. The tool not only makes it easy for an admin to access everything with a few clicks, it also enables automatic exports to be scheduled on a daily, weekly, or monthly basis. An employer still has to go through a request process to get the tool, but Slack declined to elaborate on what’s involved in that process.

What’s particularly concerning is that Compliance Exports were designed so that they notified users when they were enabled, and future exports only covered data generated after that notification. A spokesperson for Slack confirmed to Gizmodo that this won’t be the case going forward. The new tool will be able to export all of the data that your Slack settings previously retained. Before, if you were up on Slack policy, you could feel pretty comfortable that your private conversations were private unless you got that Compliance Exports notification; after the notification, you’d want to make sure you didn’t discuss potentially sensitive topics in Slack. Now, anyone who was under the impression that they were relatively safe might have some cause to worry.

Source: New Slack Tool Lets Your Boss Potentially Access Far More of Your Data Than Before

How to Find Out Everything Facebook Knows About You

If you can’t bring yourself to delete your Facebook account entirely, you’re probably thinking about sharing a lot less private information on the site. The company actually makes it pretty easy to find out how much data it’s collected from you, but the results might be a little scary.

When software developer Dylan McKay went and downloaded all of his data from Facebook, he was shocked to find that the social network had timestamps on every phone call and SMS message he had made in the past few years, even though he says he doesn’t use the app for calls or texts. It even created a log of every call between McKay and his partner’s mom.

To get your own data dump, head to your Facebook Settings and click on “Download a copy of your data” at the bottom of the page. Facebook needs a little time to compile all that information, but it should be ready in about 10 minutes based on my own experience. You’ll receive a notification sending you to a page where you can download the data—after re-entering your account password, of course.

The (likely huge) file downloads onto your computer as a ZIP. Once you extract it, open the new folder and click on “index.html” to view the data in your browser.
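If you prefer to do that last step from a script, a few lines of standard-library Python will unpack the archive and open the index page in your default browser. The archive name below is a placeholder; your download will be named differently.

```python
import webbrowser
import zipfile
from pathlib import Path

# Placeholder name: your actual download will be called something else.
archive = Path("facebook-yourusername.zip")
target = Path("facebook_export")

# Unpack the archive and open the bundled index.html in the default browser.
with zipfile.ZipFile(archive) as zf:
    zf.extractall(target)

webbrowser.open((target / "index.html").resolve().as_uri())
```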

Be sure to check out the Contact Info tab for a list of everyone you’ve ever known and their phone number (creepy, Facebook). You can also scroll down to the bottom of the Friends tab to see what phase of your life Facebook thinks you’re in. I got “Starting Adult Life.”

Source: How to Find Out Everything Facebook Knows About You

US cops go all Minority Report: Google told to cough up info on anyone near a crime scene

Efforts to track down criminals in the US state of North Carolina have laid bare a dangerous gap in the law over the use of location data.

Raleigh police went to court at least three times last year and got a warrant requiring Google to share the details of any users that were close to crime scenes during specific times and dates.

The first crime was the murder of a cab driver in November 2016, the second an arson attack in March 2017 and the third, sexual battery, in August 2017 – suggesting that the police force is using the approach to discover potentially incriminating evidence for increasingly less serious crimes.

In each case, the cops used GPS coordinates to draw a rough rectangle around the areas of interest – covering nearly 20 acres in the murder case – and asked for the details of any users who entered those areas during time periods of between 60 and 90 minutes, e.g. between 18:00 and 19:30.
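Conceptually, such a “reverse location” warrant boils down to a simple filter over a location-history table: keep every record whose coordinates fall inside the rectangle and whose timestamp falls inside the window. The sketch below is purely illustrative; the field names, coordinates and records are made up and have nothing to do with Google’s actual schema.

```python
from datetime import datetime

def in_geofence(record, min_lat, max_lat, min_lon, max_lon, start, end):
    """True if a stored location point falls inside the warrant's rectangle and time window."""
    return (min_lat <= record["lat"] <= max_lat
            and min_lon <= record["lon"] <= max_lon
            and start <= record["ts"] <= end)

# Hypothetical location-history rows (made-up users and coordinates).
location_history = [
    {"user": "u1", "lat": 35.7797, "lon": -78.6392, "ts": datetime(2016, 11, 1, 18, 40)},
    {"user": "u2", "lat": 35.7900, "lon": -78.6500, "ts": datetime(2016, 11, 1, 18, 45)},
]

# Everyone whose stored points fall inside a roughly 20-acre rectangle
# between 18:00 and 19:30 on the night in question.
hits = [r for r in location_history
        if in_geofence(r, 35.7790, 35.7817, -78.6400, -78.6367,
                       datetime(2016, 11, 1, 18, 0), datetime(2016, 11, 1, 19, 30))]
print(hits)  # only u1 is inside both the fence and the window
```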

The warrants were granted by a judge, complete with an order to prevent disclosure, so Google was legally barred from informing affected users that their details had been shared with law enforcement. Google complied with the warrants.

It is worth noting that the data haul is not limited to users of Google hardware, i.e. phones running Android, but extends to any phone that ran Google apps – which encompasses everything from its driving app service to its calendar, browser, predictive keyboard and so on.

Source: US cops go all Minority Report: Google told to cough up info on anyone near a crime scene • The Register

This kind of over-broad investigation seems like a real breach of privacy to me. That Google collects this information in a fashion that allows it to be so easily supplied is a real shocker.

Telegram Loses Bid to Block Russia From Encryption Keys

Telegram, the encrypted messaging app that’s prized by those seeking privacy, lost a bid before Russia’s Supreme Court to block security services from getting access to users’ data, giving President Vladimir Putin a victory in his effort to keep tabs on electronic communications.

Supreme Court Judge Alla Nazarova on Tuesday rejected Telegram’s appeal against the Federal Security Service, the successor to the KGB spy agency which last year asked the company to share its encryption keys. Telegram declined to comply and was hit with a fine of $14,000. Communications regulator Roskomnadzor said Telegram now has 15 days to provide the encryption keys.

Telegram, which is in the middle of an initial coin offering of as much as $2.55 billion, plans to appeal the ruling in a process that may last into the summer, according to the company’s lawyer, Ramil Akhmetgaliev. Any decision to block the service would require a separate court ruling, the lawyer said.

“Threats to block Telegram unless it gives up private data of its users won’t bear fruit. Telegram will stand for freedom and privacy,” Pavel Durov, the company’s founder, said on his Twitter page.

Putin signed laws in 2016 on fighting terrorism, which included a requirement for messaging services to provide the authorities with means to decrypt user correspondence. Telegram challenged an auxiliary order by the Federal Security Service, claiming that the procedure doesn’t involve a court order and breaches constitutional rights for privacy, according to documents.

The security agency, known as the FSB, argued in court that obtaining the encryption keys doesn’t violate users’ privacy because the keys by themselves aren’t considered information of restricted access. Collecting data on particular suspects using the encryption would still require a court order, the agency said.

“The FSB’s argument that encryption keys can’t be considered private information defended by the Constitution is cunning,” Akhmetgaliev, Telegram’s lawyer, told reporters after the hearing. “It’s like saying, ‘I’ve got a password from your email, but I don’t control your email, I just have the possibility to control.’”

Source: Telegram Loses Bid to Block Russia From Encryption Keys – Bloomberg

Booking Flights: Our Data Flies with Us – the huge dataset described

Every time you book a flight, you generate personal data that is ripe for harvesting: information like the details on an ID card, your address, your passport information and your travel itinerary, as well as your frequent-flyer number, method of payment and travel preferences (dietary restrictions, mobility restrictions, etc.). All that data becomes part of a registry, in the form of a Passenger Name Record (PNR) – a generic name given to records created by aircraft operators or their authorised agents for each journey booked by or on behalf of any passenger.

When we book a flight or travel itinerary, the travel agent or booking website creates our PNR. Most airlines or travel agents choose to host their PNR databases on a specialised computer reservation system (CRS) or a Global Distribution System (GDS), which coordinates the information from all the travel agents and airlines worldwide, to avoid things like duplicated flight reservations. This means that CRSs/GDSs centralise and store vast amounts of data about travellers. Though we are focusing on air travel here, it is important to note that the PNR is not only flight-related. It can also include other services such as car rentals, hotel reservations and train trips.
[…]
A PNR isn’t necessarily created all at once. If we use the same agency or airline to book our flight and other services, like a hotel, the agency will use the same PNR. Therefore, information from many different sources will be gradually added to our PNR through different channels over time. That means the dataset is much larger than just the flight info: a PNR can contain data as important as our exact whereabouts at specific points in time.
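To make that concrete, here is an entirely fictional, simplified sketch of the kinds of fields a PNR accumulates as bookings and services are added over time; real PNRs use industry-specific formats, but the categories of data are similar.

```python
# Entirely fictional, simplified PNR-style record. Real PNRs use
# industry-specific formats, but they accumulate the same categories of data.
pnr = {
    "record_locator": "X4K9QZ",              # six-character booking reference
    "passenger": {
        "name": "DOE/JANE",
        "passport": "P1234567",
        "frequent_flyer": "FF-99887766",
        "contact": {"phone": "+31 6 00000000", "email": "jane@example.com"},
    },
    "itinerary": [
        {"type": "flight", "from": "AMS", "to": "JFK", "date": "2018-05-02"},
        {"type": "hotel", "city": "New York", "check_in": "2018-05-02", "nights": 3},
        {"type": "car_rental", "pickup": "JFK", "date": "2018-05-03"},
    ],
    "payment": {"card": "VISA ****1234", "billing_code": "DEPT-RND-017"},
    "remarks": [
        "KOSHER MEAL REQUESTED",             # free-text remarks can reveal religion,
        "WHEELCHAIR ASSISTANCE ON ARRIVAL",  # health status and more
    ],
    "history": ["created by travel agency", "hotel segment added by call centre"],
}
```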

What are the implications of all this for our privacy? The journalist and travel advocate Edward Hasbrouck has been researching and denouncing the PNR’s effects on privacy in the US for decades. In Europe, organisations like European Digital Rights (EDRi) have also criticised PNRs extensively through their advocacy and awareness campaigns. According to Hasbrouck:

PNR data reveals our associations, our activities, and our tastes and preferences. It shows where we went, when, with whom, for how long, and at whose expense. Through departmental and project billing codes, business travel PNR’s reveal confidential internal corporate and other organisation structures and lines of authority and show which people were involved in work together, even if they travelled separately. PNRs typically contain credit card numbers, telephone numbers, email addresses, and IP addresses, allowing them to be easily merged with financial and communications metadata

Your individual PNR also contains a section for free-text “remarks” that can be entered by the airline, the travel agency, a tour operator, a third-party call centre or the staff of the ground-handling contractor. Such texts might include sensitive and private information, like special meal requests and particular medical needs. This may seem innocuous, but information like special meal requests can indicate our religious or political affiliations, especially when it is cross-referenced with other details included in our PNR. Regardless of whether the profile assigned to us is accurate, the repercussions and implications of that profiling are concerning – especially in the absence of public awareness about them.
[…]
In the United States, PNRs are stored in the Automated Targeting System-Passenger (ATS-P), where they become part of an active database for up to five years (after the first six months, they are de-personalised and masked). After five years, the data is transferred to a dormant database for up to ten more years, where it remains available for counter-terrorism purposes for the full duration of its 15-year retention.

According to Edward Hasbrouck, PNRs cannot be deleted: once created, they are archived and retained in the Computer Reservation Systems and/or Global Distribution Systems (CRS/GDS), and can still be viewed even if we never bought a ticket and cancelled our reservations:

“CRS’s retain flown, archived, purged, and deleted PNR’s indefinitely. It doesn’t really matter whether governments store copies of entire PNR’s or only portions of them, whether they filter out certain especially “sensitive” data from their copies of PNR’s, or for how long they retain them. As long as a government agency has the record locator or the airline name, flight number, and date, they can retrieve the complete PNR from the CRS. That’s especially true for the U.S. government, since even PNR’s created by airlines, travel agencies, tour operators, or airline offices in other countries, for flights within and between other countries that don’t touch the USA, are routinely stored in CRS’s based in the USA.
[…]
Under EU regulations, governments can retain PNR data for a maximum of five years, to allow law-enforcement officials to access it if necessary. The regulations state that after six months, the data is masked out or anonymised. But according to research by the EDRi, records are not necessarily anonymised or encrypted, and, in fact, the data can be easily re-personalised.
[…]
PNR is a relatively old system, pre-dating the internet as we know it today. Airlines have built their own systems on top of this, allowing passengers to make adjustments to their reservations using a six-character booking confirmation number or PNR locator. But although the PNR system was originally designed to facilitate the sharing of information rather than the protection of it, in the current digital environment and with the cyber-threats facing our data online, this system needs to be updated to keep up with the existing risks. PNRs are information-rich files that are not only of interest to governments; they are also valuable to third parties – whether corporations or adversaries. Potential uses of the data could include anything from marketing research to hacks aimed at obtaining our personal information for financial scams or even doxxing or inflicting harm on activists.

According to Hasbrouck, the controls over who can access PNR data are insufficient, and there are no limitations on how CRS/GDS users (whether governments or travel agents) can access it. Furthermore, there are no records of when a CRS/GDS user has retrieved a PNR, from where they retrieved the record, or for what purpose. This means that any travel agent or any government can retrieve our PNR and access all the data it contains, no questions asked and without leaving a trace.
[…]
Photos of our tickets or luggage tags pose particular risks because of the sensitive information printed on them. In addition to our name and flight information, they also include our PNR locator, though sometimes only inside the barcode. Even if we cannot “see” information in the barcodes or sequences of letters and numbers on our tickets, other people may be able to derive meaning from them.

Source: Booking Flights: Our Data Flies with Us – Our Data Our Selves

Palantir has secretly been using New Orleans to test its predictive policing technology, was given huge access to lots of private data without oversight due to loophole

The program began in 2012 as a partnership between New Orleans Police and Palantir Technologies, a data-mining firm founded with seed money from the CIA’s venture capital firm. According to interviews and documents obtained by The Verge, the initiative was essentially a predictive policing program, similar to the “heat list” in Chicago that purports to predict which people are likely drivers or victims of violence.

The partnership has been extended three times, with the third extension scheduled to expire on February 21st, 2018. The city of New Orleans and Palantir have not responded to questions about the program’s current status.

Predictive policing technology has proven highly controversial wherever it is implemented, but in New Orleans, the program escaped public notice, partly because Palantir established it as a philanthropic relationship with the city through Mayor Mitch Landrieu’s signature NOLA For Life program. Thanks to its philanthropic status, as well as New Orleans’ “strong mayor” model of government, the agreement never passed through a public procurement process.

In fact, key city council members and attorneys contacted by The Verge had no idea that the city had any sort of relationship with Palantir, nor were they aware that Palantir used its program in New Orleans to market its services to another law enforcement agency for a multimillion-dollar contract.

Even within the law enforcement community, there are concerns about the potential civil liberties implications of the sort of individualized prediction Palantir developed in New Orleans, and whether it’s appropriate for the American criminal justice system.

“They’re creating a target list, but we’re not going after Al Qaeda in Syria,” said a former law enforcement official who has observed Palantir’s work first-hand as well as the company’s sales pitches for predictive policing. The former official spoke on condition of anonymity to freely discuss their concerns with data mining and predictive policing. “Palantir is a great example of an absolutely ridiculous amount of money spent on a tech tool that may have some application,” the former official said. “However, it’s not the right tool for local and state law enforcement.”

Six years ago, one of the world’s most secretive and powerful tech firms developed a contentious intelligence product in a city that has served as a neoliberal laboratory for everything from charter schools to radical housing reform since Hurricane Katrina. Because the program was never public, important questions about its basic functioning, risk for bias, and overall propriety were never answered.
[…]
Palantir’s prediction model in New Orleans used an intelligence technique called social network analysis (or SNA) to draw connections between people, places, cars, weapons, addresses, social media posts, and other indicia in previously siloed databases. Think of the analysis as a practical version of a Mark Lombardi painting that highlights connections between people, places, and events. After entering a query term — like a partial license plate, nickname, address, phone number, or social media handle or post — NOPD’s analyst would review the information scraped by Palantir’s software and determine which individuals are at the greatest risk of either committing violence or becoming a victim, based on their connection to known victims or assailants.
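A stripped-down version of that kind of social network analysis fits in a few lines with an off-the-shelf graph library. The sketch below uses networkx and made-up names; it only illustrates the general idea of ranking people by graph distance to known victims, not Palantir’s actual model.

```python
import networkx as nx

# Nodes are people; edges are documented links (field interview cards,
# shared addresses, co-mentions in social media posts, jailhouse calls).
g = nx.Graph()
g.add_edges_from([
    ("victim_A", "person_1"),   # stopped together on a field interview card
    ("person_1", "person_2"),   # share an address
    ("victim_A", "person_3"),   # tagged together in a social media post
    ("person_2", "person_4"),
])

known_victims = {"victim_A"}

def risk_score(person):
    """Crude score: the closer someone sits to a known victim, the higher the score."""
    distances = []
    for victim in known_victims:
        try:
            distances.append(nx.shortest_path_length(g, person, victim))
        except nx.NetworkXNoPath:
            continue
    return 1.0 / min(distances) if distances else 0.0

ranked = sorted((p for p in g.nodes if p not in known_victims),
                key=risk_score, reverse=True)
print(ranked)  # person_1 and person_3 rank highest: one hop from the victim
```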

The data on individuals came from information scraped from social media as well as NOPD criminal databases for ballistics, gangs, probation and parole information, jailhouse phone calls, calls for service, the central case management system (i.e., every case NOPD had on record), and the department’s repository of field interview cards. The latter database represents every documented encounter NOPD has with citizens, even those that don’t result in arrests. In 2010, The Times-Picayune revealed that Chief Serpas had mandated that the collection of field interview cards be used as a measure of officer and district performance, resulting in over 70,000 field interview cards filled out in 2011 and 2012. The practice resembled NYPD’s “stop and frisk” program and was instituted with the express purpose of gathering as much intelligence on New Orleanians as possible, regardless of whether or not they committed a crime.
[…]
NOPD then used the list of potential victims and perpetrators of violence generated by Palantir to target individuals for the city’s CeaseFire program. CeaseFire is a form of the decades-old carrot-and-stick strategy developed by David Kennedy, a professor at John Jay College in New York. In the program, law enforcement informs potential offenders with criminal records that they know of their past actions and will prosecute them to the fullest extent if they re-offend. If the subjects choose to cooperate, they are “called in” to a required meeting as part of their conditions of probation and parole and are offered job training, education, potential job placement, and health services. In New Orleans, the CeaseFire program is run under the broader umbrella of NOLA For Life, which is Mayor Landrieu’s pet project that he has funded through millions of dollars from private donors.

According to Serpas, the person who initially ran New Orleans’ social network analysis from 2013 through 2015 was Jeff Asher, a former intelligence agent who joined NOPD from the CIA. If someone had been shot, Serpas explained, Asher would use Palantir’s software to find people associated with them through field interviews or social media data. “This data analysis brings up names and connections between people on FIs [field interview cards], on traffic stops, on victims of reports, reporting victims of crimes together, whatever the case may be. That kind of information is valuable for anybody who’s doing an investigation,” Serpas said.
[…]
Of the 308 people who participated in call-ins from October 2012 through March 2017, seven completed vocational training, nine completed “paid work experience,” none finished a high school diploma or GED course, and 32 were employed at one time or another through referrals. Fifty participants were detained following their call-in, and two have since died.

By contrast, law enforcement vigorously pursued its end of the program. From November 2012, when the new Multi-Agency Gang Unit was founded, through March 2014, racketeering indictments escalated: 83 alleged gang members in eight gangs were indicted in the 16-month period, according to an internal Palantir presentation.
[…]
Call-ins declined precipitously after the first few years. According to city records, eight group call-ins took place from 2012 to 2014, but only three took place in the following three years. Robert Goodman, a New Orleans native who became a community activist after completing a prison sentence for murder, worked as a “responder” for the city’s CeaseFire program until August 2016, discouraging people from engaging in retaliatory violence. Over time, Goodman noticed more of an emphasis on the “stick” component of the program and more control over the non-punitive aspects of the program by city hall that he believes undermined the intervention work. “It’s supposed to be ran by people like us instead of the city trying to dictate to us how this thing should look,” he said. “As long as they’re not putting resources into the hoods, nothing will change. You’re just putting on Band-Aids.”

After the first two years of Palantir’s involvement with NOPD, the city saw a marked drop in murders and gun violence, but it was short-lived. Even former NOPD Chief Serpas believes that the preventative effect of calling in dozens of at-risk individuals — and indicting dozens of them — began to diminish.

“When we ended up with nearly nine or 10 indictments with close to 100 defendants for federal or state RICO violations of killing people in the community, I think we got a lot of people’s attention in that criminal environment,” Serpas said, referring to the racketeering indictments. “But over time, it must’ve wore off because before I left in August of ‘14, we could see that things were starting to slide.”

Nick Corsaro, the University of Cincinnati professor who helped build NOPD’s gang database, also worked on an evaluation of New Orleans’ CeaseFire strategy. He found that New Orleans’ overall decline in homicides coincided with the city’s implementation of CeaseFire program, but the Central City neighborhoods targeted by the program “did not have statistically significant declines that corresponded with November 2012 onset date.”
[…]
The secrecy surrounding the NOPD program also raises questions about whether defendants have been given evidence they have a right to view. Sarah St. Vincent, a researcher at Human Rights Watch, recently published an 18-month investigation into parallel construction, or the practice of law enforcement concealing evidence gathered from surveillance activity. In an interview, St. Vincent said that law enforcement withholding intelligence gathering or analysis like New Orleans’ predictive policing work effectively kneecaps the checks and balances of the criminal justice system. At the Cato Institute’s 2017 Surveillance Conference in December, St. Vincent raised concerns about why information garnered from predictive policing systems was not appearing in criminal indictments or complaints.

“It’s the role of the judge to evaluate whether what the government did in this case was legal,” St. Vincent said of the New Orleans program. “I do think defense attorneys would be right to be concerned about the use of programs that might be inaccurate, discriminatory, or drawing from unconstitutional data.”

If Palantir’s partnership with New Orleans had been public, the issues of legality, transparency, and propriety could have been hashed out in a public forum during an informed discussion with legislators, law enforcement, the company, and the public. For six years, that never happened.

Source: Palantir has secretly been using New Orleans to test its predictive policing technology – The Verge

One of the big problems here is that there is little public knowledge of the program and hardly any oversight over it. Nobody knows whether the system is being implemented fairly or cost-effectively (the costs are huge!) or even whether it works at all. It seemed to work for a while, but the effects appear to have dropped off after two years of operation, mainly because the city leaned ever harder on the “stick” while getting rid of the “carrot.” The amount of private data handed to Palantir without any discussion or consent is worrying, to say the least.

119,000 Passports and Photo IDs of FedEx Customers Found on Unsecured Amazon Server

Thousands of FedEx customers were exposed after the company left scanned passports, drivers licenses, and other documentation on a publicly accessible Amazon S3 server.

The scanned IDs originated from countries all over the world, including the United States, Mexico, Canada, Australia, Saudi Arabia, Japan, China, and several European countries. The IDs were attached to forms that included several pieces of personal information, including names, home addresses, phone numbers, and zip codes.

The server, discovered by researchers at the Kromtech Security Center, was secured as of Tuesday.

According to Kromtech, the server belonged to Bongo International LLC, a company that helped customers perform shipping calculations and currency conversions, among other services. Bongo was purchased by FedEx in 2014 and renamed FedEx Cross-Border International a little over a year later. The service was discontinued in April 2017.

Source: 119,000 Passports and Photo IDs of FedEx Customers Found on Unsecured Amazon Server

2017: Dutch Military Intelligence (MIVD) placed 348 taps, Internal Intelligence (AIVD) 3,205. No idea how many the police placed, but wow, that’s a lot!

Last year the MIVD tapped a total of 348 times. The AIVD placed 3,205 taps that year. Today both services published the tap statistics for the period 2002 through 2017 on their websites.

Source: MIVD tapte vorig jaar 348 keer | Nieuwsbericht | Defensie.nl


And of course we have no idea how many of these taps led to arrests or action.

How to Disable Facebook’s Facial Recognition Feature

To turn off facial recognition on your computer, click on the down arrow at the top of any Facebook page and then select Settings. From there, click “Face Recognition” from the left column, and then click “Do you want Facebook to be able to recognize you in photos and videos?” Select Yes or No based on your personal preferences.

On mobile, click on the three dots labeled “More” below your profile pic, then select “View Privacy Shortcuts,” then “More Settings,” followed by “Facial Recognition.” Click on the “Do you want Facebook to be able to recognize you in photos and videos?” button and select “No” to disable the feature.
[…]
The setting isn’t available in all countries, and will only appear as an option in your profile if you’re at least 18 years old and have the feature available to you.

Source: How to Disable Facebook’s Facial Recognition Feature

The 600+ Companies PayPal Shares Your Data With – Schneier on Security

One of the effects of GDPR — the new EU General Data Protection Regulation — is that we’re all going to be learning a lot more about who collects our data and what they do with it. Consider PayPal, which just released a list of over 600 companies it shares customer data with. Here’s a good visualization of that data.

Is 600 companies unusual? Is it more than average? Less? We’ll soon know.

Source: The 600+ Companies PayPal Shares Your Data With – Schneier on Security

Madison Square Garden Has Used Face-Scanning Technology on Customers

Madison Square Garden has quietly used facial-recognition technology to bolster security and identify those entering the building, according to multiple people familiar with the arena’s security procedures.

The technology uses cameras to capture images of people, and then an algorithm compares the images to a database of photographs to help identify the person and, when used for security purposes, to determine if the person is considered a problem. The technology, which is sometimes used for marketing and promotions, has raised concerns over personal privacy and the security of any data that is stored by the system.

Source: Madison Square Garden Has Used Face-Scanning Technology on Customers

What is your personal info worth to criminals? There’s a dark web market price index for that

Your entire online identity could be worth little more than £800, according to brand new research into the illicit sale of stolen personal info on the dark web (or just $1,200 if you are in the United States, according to the US edition of the index). While it may be no surprise to learn that credit card details are the most traded, did you know that fraudsters are hacking Uber, Airbnb, Spotify and Netflix accounts and selling them for little more than £5 each?

Everything has a price on the dark web it seems. Paypal accounts with a healthy balance attract the highest prices (£280 on average). At the other end of the scale though, hacked Deliveroo or Tesco accounts sell for less than £5. Cybercriminals can easily spend more on their lunchtime sandwich than buying up stolen credentials for online shopping accounts like Argos (£3) and ASOS (£1.50).

The average person has dozens of accounts that form their online identity, all of which can be hacked and sold. Our team of security experts reviewed tens of thousands of listings on three of the most popular dark web markets, Dream, Point and Wall Street Market. These encrypted websites, which can only be reached using the Tor browser, allow criminals to anonymously sell stolen personal info, along with all sorts of other contraband, such as illicit drugs and weapons.

We focused on listings featuring stolen ID, hacked accounts and personal info relevant to the UK to create the Dark Web Market Price Index. We calculated average sale prices for each item and were shocked to see that £820 is all it would cost to buy up someone’s entire identity if they were to have all the listed items.
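The arithmetic behind such an index is straightforward: group the listings by account type, average the asking prices, then sum the averages to cost out a whole identity. A toy version, with made-up listings standing in for the tens of thousands the researchers actually reviewed:

```python
from collections import defaultdict

# Made-up listings (account type, asking price in GBP) standing in for the
# tens of thousands of dark web listings the researchers actually reviewed.
listings = [
    ("paypal", 300.0), ("paypal", 260.0),
    ("uber", 5.0), ("netflix", 6.0), ("asos", 1.5),
]

by_item = defaultdict(list)
for item, price in listings:
    by_item[item].append(price)

price_index = {item: sum(prices) / len(prices) for item, prices in by_item.items()}
print(price_index)                # average asking price per account type
print(sum(price_index.values()))  # naive cost of one of each listed account
```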

Source: Dark Web Market Price Index (Feb 2018 – UK Edition) | Top10VPN.com

MoviePass Is Tracking Your Location

According to Media Play News, MoviePass CEO Mitch Lowe had some interesting things to say during his Hollywood presentation that took place late last week, entitled “New Oil: How Will MoviePass Monetize It?” Most notably, he openly admitted that his app tracks people’s location, even when they’re not actively using the app:

“We get an enormous amount of information… We know all about. We watch how you drive from home to the movies. We watch where you go afterwards.”

Lowe also commented on how they knew subscribers’ addresses, their demographics, and how they can track subs via the app and the phone’s GPS. This drew nervous laughter from the crowd—many of whom were MoviePass subscribers themselves—but Lowe assured them that this collecting of tracking data fits into their long-term revenue plan. He explained that their vision is to “build a night at the movies,” with MoviePass eventually directing subscribers to places to eat before movies, and places to grab drinks afterward (all for a cut from the vendors).

We knew MoviePass was collecting data on us from the start—that’s how they plan to make their money—so how is this any different? Well, subscribers are claiming the company didn’t clearly disclose such persistent location tracking in its privacy policy. In regard to location tracking, the privacy policy mentions a “single request” in a section titled “Check ins” that’s used when you’re selecting a theater and movie to watch. However, the section also mentions real-time location data “as a means to develop, improve and personalize the service.” It’s a vague statement that could mean just about anything, but it’s understandable if users didn’t assume it meant watching them wherever they went, even when they’re not using the app.

Source: MoviePass Is Tracking Your Location

Retina X ‘Stalkerware’ Shuts Down Apps ‘Indefinitely’ After Getting Hacked Again

A company that sells spyware to regular consumers is “immediately and indefinitely halting” all of its services, just a couple of weeks after a new damaging hack.

Retina-X Studios, which sells several products marketed to parents and employers to keep tabs on their children and employees—but also used by jealous partners to spy on their significant others—announced that it’s shutting down all its spyware apps on Tuesday with a message at the top of its website.

“Regrettably Retina-X Studios, which offers cutting edge technology that helps parents and employers gather important information on devices they own, has been the victim of sophisticated and repeated illegal hackings,” read the message, which was titled “important note” in all caps.


The company sells subscriptions to apps that allow the operator to access practically anything on a target’s phone or computer, such as text messages, emails, photos, and location information. Retina-X is just one of a slew of companies that sell such services, marketing them to everyday users—as opposed to law enforcement or intelligence agencies. Some critics call these apps “Stalkerware.”

Source: ‘Stalkerware’ Seller Shuts Down Apps ‘Indefinitely’ After Getting Hacked Again – Motherboard

The Car of the Future Will Sell Your Data

Picture this: You’re driving home from work, contemplating what to make for dinner, and as you idle at a red light near your neighborhood pizzeria, an ad offering $5 off a pepperoni pie pops up on your dashboard screen.

Are you annoyed that your car’s trying to sell you something, or pleasantly persuaded? Telenav Inc., a company developing in-car advertising software, is betting you won’t mind much. Car companies—looking to earn some extra money—hope so, too.

Automakers have been installing wireless connections in vehicles and collecting data for decades. But the sheer volume of software and sensors in new vehicles, combined with artificial intelligence that can sift through data at ever-quickening speeds, means new services and revenue streams are quickly emerging. The big question for automakers now is whether they can profit off all the driver data they’re capable of collecting without alienating consumers or risking backlash from Washington.

“Carmakers recognize they’re fighting a war over customer data,” said Roger Lanctot, who works with automakers on data monetization as a consultant for Strategy Analytics. “Your driving behavior, location, has monetary value, not unlike your search activity.”

Carmakers’ ultimate objective, Lanctot said, is to build a database of consumer preferences that could be aggregated and sold to outside vendors for marketing purposes, much like Google and Facebook do today.
[…]
Telenav, the Silicon Valley company looking to bring pop-up ads to your infotainment screen, has been testing a “freemium” model borrowed from streaming music services to entice drivers to share their data.

Say you can’t afford fancy features like embedded navigation or the ability to start your car through a mobile app. The original automaker will install them for free, so long as you’re willing to tolerate the occasional pop-up ad while idling at a red light. Owners of luxury cars won’t have to suffer such indignities, since the higher price tag paid likely would have already included an internet connection.
[…]
The pop-up car ads could generate an average of $30 annually per vehicle, to be split between Telenav and the automaker. He declined to say whether anyone has signed up for the software, which was just unveiled at CES, but added Telenav is in “deep discussions” with several manufacturers. Because of the long production cycles of the industry, it’ll be about three years before the ads will show up in new models.

Source: The Car of the Future Will Sell Your Data – Bloomberg

Of course they bring in the fear factor; they wouldn’t be honest and talk about the profit factor. As soon as people start trying to scare you, you know they are trying to con you:

Auto executives emphasize that data-crunching will allow them to build a better driving experience—enabling cars to predict flat tires, find a parking space or charging station, or alert city managers to dangerous intersections where there are frequent accidents. Data collection could even help shield drivers from crime, Ford Motor Co.’s chief executive officer said last month at the CES technology trade show.

“If a robber got in the car and took off, would you want us to know where that robber went to catch him?” Jim Hackett asked the audience during a keynote in Las Vegas. “Are you willing to trade that?”

You spend a huge amount on a car; I really don’t want it sending information back to the maker, much less having the maker sell that data!

Roses are red, Facebook is blue. Think private means private? More fool you

In a decision (PDF) handed down yesterday, chief judge Janet DiFiore said that a court could ask someone to hand over any relevant materials as part of discovery ahead of a trial – even if they are private.

The threshold for disclosure in a court case “is not whether the materials sought are private but whether they are reasonably calculated to contain relevant information”, she said.

The ruling is the latest in an ongoing battle over whether a woman injured in a horse-riding accident should hand over privately posted pictures to the man she has accused of negligence in the accident.

Kelly Forman suffered spinal and brain injuries after falling from a horse owned by Mark Henkins, whom she accuses of fitting her with a faulty stirrup.

Forman said the accident had led to memory loss and difficulty communicating, which she said caused her to become reclusive and have problems using a computer or composing coherent messages.

Because Forman said she had been a regular Facebook user before the accident, Henkins sought an order to gain access to posts and photos she made privately on Facebook before and after the accident, saying this would provide evidence on how her lifestyle had been affected.

For instance, the court noted he argued that “the timestamps on Facebook messages would reveal the amount of time it takes the plaintiff to write a post or respond to a message”.
[…]
The judge acknowledged Forman’s argument that disclosure of social media materials posted under private settings was an “unjustified invasion of privacy”, but said that other private materials relevant to litigation – including medical records – can be ordered for disclosure.

DiFiore also noted that, although the court was assuming, for the purposes of resolving the case, that setting a post to “private” meant that it should be characterised as such, there was “significant controversy” about this.

“Views range from the position taken by plaintiff that anything shielded by privacy settings is private, to the position taken by one commentator that anything contained in a social media website is not ‘private’,” she pointed out in a footnote.

Source: Roses are red, Facebook is blue. Think private means private? More fool you • The Register

Thanks to “consent” buried deep in sales agreements, car manufacturers are tracking tens of millions of US and EU cars

Millions of new cars sold in the US and Europe are “connected,” having some mechanism for exchanging data with their manufacturers after the cars are sold; these cars stream or batch-upload location data and other telemetry to their manufacturers, who argue that they are allowed to do virtually anything they want with this data, thanks to the “explicit consent” of the car owners — who signed a lengthy contract at purchase time that contained a vague and misleading clause deep in its fine-print.

Car manufacturers are mostly warehousing this data (leaving it vulnerable to leaks and breaches, search-warrants, government hacking and unethical employee snooping), and can’t articulate why they’re saving it or how they use it.

Much of this data ends up in “marketplaces” where data-sets from multiple auto-makers are merged, made uniform, and given identifiers that allow them to be cross-referenced with the massive corporate data-sets that already exist, and then offered on the open market to any bidder.

Source: Thanks to “consent” buried deep in sales agreements, car manufacturers are tracking tens of millions of US cars / Boing Boing

Microsoft whips out tool so you can measure Windows 10’s data-slurping creepiness

The software giant has produced a tool that’s claimed to show users how much personal information its Windows 10 operating system collects and sends back to Redmond for diagnostics. The application is dubbed Diagnostic Data Viewer, and is free from the Windows Store. It reveals that stuff like the computer’s device name, OS version, and serial number, as well as more detailed records such as installed apps, preference settings, and details on each application’s usage, are beamed back to Microsoft.
[…]
Microsoft says the Diagnostic Data Viewer will run separately from the Windows Privacy Dashboard that is bundled with Windows 10. That app will also be upgraded to provide users with more information on data collection, including activity history for the user’s Microsoft account.

Microsoft is also planning an update to the app to allow users to export dashboard reports, view media consumption information, and delete reported data (for some reason this isn’t already allowed).

The Dashboard and Data Viewer apps arrive after Microsoft was taken to task by governments for what many saw as overly intrusive data collection by Windows 10.

Source: Microsoft whips out tool so you can measure Windows 10’s data-slurping creepiness • The Register

US House reps green-light Fourth Amendment busting spy program

The US House of Representatives has passed a six-year extension to the controversial Section 702 spying program, rejecting an amendment that would have required the authorities to get a warrant before searching for information on US citizens.

The 256-164 vote effectively retains the status quo and undermines a multi-year effort to bring accountability to a program that critics argue breaks the Constitution. A bipartisan substitute amendment put forward by House reps Justin Amash (R-MI) and Zoe Lofgren (D-CA) and supported by both ends of the political spectrum was defeated 233-183.

[…]

The already tense atmosphere in Washington DC over the issue was heightened when President Trump tweeted his apparent support of critics of the program just moments after the Amash-Lofgren amendment was discussed on Fox News.

Source: US House reps green-light Fourth Amendment busting spy program • The Register

OnePlus Android mobes’ clipboard app caught phoning home to China

OnePlus has admitted that the clipboard app in a beta build of its Android OS was beaming back mystery data to a cloud service in China.

Someone running the latest test version of OnePlus’s Oreo-based operating system revealed in its support forums that unusual activity from the built-in clipboard manager had been detected by a firewall tool.

Upon closer inspection, the punter found that the app had been transmitting information to a block of IP addresses registered to Alibaba, the Chinese e-commerce and cloud hosting giant.
[…]
This should not come as much of a shock to those who follow the China-based OnePlus. In October last year, researchers discovered that OnePlus handsets were collecting unusually detailed reports on user activities, although the manufacturer said at the time it was only hoarding the data for its internal analytics. One month later, it was discovered that some phones had apparently been shipped with a developer kit left active, resulting in the phones sporting a hidden backdoor.

And lest we forget, today’s desktop and mobile operating systems are pretty gung-ho in phoning home information about their users, with Microsoft catching flak for Windows 10 telemetry in particular. ®

Source: OnePlus Android mobes’ clipboard app caught phoning home to China