New Slack Tool Lets Your Boss Potentially Access Far More of Your Data Than Before, Without Notification

According to Slack’s new guidelines, Compliance Exports will be replaced by “a self-service export tool” on April 20th. Previously, an employer had to request a data dump of all communications to get access to private channels and direct messages. This new tool should streamline things so they can archive all your shit-talk and time-wasting with colleagues on a regular basis. The tool not only makes it easy for an admin to access everything with a few clicks, it also enables automatic exports to be scheduled on a daily, weekly, or monthly basis. An employer still has to go through a request process to get the tool, but Slack declined to elaborate on what that process involves.

What’s particularly concerning is that Compliance Exports were designed to notify users when they were enabled, and future exports only covered data that was generated after that notification. A spokesperson for Slack confirmed to Gizmodo that this won’t be the case going forward. The new tool will be able to export all of the data that your Slack settings previously retained. Before, if you were up on Slack policy, you could feel pretty comfortable that your private conversations were private unless you got that Compliance Exports notification; after the notification, you’d want to make sure you didn’t discuss potentially sensitive topics in Slack. Now, anyone who was under the impression that they were relatively safe might have some cause to worry.

Source: New Slack Tool Lets Your Boss Potentially Access Far More of Your Data Than Before

How to Find Out Everything Facebook Knows About You

If you can’t bring yourself to delete your Facebook account entirely, you’re probably thinking about sharing a lot less private information on the site. The company actually makes it pretty easy to find out how much data it’s collected from you, but the results might be a little scary.

When software developer Dylan McKay downloaded all of his data from Facebook, he was shocked to find that the social network had timestamps on every phone call and SMS message he made in the past few years, even though he says he doesn’t use the app for calls or texts. It even created a log of every call between McKay and his partner’s mom.

To get your own data dump, head to your Facebook Settings and click on “Download a copy of your data” at the bottom of the page. Facebook needs a little time to compile all that information, but it should be ready in about 10 minutes based on my own experience. You’ll receive a notification sending you to a page where you can download the data—after re-entering your account password, of course.

The (likely huge) file downloads onto your computer as a ZIP. Once you extract it, open the new folder and click on the “index.html” to view the data in your browser.
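The unpack-and-view step can also be scripted. A minimal sketch (the archive filename is a placeholder; use whatever name Facebook gave your download):

```python
import webbrowser
import zipfile
from pathlib import Path

def extract_archive(archive_path: str) -> Path:
    """Extract a Facebook data export ZIP and return the path to index.html."""
    archive = Path(archive_path)
    dest = archive.with_suffix("")  # e.g. facebook-data.zip -> facebook-data/
    with zipfile.ZipFile(archive) as z:
        z.extractall(dest)
    return dest / "index.html"

# Hypothetical filename; uncomment to open the report in your browser:
# index_page = extract_archive("facebook-data.zip")
# webbrowser.open(index_page.resolve().as_uri())
```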

Be sure to check out the Contact Info tab for a list of everyone you’ve ever known and their phone number (creepy, Facebook). You can also scroll down to the bottom of the Friends tab to see what phase of your life Facebook thinks you’re in—I got “Starting Adult Life.”

Source: How to Find Out Everything Facebook Knows About You

US cops go all Minority Report: Google told to cough up info on anyone near a crime scene

Efforts to track down criminals in the US state of North Carolina have laid bare a dangerous gap in the law over the use of location data.

Raleigh police went to court at least three times last year and got a warrant requiring Google to share the details of any users that were close to crime scenes during specific times and dates.

The first crime was the murder of a cab driver in November 2016, the second an arson attack in March 2017 and the third, sexual battery, in August 2017 – suggesting that the police force is using the approach to gather potentially incriminating evidence for progressively less serious crimes.

In each case, the cops used GPS coordinates to draw a rough rectangle around the areas of interest – covering nearly 20 acres in the murder case – and asked for the details of any users that entered those areas in time periods of between 60 and 90 minutes, e.g. between 18:00 and 19:30.
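Conceptually, each warrant asks: which devices reported a location inside this rectangle during this window? A minimal sketch of that filter (the field names, coordinates, and device identifiers are invented for illustration):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class LocationPing:
    device_id: str  # hypothetical pseudonymous device identifier
    lat: float
    lon: float
    timestamp: datetime

def in_geofence(ping, south, north, west, east, start, end):
    """True if the ping falls inside the lat/lon rectangle and time window."""
    return (south <= ping.lat <= north
            and west <= ping.lon <= east
            and start <= ping.timestamp <= end)

# Invented pings roughly around Raleigh, NC
pings = [
    LocationPing("device-a", 35.78, -78.64, datetime(2016, 11, 1, 18, 30)),
    LocationPing("device-b", 35.90, -78.50, datetime(2016, 11, 1, 18, 45)),
]
window = (datetime(2016, 11, 1, 18, 0), datetime(2016, 11, 1, 19, 30))
matches = [p.device_id for p in pings
           if in_geofence(p, 35.77, 35.79, -78.65, -78.63, *window)]
print(matches)  # ['device-a']
```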

The warrants were granted by a judge complete with an order to prevent disclosure so Google was legally prevented from informing impacted users that their details had been shared with law enforcement. Google complied with the warrants.

It is worth noting that the data haul is not limited to users of Google hardware (i.e. phones running Android) but covers any phone that ran Google apps – which encompasses everything from its driving app service to its calendar, browser, predictive keyboard and so on.

Source: US cops go all Minority Report: Google told to cough up info on anyone near a crime scene • The Register

Such broad investigation seems like a real breach of privacy to me. That Google collects this information in a fashion where it can be so easily handed over is a real shocker.

Telegram Loses Bid to Block Russia From Encryption Keys

Telegram, the encrypted messaging app that’s prized by those seeking privacy, lost a bid before Russia’s Supreme Court to block security services from getting access to users’ data, giving President Vladimir Putin a victory in his effort to keep tabs on electronic communications.

Supreme Court Judge Alla Nazarova on Tuesday rejected Telegram’s appeal against the Federal Security Service, the successor to the KGB spy agency, which last year asked the company to share its encryption keys. Telegram declined to comply and was hit with a $14,000 fine. Communications regulator Roskomnadzor said Telegram now has 15 days to provide the encryption keys.

Telegram, which is in the middle of an initial coin offering of as much as $2.55 billion, plans to appeal the ruling in a process that may last into the summer, according to the company’s lawyer, Ramil Akhmetgaliev. Any decision to block the service would require a separate court ruling, the lawyer said.

“Threats to block Telegram unless it gives up private data of its users won’t bear fruit. Telegram will stand for freedom and privacy,” Pavel Durov, the company’s founder, said on his Twitter page.

Putin signed laws in 2016 on fighting terrorism, which included a requirement for messaging services to provide the authorities with means to decrypt user correspondence. Telegram challenged an auxiliary order by the Federal Security Service, claiming that the procedure doesn’t involve a court order and breaches constitutional rights for privacy, according to documents.

The security agency, known as the FSB, argued in court that obtaining the encryption keys doesn’t violate users’ privacy because the keys by themselves aren’t considered information of restricted access. Collecting data on particular suspects using the keys would still require a court order, the agency said.

“The FSB’s argument that encryption keys can’t be considered private information defended by the Constitution is cunning,” Akhmetgaliev, Telegram’s lawyer, told reporters after the hearing. “It’s like saying, ‘I’ve got a password from your email, but I don’t control your email, I just have the possibility to control.’”

Source: Telegram Loses Bid to Block Russia From Encryption Keys – Bloomberg

Booking Flights: Our Data Flies with Us – the huge dataset described

Every time you book a flight, you generate personal data that is ripe for harvesting: information like the details on an ID card, your address, your passport information and your travel itinerary, as well as your frequent-flyer number, method of payment and travel preferences (dietary restrictions, mobility restrictions, etc.). All that data becomes part of a registry, in the form of a Passenger Name Record (PNR) – a generic name given to records created by aircraft operators or their authorised agents for each journey booked by or on behalf of any passenger.

When we book a flight or travel itinerary, the travel agent or booking website creates our PNR. Most airlines or travel agents choose to host their PNR databases on a specialised computer reservation system (CRS) or a Global Distribution System (GDS), which coordinates the information from all the travel agents and airlines worldwide, to avoid things like duplicated flight reservations. This means that CRSs/GDSs centralise and store vast amounts of data about travellers. Though we are focusing on air travel here, it is important to note that the PNR is not only flight-related. It can also include other services such as car rentals, hotel reservations and train trips.
[…]
A PNR isn’t necessarily created all at once. If we use the same agency or airline to book our flight and other services, like a hotel, the agency will use the same PNR. Therefore, information from many different sources will be gradually added to our PNR through different channels over time. That means the dataset is much larger than just the flight info: a PNR can contain data as important as our exact whereabouts at specific points in time.

What are the implications of all this for our privacy? The journalist and travel advocate Edward Hasbrouck has been researching and denouncing the PNR’s effects on privacy in the US for decades. In Europe, organisations like European Digital Rights (EDRi) have also criticised PNRs extensively through their advocacy and awareness campaigns. According to Hasbrouck:

PNR data reveals our associations, our activities, and our tastes and preferences. It shows where we went, when, with whom, for how long, and at whose expense. Through departmental and project billing codes, business travel PNR’s reveal confidential internal corporate and other organisation structures and lines of authority and show which people were involved in work together, even if they travelled separately. PNRs typically contain credit card numbers, telephone numbers, email addresses, and IP addresses, allowing them to be easily merged with financial and communications metadata.

Your individual PNR also contains a section for free-text “remarks” that can be entered by the airline, the travel agency, a tour operator, a third-party call centre or the staff of the ground-handling contractor. Such texts might include sensitive and private information, like special meal requests and particular medical needs. This may seem innocuous, but information like special meal requests can indicate our religious or political affiliations, especially when it is cross-referenced with other details included in our PNR. Regardless of whether the profile assigned to us is accurate, the repercussions and implications of that profiling are concerning – especially in the absence of public awareness about them.
[…]
In the United States, PNRs are stored in the Automated Targeting System-Passenger (ATS-P), where they become part of an active database for up to five years (after the first six months, they are de-personalised and masked). After five years, the data is transferred to a dormant database for up to ten more years, where it remains available for counter-terrorism purposes for the full duration of its 15-year retention.

According to Edward Hasbrouck, PNRs cannot be deleted: once created, they are archived and retained in the computer reservation systems and/or Global Distribution Systems (CRS/GDS), and can still be viewed, even if we never bought a ticket and cancelled our reservations:

“CRS’s retain flown, archived, purged, and deleted PNR’s indefinitely. It doesn’t really matter whether governments store copies of entire PNR’s or only portions of them, whether they filter out certain especially “sensitive” data from their copies of PNR’s, or for how long they retain them. As long as a government agency has the record locator or the airline name, flight number, and date, they can retrieve the complete PNR from the CRS. That’s especially true for the U.S. government, since even PNR’s created by airlines, travel agencies, tour operators, or airline offices in other countries, for flights within and between other countries that don’t touch the USA, are routinely stored in CRS’s based in the USA.
[…]
Under EU regulations, governments can retain PNR data for a maximum of five years, to allow law-enforcement officials to access it if necessary. The regulations state that after six months, the data is masked out or anonymised. But according to research by the EDRi, records are not necessarily anonymised or encrypted, and, in fact, the data can be easily re-personalised.
[…]
PNR is a relatively old system, pre-dating the internet as we know it today. Airlines have built their own systems on top of it, allowing passengers to make adjustments to their reservations using a six-character booking confirmation number, or PNR locator. Although the PNR system was originally designed to facilitate the sharing of information rather than its protection, in the current digital environment, with the cyber-threats facing our data online, the system needs to be updated to keep up with the existing risks. PNRs are information-rich files that are not only of interest to governments; they are also valuable to third parties – whether corporations or adversaries. Potential uses of the data could include anything from marketing research to hacks aimed at obtaining our personal information for financial scams, or even doxxing or inflicting harm on activists.

According to Hasbrouck, the controls over who can access PNR data are insufficient, and there are no limitations on how CRS/GDS users (whether governments or travel agents) can access it. Furthermore, there are no records of when a CRS/GDS user has retrieved a PNR, from where they retrieved the record, or for what purpose. This means that any travel agent or any government can retrieve our PNR and access all the data it contains, no questions asked and without leaving a trace.
[…]
Photos of our tickets or luggage tags pose particular risks because of the sensitive information printed on them. In addition to our name and flight information, they also include our PNR locator, though sometimes only inside the barcode. Even if we cannot “see” information in the barcodes or sequences of letters and numbers on our tickets, other people may be able to derive meaning from them.
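Boarding-pass barcodes typically follow IATA’s Bar Coded Boarding Pass (BCBP) layout, in which the PNR locator sits at a fixed offset in plain text, which is why a photo of a luggage tag or ticket stub can leak the booking reference. A simplified sketch of decoding the mandatory single-leg fields (conditional fields are ignored; real implementations should follow the IATA specification):

```python
def parse_bcbp(data: str) -> dict:
    """Parse the mandatory fields of a single-leg IATA BCBP barcode payload
    (fixed-width layout; simplified, conditional fields ignored)."""
    return {
        "passenger_name": data[2:22].strip(),
        "pnr_locator": data[23:30].strip(),  # the booking reference
        "from_airport": data[30:33],
        "to_airport": data[33:36],
        "carrier": data[36:39].strip(),
        "flight_number": data[39:44].strip(),
    }

# A commonly cited sample payload from the BCBP documentation:
sample = ("M1DESMARAIS/LUC       "  # format code, leg count, name (20 chars)
          "EABC123 "                # e-ticket flag + PNR locator (7 chars)
          "YULFRAAC 0834 226F001A0025 100")
print(parse_bcbp(sample)["pnr_locator"])  # ABC123
```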

Source: Booking Flights: Our Data Flies with Us – Our Data Our Selves

Palantir has secretly been using New Orleans to test its predictive policing technology, and was given huge access to private data without oversight due to a loophole

The program began in 2012 as a partnership between New Orleans Police and Palantir Technologies, a data-mining firm founded with seed money from the CIA’s venture capital firm. According to interviews and documents obtained by The Verge, the initiative was essentially a predictive policing program, similar to the “heat list” in Chicago that purports to predict which people are likely drivers or victims of violence.

The partnership has been extended three times, with the third extension scheduled to expire on February 21st, 2018. The city of New Orleans and Palantir have not responded to questions about the program’s current status.

Predictive policing technology has proven highly controversial wherever it is implemented, but in New Orleans, the program escaped public notice, partly because Palantir established it as a philanthropic relationship with the city through Mayor Mitch Landrieu’s signature NOLA For Life program. Thanks to its philanthropic status, as well as New Orleans’ “strong mayor” model of government, the agreement never passed through a public procurement process.

In fact, key city council members and attorneys contacted by The Verge had no idea that the city had any sort of relationship with Palantir, nor were they aware that Palantir used its program in New Orleans to market its services to another law enforcement agency for a multimillion-dollar contract.

Even within the law enforcement community, there are concerns about the potential civil liberties implications of the sort of individualized prediction Palantir developed in New Orleans, and whether it’s appropriate for the American criminal justice system.

“They’re creating a target list, but we’re not going after Al Qaeda in Syria,” said a former law enforcement official who has observed Palantir’s work first-hand as well as the company’s sales pitches for predictive policing. The former official spoke on condition of anonymity to freely discuss their concerns with data mining and predictive policing. “Palantir is a great example of an absolutely ridiculous amount of money spent on a tech tool that may have some application,” the former official said. “However, it’s not the right tool for local and state law enforcement.”

Six years ago, one of the world’s most secretive and powerful tech firms developed a contentious intelligence product in a city that has served as a neoliberal laboratory for everything from charter schools to radical housing reform since Hurricane Katrina. Because the program was never public, important questions about its basic functioning, risk for bias, and overall propriety were never answered.
[…]
Palantir’s prediction model in New Orleans used an intelligence technique called social network analysis (or SNA) to draw connections between people, places, cars, weapons, addresses, social media posts, and other indicia in previously siloed databases. Think of the analysis as a practical version of a Mark Lombardi painting that highlights connections between people, places, and events. After entering a query term — like a partial license plate, nickname, address, phone number, or social media handle or post — NOPD’s analyst would review the information scraped by Palantir’s software and determine which individuals are at the greatest risk of either committing violence or becoming a victim, based on their connection to known victims or assailants.
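At its core, this kind of social network analysis is graph traversal: people, phones, addresses, and cars become nodes, shared records become edges, and risk is scored partly by proximity to known victims or assailants. A minimal sketch using breadth-first search (the entity names and records are invented; this is not Palantir's actual model):

```python
from collections import defaultdict, deque

def build_graph(records):
    """records: (entity_a, entity_b) pairs from shared reports, stops, etc."""
    graph = defaultdict(set)
    for a, b in records:
        graph[a].add(b)
        graph[b].add(a)
    return graph

def degrees_of_separation(graph, source):
    """BFS: shortest hop count from source to every reachable entity."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for neighbour in graph[node]:
            if neighbour not in dist:
                dist[neighbour] = dist[node] + 1
                queue.append(neighbour)
    return dist

# Invented field-interview / co-report records
records = [("victim1", "personA"), ("personA", "personB"), ("personB", "personC")]
graph = build_graph(records)
print(degrees_of_separation(graph, "victim1"))
# {'victim1': 0, 'personA': 1, 'personB': 2, 'personC': 3}
```

An analyst's risk ranking would then weight entities by how few hops separate them from known victims or shooters, among other signals.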

The data on individuals came from information scraped from social media as well as NOPD criminal databases for ballistics, gangs, probation and parole information, jailhouse phone calls, calls for service, the central case management system (i.e., every case NOPD had on record), and the department’s repository of field interview cards. The latter database represents every documented encounter NOPD has with citizens, even those that don’t result in arrests. In 2010, The Times-Picayune revealed that Chief Serpas had mandated that the collection of field interview cards be used as a measure of officer and district performance, resulting in over 70,000 field interview cards filled out in 2011 and 2012. The practice resembled NYPD’s “stop and frisk” program and was instituted with the express purpose of gathering as much intelligence on New Orleanians as possible, regardless of whether or not they committed a crime.
[…]
NOPD then used the list of potential victims and perpetrators of violence generated by Palantir to target individuals for the city’s CeaseFire program. CeaseFire is a form of the decades-old carrot-and-stick strategy developed by David Kennedy, a professor at John Jay College in New York. In the program, law enforcement informs potential offenders with criminal records that they know of their past actions and will prosecute them to the fullest extent if they re-offend. If the subjects choose to cooperate, they are “called in” to a required meeting as part of their conditions of probation and parole and are offered job training, education, potential job placement, and health services. In New Orleans, the CeaseFire program is run under the broader umbrella of NOLA For Life, which is Mayor Landrieu’s pet project that he has funded through millions of dollars from private donors.

According to Serpas, the person who initially ran New Orleans’ social network analysis from 2013 through 2015 was Jeff Asher, a former intelligence agent who joined NOPD from the CIA. If someone had been shot, Serpas explained, Asher would use Palantir’s software to find people associated with them through field interviews or social media data. “This data analysis brings up names and connections between people on FIs [field interview cards], on traffic stops, on victims of reports, reporting victims of crimes together, whatever the case may be. That kind of information is valuable for anybody who’s doing an investigation,” Serpas said.
[…]
Of the 308 people who participated in call-ins from October 2012 through March 2017, seven completed vocational training, nine completed “paid work experience,” none finished a high school diploma or GED course, and 32 were employed at one time or another through referrals. Fifty participants were detained following their call-in, and two have since died.

By contrast, law enforcement vigorously pursued its end of the program. From November 2012, when the new Multi-Agency Gang Unit was founded, through March 2014, racketeering indictments escalated: 83 alleged gang members in eight gangs were indicted in the 16-month period, according to an internal Palantir presentation.
[…]
Call-ins declined precipitously after the first few years. According to city records, eight group call-ins took place from 2012 to 2014, but only three took place in the following three years. Robert Goodman, a New Orleans native who became a community activist after completing a prison sentence for murder, worked as a “responder” for the city’s CeaseFire program until August 2016, discouraging people from engaging in retaliatory violence. Over time, Goodman noticed more of an emphasis on the “stick” component of the program, and more control by city hall over its non-punitive aspects, which he believes undermined the intervention work. “It’s supposed to be ran by people like us instead of the city trying to dictate to us how this thing should look,” he said. “As long as they’re not putting resources into the hoods, nothing will change. You’re just putting on Band-Aids.”

After the first two years of Palantir’s involvement with NOPD, the city saw a marked drop in murders and gun violence, but it was short-lived. Even former NOPD Chief Serpas believes that the preventative effect of calling in dozens of at-risk individuals — and indicting dozens of them — began to diminish.

“When we ended up with nearly nine or 10 indictments with close to 100 defendants for federal or state RICO violations of killing people in the community, I think we got a lot of people’s attention in that criminal environment,” Serpas said, referring to the racketeering indictments. “But over time, it must’ve wore off because before I left in August of ‘14, we could see that things were starting to slide.”

Nick Corsaro, the University of Cincinnati professor who helped build NOPD’s gang database, also worked on an evaluation of New Orleans’ CeaseFire strategy. He found that New Orleans’ overall decline in homicides coincided with the city’s implementation of the CeaseFire program, but the Central City neighborhoods targeted by the program “did not have statistically significant declines that corresponded with November 2012 onset date.”
[…]
The secrecy surrounding the NOPD program also raises questions about whether defendants have been given evidence they have a right to view. Sarah St. Vincent, a researcher at Human Rights Watch, recently published an 18-month investigation into parallel construction, or the practice of law enforcement concealing evidence gathered from surveillance activity. In an interview, St. Vincent said that law enforcement withholding intelligence gathering or analysis like New Orleans’ predictive policing work effectively kneecaps the checks and balances of the criminal justice system. At the Cato Institute’s 2017 Surveillance Conference in December, St. Vincent raised concerns about why information garnered from predictive policing systems was not appearing in criminal indictments or complaints.

“It’s the role of the judge to evaluate whether what the government did in this case was legal,” St. Vincent said of the New Orleans program. “I do think defense attorneys would be right to be concerned about the use of programs that might be inaccurate, discriminatory, or drawing from unconstitutional data.”

If Palantir’s partnership with New Orleans had been public, the issues of legality, transparency, and propriety could have been hashed out in a public forum during an informed discussion with legislators, law enforcement, the company, and the public. For six years, that never happened.

Source: Palantir has secretly been using New Orleans to test its predictive policing technology – The Verge

One of the big problems here is that there was no public knowledge of, and hardly any oversight over, the program. Nobody knows whether the system was implemented fairly or cost-effectively (the costs are huge!), or even whether it works. It seemed to work for a while, but the effects appeared to drop off after two years of operation, mainly because the city leaned more and more on the “stick” method of countering crime and got rid of the “carrot”. The amount of private data given to Palantir without any discussion or consent is worrying, to say the least.

119,000 Passports and Photo IDs of FedEx Customers Found on Unsecured Amazon Server

Thousands of FedEx customers were exposed after the company left scanned passports, driver’s licenses, and other documentation on a publicly accessible Amazon S3 server.

The scanned IDs originated from countries all over the world, including the United States, Mexico, Canada, Australia, Saudi Arabia, Japan, China, and several European countries. The IDs were attached to forms that included several pieces of personal information, including names, home addresses, phone numbers, and zip codes.

The server, discovered by researchers at the Kromtech Security Center, was secured as of Tuesday.

According to Kromtech, the server belonged to Bongo International LLC, a company that aided customers in performing shipping calculations and currency conversions, among other services. Bongo was purchased by FedEx in 2014 and renamed FedEx Cross-Border International a little over a year later. The service was discontinued in April 2017.
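Exposures like this are typically found by sending an unauthenticated listing request to the bucket: a public bucket answers with an XML list of object keys, while a properly locked-down one returns 403 AccessDenied. A rough sketch (the bucket names are placeholders, and the actual network call is left commented out):

```python
import urllib.error
import urllib.request

def listing_url(bucket: str) -> str:
    """Unauthenticated ListObjectsV2 request URL for an S3 bucket."""
    return f"https://{bucket}.s3.amazonaws.com/?list-type=2"

def bucket_is_listable(bucket: str) -> bool:
    """True if the bucket returns its key listing without credentials."""
    try:
        with urllib.request.urlopen(listing_url(bucket), timeout=10) as resp:
            return resp.status == 200
    except (urllib.error.HTTPError, urllib.error.URLError):
        return False  # 403 AccessDenied, 404, or network failure

# print(bucket_is_listable("some-bucket-name"))  # hypothetical bucket
print(listing_url("example-bucket"))
```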

Source: 119,000 Passports and Photo IDs of FedEx Customers Found on Unsecured Amazon Server

2017: Dutch Military Intelligence (MIVD) placed 348 taps, Internal Intelligence (AIVD) 3,205. No idea how many the police placed, but wow, that’s a lot!

The MIVD tapped a total of 348 times last year. The AIVD placed 3,205 taps that year. Today, both services published their tap statistics for the period 2002 through 2017 on their websites.

Source: MIVD tapte vorig jaar 348 keer | Nieuwsbericht | Defensie.nl


And of course we have no idea how many of these taps led to arrests or action.

How to Disable Facebook’s Facial Recognition Feature

To turn off facial recognition on your computer, click on the down arrow at the top of any Facebook page and then select Settings. From there, click “Face Recognition” from the left column, and then click “Do you want Facebook to be able to recognize you in photos and videos?” Select Yes or No based on your personal preferences.

On mobile, click on the three dots below your profile pic labeled “More,” then select “View Privacy Shortcuts,” then “More Settings,” followed by “Facial Recognition.” Click on the “Do you want Facebook to be able to recognize you in photos and videos?” button and select “No” to disable the feature.
[…]
The setting isn’t available in all countries, and will only appear as an option in your profile if you’re at least 18 years old and have the feature available to you.

Source: How to Disable Facebook’s Facial Recognition Feature

The 600+ Companies PayPal Shares Your Data With – Schneier on Security

One of the effects of GDPR — the new EU General Data Protection Regulation — is that we’re all going to be learning a lot more about who collects our data and what they do with it. Consider PayPal, which just released a list of over 600 companies it shares customer data with. Here’s a good visualization of that data.

Is 600 companies unusual? Is it more than average? Less? We’ll soon know.

Source: The 600+ Companies PayPal Shares Your Data With – Schneier on Security

Madison Square Garden Has Used Face-Scanning Technology on Customers

Madison Square Garden has quietly used facial-recognition technology to bolster security and identify those entering the building, according to multiple people familiar with the arena’s security procedures.

The technology uses cameras to capture images of people, and then an algorithm compares the images to a database of photographs to help identify the person and, when used for security purposes, to determine if the person is considered a problem. The technology, which is sometimes used for marketing and promotions, has raised concerns over personal privacy and the security of any data that is stored by the system.
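Under the hood, systems like this usually reduce each face image to a numeric embedding and compare embeddings with a similarity measure such as cosine similarity. A toy sketch of that matching step (the vectors, names, and threshold are invented; real systems derive embeddings of 128+ dimensions from a neural network):

```python
import math

def norm(v):
    return math.sqrt(sum(x * x for x in v))

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors (1.0 = identical direction)."""
    return sum(x * y for x, y in zip(a, b)) / (norm(a) * norm(b))

def best_match(probe, database, threshold=0.9):
    """Return the enrolled identity most similar to the probe embedding,
    or None if no one clears the threshold."""
    best_id, best_score = None, threshold
    for identity, embedding in database.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_id, best_score = identity, score
    return best_id

# Toy 3-dimensional enrollment database
database = {"person_x": [0.9, 0.1, 0.2], "person_y": [0.1, 0.9, 0.3]}
print(best_match([0.88, 0.12, 0.21], database))  # person_x
```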

Source: Madison Square Garden Has Used Face-Scanning Technology on Customers

What is your personal info worth to criminals? There’s a dark web market price index for that

Your entire online identity could be worth little more than £800, according to brand new research into the illicit sale of stolen personal info on the dark web (or just $1,200 if you are in the United States, according to the US edition of the index). While it may be no surprise to learn that credit card details are the most traded, did you know that fraudsters are hacking Uber, Airbnb, Spotify and Netflix accounts and selling them for little more than £5 each?

Everything has a price on the dark web, it seems. PayPal accounts with a healthy balance attract the highest prices (£280 on average). At the other end of the scale, though, hacked Deliveroo or Tesco accounts sell for less than £5. Cybercriminals can easily spend more on their lunchtime sandwich than on buying up stolen credentials for online shopping accounts like Argos (£3) and ASOS (£1.50).

The average person has dozens of accounts that form their online identity, all of which can be hacked and sold. Our team of security experts reviewed tens of thousands of listings on three of the most popular dark web markets, Dream, Point and Wall Street Market. These encrypted websites, which can only be reached using the Tor browser, allow criminals to anonymously sell stolen personal info, along with all sorts of other contraband, such as illicit drugs and weapons.

We focused on listings featuring stolen ID, hacked accounts and personal info relevant to the UK to create the Dark Web Market Price Index. We calculated average sale prices for each item and were shocked to see that £820 is all it would cost to buy up someone’s entire identity, if all the listed items were available.
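The methodology described (averaging listed prices per item across scraped listings) amounts to a simple group-and-average. A sketch with invented figures, not numbers from the actual index:

```python
from collections import defaultdict

def average_prices(listings):
    """listings: (item, price) pairs scraped from market pages."""
    totals = defaultdict(lambda: [0.0, 0])
    for item, price in listings:
        totals[item][0] += price
        totals[item][1] += 1
    return {item: total / count for item, (total, count) in totals.items()}

# Made-up example listings for illustration only
listings = [("paypal", 250.0), ("paypal", 310.0),
            ("netflix", 5.0), ("netflix", 6.0)]
print(average_prices(listings))  # {'paypal': 280.0, 'netflix': 5.5}
```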

Source: Dark Web Market Price Index (Feb 2018 – UK Edition) | Top10VPN.com

MoviePass Is Tracking Your Location

According to Media Play News, MoviePass CEO Mitch Lowe had some interesting things to say during his Hollywood presentation that took place late last week, entitled “New Oil: How Will MoviePass Monetize It?” Most notably, he openly admitted that his app tracks people’s location, even when they’re not actively using the app:

“We get an enormous amount of information… We know all about. We watch how you drive from home to the movies. We watch where you go afterwards.”

Lowe also commented on how they knew subscribers’ addresses, their demographics, and how they can track subs via the app and the phone’s GPS. This drew nervous laughter from the crowd—many of whom were MoviePass subscribers themselves—but Lowe assured them that this collecting of tracking data fits into their long-term revenue plan. He explained that their vision is to “build a night at the movies,” with MoviePass eventually directing subscribers to places to eat before movies, and places to grab drinks afterward (all for a cut from the vendors).

We knew MoviePass was collecting data on us from the start—that’s how they plan to make their money—so how is this any different? Well, subscribers claim the company didn’t clearly disclose such persistent location tracking in its privacy policy. In regard to location tracking, the privacy policy mentions a “single request” in a section titled “Check ins” that’s used when you’re selecting a theater and movie to watch. However, the section also mentions real-time location data “as a means to develop, improve and personalize the service.” It’s a vague statement that could mean just about anything, but it’s understandable if users didn’t assume it meant being watched wherever they went, even when they weren’t using the app.

Source: MoviePass Is Tracking Your Location

Retina X ‘Stalkerware’ Shuts Down Apps ‘Indefinitely’ After Getting Hacked Again

A company that sells spyware to regular consumers is “immediately and indefinitely halting” all of its services, just a couple of weeks after a new damaging hack.

Retina-X Studios, which sells several products marketed to parents and employers to keep tabs on their children and employees—but also used by jealous partners to spy on their significant others—announced on Tuesday that it’s shutting down all its spyware apps, with a message at the top of its website.

“Regrettably Retina-X Studios, which offers cutting edge technology that helps parents and employers gather important information on devices they own, has been the victim of sophisticated and repeated illegal hackings,” read the message, which was titled “important note” in all caps.

The company sells subscriptions to apps that allow the operator to access practically anything on a target’s phone or computer, such as text messages, emails, photos, and location information. Retina-X is just one of a slew of companies that sell such services, marketing them to everyday users—as opposed to law enforcement or intelligence agencies. Some critics call these apps “stalkerware.”

Source: ‘Stalkerware’ Seller Shuts Down Apps ‘Indefinitely’ After Getting Hacked Again – Motherboard

The Car of the Future Will Sell Your Data

Picture this: You’re driving home from work, contemplating what to make for dinner, and as you idle at a red light near your neighborhood pizzeria, an ad offering $5 off a pepperoni pie pops up on your dashboard screen.

Are you annoyed that your car’s trying to sell you something, or pleasantly persuaded? Telenav Inc., a company developing in-car advertising software, is betting you won’t mind much. Car companies—looking to earn some extra money—hope so, too.

Automakers have been installing wireless connections in vehicles and collecting data for decades. But the sheer volume of software and sensors in new vehicles, combined with artificial intelligence that can sift through data at ever-quickening speeds, means new services and revenue streams are quickly emerging. The big question for automakers now is whether they can profit off all the driver data they’re capable of collecting without alienating consumers or risking backlash from Washington.

“Carmakers recognize they’re fighting a war over customer data,” said Roger Lanctot, who works with automakers on data monetization as a consultant for Strategy Analytics. “Your driving behavior, location, has monetary value, not unlike your search activity.”

Carmakers’ ultimate objective, Lanctot said, is to build a database of consumer preferences that could be aggregated and sold to outside vendors for marketing purposes, much like Google and Facebook do today.
[…]
Telenav, the Silicon Valley company looking to bring pop-up ads to your infotainment screen, has been testing a “freemium” model borrowed from streaming music services to entice drivers to share their data.

Say you can’t afford fancy features like embedded navigation or the ability to start your car through a mobile app. The automaker will install them for free, so long as you’re willing to tolerate the occasional pop-up ad while idling at a red light. Owners of luxury cars won’t have to suffer such indignities, since the higher price tag likely already covers an internet connection.
[…]
The pop-up car ads could generate an average of $30 annually per vehicle, to be split between Telenav and the automaker. The company declined to say whether anyone has signed up for the software, which was just unveiled at CES, but added that Telenav is in “deep discussions” with several manufacturers. Because of the industry’s long production cycles, it’ll be about three years before the ads show up in new models.

Source: The Car of the Future Will Sell Your Data – Bloomberg

Of course they bring in the fear factor; they wouldn’t be honest and talk about the profit factor. As soon as people start trying to scare you, you know they’re trying to con you.

Auto executives emphasize that data-crunching will allow them to build a better driving experience—enabling cars to predict flat tires, find a parking space or charging station, or alert city managers to dangerous intersections where there are frequent accidents. Data collection could even help shield drivers from crime, Ford Motor Co.’s chief executive officer said last month at the CES technology trade show.

“If a robber got in the car and took off, would you want us to know where that robber went to catch him?” Jim Hackett asked the audience during a keynote in Las Vegas. “Are you willing to trade that?”

You spend huge amounts on a car; I really, really don’t want it sending information back to the maker, much less having the maker sell that data!

Roses are red, Facebook is blue. Think private means private? More fool you

In a decision (PDF) handed down yesterday, chief judge Janet DiFiore said that a court could ask someone to hand over any relevant materials as part of discovery ahead of a trial – even if they are private.

The threshold for disclosure in a court case “is not whether the materials sought are private but whether they are reasonably calculated to contain relevant information”, she said.

The ruling is the latest in an ongoing battle over whether a woman injured in a horse-riding accident should hand over privately posted pictures to the man she has accused of negligence in the accident.

Kelly Forman suffered spinal and brain injuries after falling from a horse owned by Mark Henkins, who she accuses of fitting her with a faulty stirrup.

Forman said the accident had led to memory loss and difficulty communicating, which she said caused her to become reclusive and have problems using a computer or composing coherent messages.

Because Forman said she had been a regular Facebook user before the accident, Henkins sought an order to gain access to posts and photos she made privately on Facebook before and after the accident, saying this would provide evidence on how her lifestyle had been affected.

For instance, the court noted he argued that “the timestamps on Facebook messages would reveal the amount of time it takes the plaintiff to write a post or respond to a message”.
[…]
The judge acknowledged Forman’s argument that disclosure of social media materials posted under private settings was an “unjustified invasion of privacy”, but said that other private materials relevant to litigation – including medical records – can be ordered for disclosure.

DiFiore also noted that, although the court assumed, for the purposes of resolving the case, that posts set to “private” should be characterised as such, there was “significant controversy” about this.

“Views range from the position taken by plaintiff that anything shielded by privacy settings is private, to the position taken by one commentator that anything contained in a social media website is not ‘private’,” she pointed out in a footnote.

Source: Roses are red, Facebook is blue. Think private means private? More fool you • The Register

Thanks to “consent” buried deep in sales agreements, car manufacturers are tracking tens of millions of US and EU cars

Millions of new cars sold in the US and Europe are “connected,” having some mechanism for exchanging data with their manufacturers after the cars are sold; these cars stream or batch-upload location data and other telemetry to their manufacturers, who argue that they are allowed to do virtually anything they want with this data, thanks to the “explicit consent” of the car owners — who signed a lengthy contract at purchase time that contained a vague and misleading clause deep in its fine-print.

Car manufacturers are mostly warehousing this data (leaving it vulnerable to leaks and breaches, search-warrants, government hacking and unethical employee snooping), and can’t articulate why they’re saving it or how they use it.

Much of this data ends up in “marketplaces” where data-sets from multiple auto-makers are merged, made uniform, and given identifiers that allow them to be cross-referenced with the massive corporate data-sets that already exist, and then offered on the open market to any bidder.

Source: Thanks to “consent” buried deep in sales agreements, car manufacturers are tracking tens of millions of US cars / Boing Boing

Microsoft whips out tool so you can measure Windows 10’s data-slurping creepiness

The software giant has produced a tool that’s claimed to show users how much personal information its Windows 10 operating system collects and sends back to Redmond for diagnostics. The application is dubbed Diagnostic Data Viewer, and is free from the Windows Store. It reveals that stuff like the computer’s device name, OS version, and serial number, as well as more detailed records such as installed apps, preference settings, and details on each application’s usage, are beamed back to Microsoft.
[…]
Microsoft says the Diagnostic Data Viewer will run separately from the Windows Privacy Dashboard that is bundled with Windows 10. That app will also be upgraded to provide users with more information on data collection, including activity history for the user’s Microsoft account.

Microsoft is also planning an update to the app to allow users to export dashboard reports, view media consumption information, and delete reported data (for some reason this isn’t already allowed).

The Dashboard and Data Viewer apps arrive after Microsoft was taken to task by governments for what many saw as overly intrusive data collection by Windows 10.

Source: Microsoft whips out tool so you can measure Windows 10’s data-slurping creepiness • The Register

US House reps green-light Fourth Amendment busting spy program

The US House of Representatives has passed a six-year extension to the controversial Section 702 spying program, rejecting an amendment that would have required the authorities to get a warrant before searching for information on US citizens.

The 256-164 vote effectively retains the status quo and undermines a multi-year effort to bring accountability to a program that critics argue breaks the Constitution. A bipartisan substitute amendment put forward by House reps Justin Amash (R-MI) and Zoe Lofgren (D-CA) and supported by both ends of the political spectrum was defeated 233-183.
[…]
The already tense atmosphere in Washington DC over the issue was heightened when President Trump tweeted his apparent support of critics of the program just moments after the Amash-Lofgren amendment was discussed on Fox News.

Source: US House reps green-light Fourth Amendment busting spy program • The Register

OnePlus Android mobes’ clipboard app caught phoning home to China

OnePlus has admitted that the clipboard app in a beta build of its Android OS was beaming back mystery data to a cloud service in China.

Someone running the latest test version of OnePlus’s Oreo-based operating system revealed in its support forums that unusual activity from the built-in clipboard manager had been detected by a firewall tool.

Upon closer inspection, the punter found that the app had been transmitting information to a block of IP addresses registered to Alibaba, the Chinese e-commerce and cloud hosting giant.
[…]
This should not come as much of a shock to those who follow the China-based OnePlus. In October last year, researchers discovered that OnePlus handsets were collecting unusually detailed reports on user activities, although the manufacturer said at the time it was only hoarding the data for its internal analytics. One month later, it was discovered that some phones had apparently been shipped with a developer kit left active, resulting in the phones sporting a hidden backdoor.

And lest we forget, today’s desktop and mobile operating systems are pretty gung-ho in phoning home information about their users, with Microsoft catching flak for Windows 10 telemetry in particular. ®

Source: OnePlus Android mobes’ clipboard app caught phoning home to China

What’s Slack Doing With Your Data?

More than six million people use Slack daily, spending on average more than two hours each day inside the chat app. For many employees, work life is contingent on Slack, and surely plenty of us use it for more than just, say, work talk. You probably have a #CATS and a women-only channel, and you’ve probably said something privately that you wouldn’t want shared with your boss. But that’s not really up to you.

When you want to have an intimate or contentious chat, you might send a direct message. Or perhaps you and a few others have started a private channel, ensuring that whatever you say is only seen by a handful of people. This may feel like a closed circuit between you and another person—or small group of people—but that space and the little lock symbol aren’t actually emblematic of complete privacy.

Do Slack employees have access to your chats? The short answer is: sort of. The long answer is… below. Can your company peek at your private DMs? It’s entirely possible. Slack’s FAQ pages help elucidate some of these concerns, but at times the answers are frustratingly vague and difficult to navigate. So we dug into it for you. Read more to find out what Slack—and your company—is actually doing with your data.

Source: What’s Slack Doing With Your Data?

The short version is:
Yes, some Slack employees can view your data. Channel owners can see everything in a channel, as well as direct messages. Slack hands your data to law enforcement upon request and won’t inform you. The company doesn’t (and says it won’t) sell it to third parties. Deletion is deletion. And Slack, like any other company, can be hacked. Caveat emptor.

How to Stop Apps From Listening in on Your TV Habits (it turns out thousands are)

That innocent-looking mobile game you just downloaded might just have an ulterior motive. Behind the scenes, hundreds of different apps could be using your smartphone’s microphone to figure out what you watch on TV, a new report from The New York Times reveals.
[…]
All of these apps need to get your permission before they can record in the background, so the easiest way to stop them is simply to deny that permission. However, it’s possible that you approved the request without realizing it, or that your kid did it while playing with your phone. In that case, switching it off is pretty easy.

Just head into Settings on your device and check the permissions for the app in question. If the app has microphone access when it doesn’t need to (why would a bowling game need to use your microphone?), just toggle that permission off.
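On Android, the same toggle can also be flipped from the command line. Below is a minimal Python sketch, assuming `adb` is installed and a device is connected; the package name shown is a placeholder, not a real app.

```python
# Sketch: revoke an app's microphone permission over adb.
# Assumes adb is on the PATH and an Android device is connected.
import subprocess

PERMISSION = "android.permission.RECORD_AUDIO"

def revoke_command(package):
    # "pm revoke" removes a runtime permission -- the same effect as
    # flipping the microphone toggle in Settings > Apps > Permissions.
    return ["adb", "shell", "pm", "revoke", package, PERMISSION]

def revoke_microphone(package):
    subprocess.run(revoke_command(package), check=True)

# Example (requires a connected device; package name is hypothetical):
# revoke_microphone("com.example.bowlinggame")
```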

Source: How to Stop Apps From Listening in on Your TV Habits

Web trackers exploit browser login managers

First, a user fills out a login form on the page and asks the browser to save the login. The tracking script is not present on the login page [1]. Then, the user visits another page on the same website which includes the third-party tracking script. The tracking script inserts an invisible login form, which is automatically filled in by the browser’s login manager. The third-party script retrieves the user’s email address by reading the populated form and sends the email hashes to third-party servers.
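The final step described above—hashing the scraped email address—is what turns it into a stable identifier that follows the user across sites and devices. A minimal Python sketch of that step; the MD5 choice, normalization, and the `email_identifier` helper are illustrative, not taken from the actual scripts.

```python
# Sketch of the tracker's hashing step: the email address scraped from
# the autofilled form is hashed into a stable cross-site identifier.
import hashlib

def email_identifier(email):
    # Normalise first so "Alice@Example.com" and " alice@example.com "
    # hash to the same identifier.
    normalised = email.strip().lower()
    return hashlib.md5(normalised.encode("utf-8")).hexdigest()

# The same person yields the same identifier regardless of casing.
assert email_identifier("Alice@Example.com") == email_identifier("alice@example.com")
```

Because the hash is deterministic, any third party holding the same email list can recompute the identifiers and link them back to real addresses, which is why hashing offers little real anonymity here.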

We found two scripts using this technique to extract email addresses from login managers on the websites which embed them. These addresses are then hashed and sent to one or more third-party servers. These scripts were present on 1,110 of the Alexa top 1 million sites. The process of detecting these scripts is described in our measurement methodology in Appendix 1. We provide a brief analysis of each script in the sections below.

Source: No boundaries for user identities: Web trackers exploit browser login managers

Ghostery, uBlock, Privacy Badger lead the anti-tracking browser extensions

A group of researchers in France and Japan say RequestPolicyContinued and NoScript have the toughest policies, while Ghostery and uBlock Origin offer good blocking performance and a better user experience.

The study also gave a nod to the EFF’s Privacy Badger, which uses heuristics rather than block lists, but once trained is nearly as good as Ghostery or uBlock, demonstrating that its heuristics are reliable.

Source: Ghostery, uBlock lead the anti-track pack

How to Track a Cellphone Without GPS—or Consent

Using only data that can be legally collected by an app developer without the consent of a cellphone’s owner, researchers have been able to produce a privacy attack that can accurately pinpoint a user’s location and trajectory without accessing the device’s Global Positioning System (GPS). And while the ramifications of this ability falling into the wrong hands are distressing, the way in which they pulled it off is nothing short of genius.
[…]
In fact, all you really need is your phone’s internal compass, an air pressure reading, a few free-to-download maps, and a weather report.

Your cellphone comes equipped with an amazing array of compact sensors that are more or less collecting information about your environment at all times. An accelerometer can tell how fast you’re moving; a magnetometer can detect your orientation in relation to true north; and a barometer can measure the air pressure in your surrounding environment. Your phone also freely offers up a slew of non-sensory data such as your device’s IP address, timezone, and network status (whether you’re connected to Wi-Fi or a cellular network).

All of this data can be accessed by any app you download without the type of permissions required to access your contact lists, photos, or GPS. Combined with publicly available information, such as weather reports, airport specification databases, and transport timetables, this data is enough to accurately pinpoint your location—regardless of whether you’re walking, traveling by plane, train, or automobile.
[…]
To track a user, you first need to determine what kind of activity they’re performing. It’s easy enough to tell if a person is walking versus riding in a car, speed being the discriminant factor; but also, when you’re walking you tend to move in one direction, while your phone is held in a variety of different positions. In a car, you make sudden stops (when you brake) and specific types of turns—around 90 degrees—that can be detected using your phone’s magnetometer. People who travel by plane will rapidly change time zones; the air pressure on a plane also changes erratically, which can be detected by a cellphone’s barometer. When you ride a train, you tend to accelerate in a direction that doesn’t significantly change. In other words, determining your mode of travel is relatively simple.
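As a toy illustration of that activity-detection step, the heuristics above can be sketched as a simple classifier. The thresholds here are my own guesses for illustration, not the researchers’ actual parameters.

```python
# Toy activity classifier: guess the mode of travel from a few sensor
# summaries, following the heuristics described in the article.
# Thresholds are illustrative, not from the PinMe paper.
def classify_activity(speed_mps, pressure_change_pa, timezone_changed):
    if timezone_changed or abs(pressure_change_pa) > 2000:
        return "plane"    # rapid cabin-pressure swings / new timezone
    if speed_mps > 25:
        return "train"    # sustained high speed, little change in heading
    if speed_mps > 5:
        return "car"      # braking stops and ~90-degree turns
    return "walking"      # slow, single direction, varied phone position

print(classify_activity(1.4, 0, False))     # walking
print(classify_activity(15, 0, False))      # car
print(classify_activity(40, -3000, True))   # plane
```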

The fact that your cellphone offers up your time zone as well as the last IP address you were connected to really narrows things down—geolocating IP addresses is very easy to do and can at least reveal the last city you were in—but to determine your exact location, with GPS-like precision, a wealth of publicly-available data is needed. To estimate your elevation—i.e., how far you are above sea level—PinMe gathers air pressure data provided freely by the Weather Channel and compares it to the reading on your cellphone’s barometer. Google Maps and open-source data offered by US Geological Survey Maps also provide comprehensive data regarding changes in elevation across the Earth’s surface. And we’re talking about minor differences in elevation from one street corner to the next.
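The elevation estimate rests on the standard barometric formula, which converts a pressure reading into an altitude given a sea-level reference. A minimal sketch, using the standard-atmosphere constant as the reference rather than a live weather-report value:

```python
# Convert a phone's barometer reading (hPa) into altitude above sea
# level using the international barometric formula. In the attack, the
# sea-level reference would come from a public weather report.
def pressure_to_altitude_m(pressure_hpa, sea_level_hpa=1013.25):
    # Valid approximation in the lower atmosphere (troposphere).
    return 44330.0 * (1.0 - (pressure_hpa / sea_level_hpa) ** (1.0 / 5.255))

print(round(pressure_to_altitude_m(1013.25)))  # 0 (at reference pressure)
print(round(pressure_to_altitude_m(899.0)))    # roughly 1,000 m
```

Comparing this altitude against elevation maps is what lets the app rule out whole neighborhoods street corner by street corner.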

Upon detecting a user’s activity (flying, walking, etc.) the PinMe app uses one of four algorithms to begin estimating a user’s location, narrowing down the possibilities until its error rate drops to zero, according to the peer-reviewed research. Let’s say, the app decides you’re traveling by car. It knows your elevation, it knows your timezone, and if you haven’t left the city you’re in since you last connected to Wi-Fi, you’re pretty much borked.

With access to publicly available maps and weather reports, and a phone’s barometer and magnetometer (which provides a heading), it’s only a matter of turns. When PinMe detected one of the researchers driving in Philadelphia during a test run, for example, the researcher only had to make 12 turns before the app knew exactly where they were in the city. With each turn, the number of possible locations of the vehicle dwindles. “[A]s the number of turns increases, PinMe collects more information about the user’s environment, and as a result it is more likely to find a unique driving path on the map,” the researchers wrote.
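The turn-matching idea can be sketched with a toy map: each observed turn filters the candidate routes until only one remains. The routes below are invented for illustration; a real map would hold thousands of candidate paths.

```python
# Toy version of turn-sequence matching: keep only the map routes whose
# turn sequence is consistent with the turns observed so far.
candidate_routes = {
    "route_a": ["L", "R", "R", "L"],   # invented turn sequences
    "route_b": ["L", "R", "L", "L"],
    "route_c": ["R", "R", "R", "L"],
}

def match_turns(observed):
    # A route survives if its turn sequence starts with what we've seen.
    return [name for name, turns in candidate_routes.items()
            if turns[:len(observed)] == observed]

print(match_turns(["L"]))            # ['route_a', 'route_b']
print(match_turns(["L", "R", "R"]))  # ['route_a'] -- uniquely identified
```

With every turn the survivor list shrinks, which is exactly why 12 turns were enough to pin down a unique path through Philadelphia.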

Source: How to Track a Cellphone Without GPS—or Consent