Facebook on Tuesday disclosed that as many as 100 software developers may have improperly accessed user data, including the names and profile pictures of people in specific groups on the social network.
The company recently discovered that some apps retained access to this type of user data despite making changes to its service in April 2018 to prevent this, Facebook said in a blog post. The company said it has removed this access and reached out to 100 developer partners who may have accessed the information. Facebook said that at least 11 developer partners accessed this type of data in the last 60 days.
“Although we’ve seen no evidence of abuse, we will ask them to delete any member data they may have retained and we will conduct audits to confirm that it has been deleted,” the company said in the blog post.
The company did not say how many users were affected.
Facebook has been restricting software developer access to its user data following reports in March 2018 that political consulting firm Cambridge Analytica had improperly accessed the data of 87 million Facebook users, potentially to influence the outcome of the 2016 U.S. presidential election.
Click “Activity controls” from the left-hand sidebar.
Scroll down to the data type you wish to manage, then select “Manage Activity.”
On this next page, click on “Choose how long to keep” under the calendar icon.
Select the auto-deletion time you wish (three or 18 months), or you can choose to delete your data manually.
Click “Next” to save your changes.
Repeat these steps for each of the types of data you want to be auto-deleted. For your Location History in particular, you’ll need to click on “Today” in the upper-left corner first, and then click on the gear icon in the lower-right corner of your screen. Then, select “Automatically delete Location History,” and pick a time.
The vast majority of technology, media and telecom (TMT) companies want to monetise customer data, but are concerned about regulations such as Europe’s GDPR, according to research from law firm Simmons & Simmons.
The outfit surveyed 350 global business leaders in the TMT sector to understand their approach to data commercialisation. It found that 78 per cent of companies have some form of data commercialisation in place but only 20 per cent have an overarching plan for its use.
Alex Brown, global head of TMT Sector at Simmons & Simmons, observed that the firm’s clients are increasingly seeking advice on the legal ways they can monetise data. He said this could mean internal use, such as applying insights into customer behaviour to improve services, or selling anonymised data to third parties.
One example of data monetisation within the sector is Telefónica’s Smart Steps business, which uses “fully anonymised and aggregated mobile network data to measure and compare the number of people visiting an area at any time”.
That information is then sold on to businesses to provide insight into their customer base.
Brown said: “All mobile network operators know your location because the phone is talking to the network, so through that they know a lot about people’s movement. That aggregated data could be used by town planners, transport networks, [or] retailers [to] work out [the] best place to site [a] new store.”
However, he added: “There is a bit of a data paralysis at the moment. GDPR and what we’ve seen recently in terms of enforcement – albeit related to breaches – and the Google fine in France… has definitely dampened some innovation.”
Earlier this year France’s data protection watchdog fined Google €50m for breaching European Union online privacy rules, the biggest penalty levied against a US tech giant. It said Google lacked transparency and clarity in the way it informs users about its handling of personal data and failed to properly obtain their consent for personalised ads.
But Brown pointed out that as long as privacy policies are properly laid out and the data is fully anonymised, companies wanting to make money off data should not fall foul of GDPR.
A confidential Sidewalk Labs document from 2016 lays out the founding vision of the Google-affiliated development company, which included having the power to levy its own property taxes, track and predict people’s movements and control some public services.
The document, which The Globe and Mail has seen, also describes how people living in a Sidewalk community would interact with and have access to the space around them – an experience based, in part, on how much data they’re willing to share, and which could ultimately be used to reward people for “good behaviour.”
Known internally as the “yellow book,” the document was designed as a pitch book for the company, and predates Sidewalk’s relationship and formal agreements with Toronto by more than a year. Peppered with references to Disney theme parks and noted futurist Buckminster Fuller, it says Sidewalk intended to “overcome cynicism about the future.”
But the 437-page book documents how much private control of city services and city life Google parent company Alphabet Inc.’s leadership envisioned when it created the company.
[…]
“The ideas contained in this 2016 internal paper represent the result of a wide-ranging brainstorming process very early in the company’s history,” Sidewalk spokesperson Keerthana Rang said. “Many, if not most, of the ideas it contains were never under consideration for Toronto or discussed with Waterfront Toronto and governments. The ideas that we are actually proposing – which we believe will achieve a new model of inclusive urban growth that makes housing more affordable for families, creates new jobs for residents, and sets a new standard for a healthier planet – can all be found at sidewalktoronto.ca.”
[…]
To carry out its vision and planned services, the book states Sidewalk wanted to control its area much like Disney World does in Florida, where in the 1960s it “persuaded the legislature of the need for extraordinary exceptions.” This could include granting Sidewalk taxation powers. “Sidewalk will require tax and financing authority to finance and provide services, including the ability to impose, capture and reinvest property taxes,” the book said. The company would also create and control its own public services, including charter schools, special transit systems and a private road infrastructure.
Sidewalk’s early data-driven vision also extended to public safety and criminal justice.
The book mentions both the data-collection opportunities for police forces (Sidewalk notes it would ask for local policing powers similar to those granted to universities) and the possibility of “an alternative approach to jail,” using data from so-called “root-cause assessment tools.” This would guide officials in determining a response when someone is arrested, such as sending someone to a substance abuse centre. The overall criminal justice system and policing of serious crimes and emergencies would be “likely to remain within the purview of the host government’s police department,” however.
Data collection plays a central role throughout the book. Early on, the company notes that a Sidewalk neighbourhood would collect real-time position data “for all entities” – including people. The company would also collect a “historical record of where things have been” and “about where they are going.” Furthermore, unique data identifiers would be generated for “every person, business or object registered in the district,” helping devices communicate with each other.
There would be a quid pro quo to sharing more data with Sidewalk, however. The document describes a tiered level of services, where people willing to share data can access certain perks and privileges not available to others. Sidewalk visitors and residents would be “encouraged to add data about themselves and connect their accounts, either to take advantage of premium services like unlimited wireless connectivity or to make interactions in the district easier,” it says.
Shoshana Zuboff, the Harvard University professor emerita whose book The Age of Surveillance Capitalism investigates the way Alphabet and other big-tech companies are reshaping the world, called the document’s revelations “damning.” The community Alphabet sought to build when it launched Sidewalk Labs, she said, was like a “for-profit China” that would “use digital infrastructure to modify and direct social and political behaviour.”
While Sidewalk has since moved away from many of the details in its book, Prof. Zuboff contends that Alphabet tends to “say what needs be said to achieve commercial objectives, while specifically camouflaging their actual corporate strategy.”
[…]
Those choosing to remain anonymous would not be able to access all of the area’s services: Automated taxi services would not be available to anonymous users, and some merchants might be unable to accept cash, the book warns.
The document also describes reputation tools that would lead to a “new currency for community co-operation,” effectively establishing a social credit system. Sidewalk could use these tools to “hold people or businesses accountable” while rewarding good behaviour, such as by rewarding a business’s good customer service with an easier or cheaper renewal process on its licence.
This “accountability system based on personal identity” could also be used to make financial decisions.
“A borrower’s stellar record of past consumer behaviour could make a lender, for instance, more likely to back a risky transaction, perhaps with the interest rates influenced by digital reputation ratings,” it says.
The company wrote that it would own many of the sensors it deployed in the community, foreshadowing a battle over data control that has loomed over the Toronto project.
Facebook has ended its appeal against the UK Information Commissioner’s Office and will pay the outstanding £500,000 fine for breaches of data protection law relating to the Cambridge Analytica scandal.
Prior to today’s announcement, the social network had been appealing against the fine, alleging bias and requesting access to ICO documents related to the regulator’s decision making. The ICO, in turn, was appealing a decision that it should hand over these documents.
The issue for the watchdog was the misuse of UK citizens’ Facebook profile information, specifically the harvesting and subsequent sale of data scraped from their profiles to Cambridge Analytica, the controversial British consulting firm used by US prez Donald Trump’s election campaign.
The app that collected the data was “thisisyourdigitallife”, created by Cambridge developer Aleksandr Kogan. It hoovered up Facebook users’ profiles, dates of birth, current city, photos in which those users were tagged, pages they had liked, posts on their timeline, friends’ lists, email addresses and the content of Facebook messages. The data was then processed in order to create a personality profile of the user.
“Given the way our platform worked at the time,” Zuck has said, “this meant Kogan was able to access tens of millions of their friends’ data”. Facebook has always claimed it learned of the data misuse from news reports, though this has been disputed.
Both sides will now end the legal fight and Facebook will pay the ICO a fine but make no admission of liability or guilt. The money is not kept by the data protection watchdog but goes to the Treasury consolidated fund and both sides will pay their own costs. The ICO spent an eye-watering £2.5m on the Facebook probe.
VP of product Scott Williamson announced on 10 October that “to make GitLab better faster, we need more data on how users are using GitLab”.
GitLab is a web application that runs on Linux, with options for self-hosting or using the company’s cloud service. It is open source, with both free and licensed editions.
Williamson said that while nothing was changing with the free self-hosted Community Edition, the hosted and licensed products would all now “include additional JavaScript snippets (both open source and proprietary) that will interact with both GitLab and possibly third-party SaaS telemetry services (we will be using Pendo)”. The only opt-out was to be support for the Do Not Track browser mechanism.
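GitLab never published the server-side mechanics, but honoring the Do Not Track header is straightforward. Below is a minimal, hypothetical sketch (using Flask; the route and asset names are invented, not GitLab’s) of gating a telemetry snippet on the `DNT: 1` request header:

```python
# Hypothetical sketch of honoring Do Not Track server-side; GitLab's actual
# implementation was never published, and the route/asset names are invented.
from flask import Flask, request, render_template_string

app = Flask(__name__)

PAGE = """<html><body>
{% if telemetry %}<script src="/assets/telemetry.js"></script>{% endif %}
<h1>Dashboard</h1>
</body></html>"""

@app.route("/")
def index():
    # Browsers with Do Not Track enabled send the header "DNT: 1";
    # skip embedding the telemetry snippet for those users.
    dnt_enabled = request.headers.get("DNT") == "1"
    return render_template_string(PAGE, telemetry=not dnt_enabled)

if __name__ == "__main__":
    app.run()
```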
GitLab customers and even some staff were not pleased. For example, Yorick Peterse, a GitLab staff developer, said telemetry should be opt-in and that the requisite update to the terms of service would break some API usage (because bots do not know how to accept terms of service), adding: “We have plenty of customers who would not be able to use GitLab if it starts tracking data for on-premises installations.”
There is more background in a GitLab issue that concerns adding the identity of the user to the Snowplow analytics service used by GitLab.
“This effectively changes our Snowplow integration from being an anonymous aggregated thing to a thing that tracks user interaction,” engineering manager Lukas Eipert said back in July. “Ethically, I have problems with this and legally this could have a big impact privacy wise (GDPR). I hereby declare my highest degree of objection to this change that I can humanly express.”
On the other hand, GitLab CFO Paul Machle said: “This should not be an opt in or an opt out. It is a condition of using our product. There is an acceptance of terms and the use of this data should be included in that.”
On 23 October, an email was sent to GitLab customers announcing the changes.
Yesterday, however, CEO Sid Sijbrandij put the plans on hold, saying: “Based on considerable feedback from our customers, users, and the broader community, we reversed course the next day and removed those changes before they went into effect. Further, GitLab will commit to not implementing telemetry in our products that sends usage data to a third-party product analytics service.” Sijbrandij also promised a review of what went wrong. “We will put together a new proposal for improving the user experience and share it for feedback,” he said.
Despite this embarrassing backtrack, the incident has demonstrated that GitLab does indeed have an open process, with more internal discussion on view than would be the case with most companies. Nevertheless, the fact that GitLab came so close to using personally identifiable tracking without specific opt-in has tarnished its efforts to appear more community-driven than alternatives like Microsoft-owned GitHub.
Google’s Senior Vice President of Devices & Services, Rick Osterloh, broke the news on the official Google blog, saying:
Over the years, Google has made progress with partners in this space with Wear OS and Google Fit, but we see an opportunity to invest even more in Wear OS as well as introduce Made by Google wearable devices into the market. Fitbit has been a true pioneer in the industry and has created engaging products, experiences and a vibrant community of users. By working closely with Fitbit’s team of experts, and bringing together the best AI, software and hardware, we can help spur innovation in wearables and build products to benefit even more people around the world.
Earlier this week, on October 28, a report from Reuters indicated that Google was bidding to purchase Fitbit. It’s a big move, but it’s also one that makes good sense.
Google’s Wear OS wearable platform has been in something of a rut for the last few years. The company rebranded Android Wear as Wear OS in 2018 to revitalize its image, but the hardware offerings have still been pretty ho-hum. Third-party watches like the Fossil Gen 5 have proven to be quite good, but without a proper “Made by Google” smartwatch, and with other major players such as Samsung ignoring the platform, it’s been left to just sort of exist.
The UK government could use facial recognition to verify the age of Brits online “so long as there is an appropriate concern for privacy,” junior minister for Digital, Culture, Media and Sport Matt Warman said.
The minister was responding to an urgent Parliamentary question directed to Culture Secretary Nicky Morgan about the future of Blighty’s online age-verification system, following her announcement this week that the controversial project had been dropped. He indicated the government is still keen to shield kids from adult material online, one way or another.
“In many ways, this is a technology problem that requires a technology solution,” Warman told the House of Commons on Thursday.
“People have talked about whether facial recognition could be used to verify age, so long as there is an appropriate concern for privacy. All of these are things I hope we will be able to wrap up in the new approach, because they will deliver better results for consumers – child or adult alike.”
The government also managed to spend £2.2m on the aforementioned-and-now-shelved proposal to introduce age-verification checks on netizens viewing online pornography, Warman admitted in his response.
For years I’ve gone back and forth over the practice of obscuring license plates in photos on the internet. License plates are already publicly viewable, so what’s the point in obscuring them, right? Well, now I think there actually is a good reason to obscure your license plates in photos, because it appears that Google and Facebook are reading the plates in photos and making the actual license plate alphanumeric sequence searchable. I tested it. It works.
Starting with Google, the way this works is to search for the license plate number using Google Images. That’s it.
In my testing, I started with my own cars that I know have had images of their license plates in Jalopnik articles. For my Nissan Pao, a search of my license plate number brings up an image of my car, from one of my articles, as the first result:
It’s worth noting that the image search results aren’t even trying to differentiate the search term as a license plate; the number sequence has just been tagged to the photo automatically after whatever hidden Google OCR system reads the license plate. This can mean that someone searching a similar sequence of characters could likely end up with a result for your car if enough of those characters match your license plate.
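Google hasn’t documented this pipeline, but the effect is easy to reproduce with off-the-shelf OCR. A minimal sketch, assuming the open-source Tesseract engine stands in for whatever Google runs internally (the file name and plate regex are illustrative):

```python
# Illustrative only: Google's indexing pipeline is not public. This sketch
# shows how any image pipeline could extract a plate-like string with the
# open-source Tesseract OCR engine and attach it to a photo as a search tag.
import re
from PIL import Image
import pytesseract

def plate_candidates(image_path: str) -> list[str]:
    text = pytesseract.image_to_string(Image.open(image_path))
    # Crude pattern for plate-like tokens: blocks of letters/digits,
    # possibly split by a space or hyphen. Real systems use trained detectors.
    return re.findall(r"\b[A-Z0-9]{2,4}[- ]?[A-Z0-9]{2,4}\b", text.upper())

index = {}  # search term -> image paths
for path in ["pao_rear.jpg"]:  # hypothetical photo of a car with visible plate
    for candidate in plate_candidates(path):
        index.setdefault(candidate, []).append(path)
```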
[…]
I just checked a test I did on Facebook earlier today to see if they’re reading and tagging license plates, and, yep, it appears they are:
So, people can type your license plate into Facebook and, if it’s visible in any of your photos, it seems like it’ll show up! Great for you budding stalkers out there!
The takeaway here is that you should just assume your license plate is known and tagged to pictures of your car. Even if you obscure your plate in every image you yourself post, there’s no way to know what images your car and its license plate may be in the background of, meaning if it’s not searchable yet, it likely will be.
I suppose the positive side is that if you see a hit and run or someone’s blocking you in, it’s a lot easier to find out who’s being the jerk. On the negative side, it’s just a reminder that privacy in so many ways is eroding away, and there’s damn little we can do about it.
Today, when you use Wizards Unite or Pokémon Go or any of Niantic’s other apps, your every move is getting documented and stored—up to 13 times a minute, according to the results of a Kotaku investigation. Even players who know that the apps record their location data are usually astonished once they look at just how much they’ve told Niantic about their lives through their footsteps.
For years, users of these technologists’ products—from Google Street View to Pokémon Go—have been questioning how far they’re going with users’ information and whether those users are adequately educated on what they’re giving up and with whom it’s shared. In the process, those technologists have made mistakes, both major and minor, with regard to user privacy. As Niantic summits the world of augmented reality, it’s engineering the future of that big-money field, too. Should what Niantic does with its treasure trove of valuable data remain shrouded in the darkness particular to up-and-coming Silicon Valley darlings, that opacity might become so normalized that users lose any expectation of knowing how they’re being profited from.
Niantic publicly describes itself as a gaming company with an outsized passion for getting gamers outside. Its games, from Ingress to Pokémon Go to Wizards Unite, encourage players to navigate and interact with the real world around them, whether it be tree-lined suburbs, big cities, local landmarks, the Eiffel Tower, strip malls, or statues in the town square. Niantic’s ever-evolving gaming platform closely resembles Google Maps, in part because Niantic spawned from just that.
[…]
At 2019’s GDC, Hanke showed a video titled “Hyper-Reality,” by the media artist Keiichi Matsuda. It’s a dystopian look at a future in which the entire world is slathered with virtual overlays, an assault on the senses that everyone must view through an AR headset if they want to participate in modern society. In the video, the protagonist’s entire field of vision is a spread of neon notifications, apps, and advertisements, all viewed from a seat at the back of a city bus. Their hands swipe across a game they’re playing in augmented reality, while in the background an ad for Starbucks Coffee indicates they won a coupon for a free cup. Push notifications in their periphery indicate three new messages and directions for where to exit the bus. Walking through the aisle, where digital “get off now!” signs indicate it’s their stop, and onto the street, the physical world is annotated with virtual information. The more tasks they accomplish, the more points they receive. The whole world is now one big game. It showed a definitively dystopian vision of a world in which the barriers between IRL and URL have been fully collapsed.
Hanke said that the video made him feel “stressed and nervous.” Calling it a work of “critical design,” he noted that it was meant to question this dystopian future for AR, “a world where you’re tracked everywhere you go, where giant companies know everything about you, your identity is constantly at stake, and the world itself is noisy, and busy and plastered with distractions.”
But when a path appeared in front of the video’s protagonist showing them where to walk, Hanke’s response was: “That looks helpful.”
“Some people would say AR is a bad thing because we’ve seen this vision of how bad it can be,” Hanke said. “The point I want to make to you all is, it doesn’t have to be that way.” He showed an image of the Ferry Building, the 120-year-old piece of classical revival architecture in San Francisco where the company is currently headquartered. Just like in the video, it was overlaid with augmented reality windows showing the building’s history, a public transit schedule, and tabs for nearby restaurants. Hanke described a world where people can better navigate public transit and understand their surroundings because of digital mapping initiatives like Niantic. He talked about the possibility of hologram tour guides in San Francisco, and how they’d rely on a digital map to navigate their surroundings, and about designing shared experiences of Pokémon games in a Pokémon-augmented world.
[…]
Since its 2016 release, Pokémon Go has netted over $2.3 billion. In it, players collect items from PokeStops—also real-life locations and landmarks—so they can catch and collect Pokémon, which spawn around them. Almost immediately, Pokémon Go sparked its own privacy controversy, also blamed on a bug, which involved users giving Niantic a huge number of permissions: contacts, location, storage, camera and, for iPhone users, full Google account access, which was not integral to gameplay. Minnesota senator Al Franken penned a strongly-worded letter to Niantic about it, expressing concern “about the extent to which Niantic may be unnecessarily collecting, using, and sharing a wide range of users’ personal information without their appropriate consent.” Niantic said that the “account creation process on iOS erroneously requests full access permission,” adding that Pokémon Go only got user ID and email address info.
[…]
Players give Wizards Unite permission to track their movement using a combination of GPS, Wi-Fi, and mobile cell tower triangulation. To understand the extent of this location data, Kotaku asked for data from European players who had all filed personal information requests to Niantic under the GDPR, the European digital privacy legislation designed to give EU citizens more control over their personal data. Niantic sent these players all the data it had on them, which the players then shared with Kotaku.
The files we received contained detailed information about the lives of these players: the number of calories they likely burned during a given session, the distance they traveled, the promotions they engaged with. Crucially, each request also contained a large file of timestamped location data, as latitudes and longitudes.
In total, Kotaku analyzed more than 25,000 location records voluntarily shared with us by 10 players of Niantic games. On average, we found that Niantic kept about three location records per minute of gameplay of Wizards Unite, nearly twice as many as it did with Pokémon Go. For one player, Niantic had at least one location record taken during nearly every hour of the day, suggesting that the game was collecting data and sharing it with Niantic even when the player was not playing.
When Kotaku first asked Niantic why Wizards Unite was collecting location data even while the game was not actively being played, its first response was that we must be mistaken, since the game, it said, did not collect data while backgrounded. After we provided Niantic with more information about that player, it got back to us a few days later to let us know that its engineering team “did identify a bug in the Android version of the client code that led it to continue to ping our servers intermittently when the app was still open but had been backgrounded.” The bug, Niantic said, has now been fixed.
Because the location data collected by Wizards Unite and sent to Niantic is so granular, sometimes up to 13 location records a minute, it is possible to discern individual patterns of user behavior as well as intimate details about a player’s life.
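For readers curious how such figures are derived, here is a minimal sketch of the per-minute analysis; the CSV layout is an assumption, since Niantic’s actual GDPR export format isn’t specified here:

```python
# A minimal sketch of the analysis described above: given a GDPR export of
# timestamped location records, count how many records fall in each minute.
# The CSV layout (timestamp,lat,lon) and filename are assumptions; Niantic's
# actual export format may differ.
import csv
from collections import Counter
from datetime import datetime

per_minute = Counter()
with open("location_history.csv", newline="") as f:
    for row in csv.DictReader(f):  # columns assumed: timestamp, lat, lon
        ts = datetime.fromisoformat(row["timestamp"])
        per_minute[ts.replace(second=0, microsecond=0)] += 1

print("Peak sampling rates:", per_minute.most_common(5))
# Kotaku reports bursts of up to 13 records in a single minute.
```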
[…]
Niantic is far from the only company collecting this sort of data. Last year, the New York Times published an exposé on how over 75 companies receive pinpoint-accurate, anonymous location data from phone apps on over 200 million devices. Sometimes, these companies tracked users’ locations over 14,000 times a day. The result was always the same: even though users had signed away their location data by agreeing to user agreements, they generally had no idea that companies were taking such exhaustive notes on what kind of person they are, where they’d been, where they were likely to go next, and whether they’d buy something there.
That Niantic is yet another company that can infer this type of mundane personal information may not be, in itself, surprising. Credit card companies, email providers, cellular services, and a variety of data brokers all have access to your personal information in increasingly opaque ways. Remember when Target figured out that a high school girl was pregnant before her family did?
It’s important to note that the personal data that players requested from Niantic and voluntarily shared with Kotaku is, according to Niantic, not something that a third party could buy from them, or otherwise be allowed to see. “Niantic does not share individual player data with third party sponsored location partners,” a representative said, adding that it uses “additional mechanisms to process the data so that it cannot be connected to an individual.”
Niantic’s Kawai told Kotaku that the anonymized data that Niantic shares with third parties is only in the form of “aggregated stats,” such as “how many people have had access or went to those in-game locations and how many actions people take in those in-game locations, how many PokeStop spins to get items happened on that day and… what unique number of people went to that location.”
“We don’t go any further than that,” he said.
The idea that data can successfully be anonymized has long been a contentious one. In July, researchers at Imperial College London were able to accurately reidentify 99.98 percent of Americans in an “anonymized” dataset. And in 2018, a New York Times investigation found that, when provided raw anonymized location data, companies could identify individuals with or without their consent. In fact, according to experts, it can take just four timestamped location records to specifically identify an individual from a collection of latitudes and longitudes that they have visited.
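A toy example makes the intuition concrete: treat each user’s history as a set of coarse (place, hour) points and count how many users are consistent with a few observations. This is only an illustration of the published result, not the researchers’ method, and the data is invented:

```python
# Illustration of why a few points suffice: count how many users in a dataset
# are consistent with a handful of coarse (place, hour) observations. With
# realistic mobility data the candidate set collapses to one person after
# very few points (de Montjoye et al., "Unique in the Crowd", 2013).
def matching_users(traces: dict[str, set], observations: set) -> list[str]:
    # traces: user id -> set of (cell_id, hour) points they visited
    return [user for user, points in traces.items() if observations <= points]

traces = {  # toy data; real studies use telco-scale antenna records
    "user_a": {("cell_17", 8), ("cell_4", 12), ("cell_17", 18), ("cell_9", 22)},
    "user_b": {("cell_17", 8), ("cell_4", 12), ("cell_2", 18), ("cell_9", 22)},
}
print(matching_users(traces, {("cell_17", 8), ("cell_4", 12), ("cell_17", 18)}))
# -> ['user_a']: three shared points already single out one of the two users
```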
[…]
Niantic makes a staggering amount of money off in-game microtransactions, a reported $1.8 billion in Pokémon Go’s first two years. It also makes money from sponsorships. By late 2017, there were over 35,000 sponsored PokeStops, which players visited over 500 million times. Hanke described foot traffic as the “holy grail of retail businesses” in a 2017 talk to the Mobile World Congress. Some 13,000 of the sponsored stops were Starbucks locations.
[…]
“We have always been transparent about this product and feel it is a much better experience for our players than the kind of video and text ads frequently deployed in other mobile games,” Hanke told Kotaku. He then shared a link to an Ad Age article announcing Pokémon Go’s sponsored locations and detailing its “cost per visit” business model.
Big-money tech companies rarely make money in just one or two ways, and often inconspicuously employ money-making strategies that may be less palatable to privacy-minded consumers. Mobile app companies are notorious for this. One 2017 Oxford study, for example, analyzed 1 million smartphone apps and determined that the median Google Play Store app can share users’ behavioral data with 10 third parties, while one in five can share it with over 20. “Freemium” mobile apps can earn big revenue from sharing data with advertisers—and it’s all completely opaque to users, as a Buzzfeed News report explained in 2018.
A graph illustrating the number of location records captured for one Harry Potter: Wizards Unite user per minute, over the span of a few hours. (Image: Kotaku)
Advertising market research company Emarketer projected that advertisers will spend $29 billion on location-targeted advertising, also referred to as “geoconquesting,” this year. Marketers target and tailor ads for app users in a specific location in real-time, segment a potential audience for an ad by location, learn about consumers based on where they were before they bought something, and connect online ads to offline purchases using location data—another manifestation of “ubiquitous computing.” One of the biggest location-targeted ad companies, GroundTruth, taps data from 120 million unique monthly users to drive people to businesses like Taco Bell, where it recently took credit for 170,000 visits after a location-targeted ad campaign.
[…]
Niantic said it is not in the business of selling user location data. But it will send its users to you. Wizards Unite recently partnered with Simon Malls, which owns over 200 shopping centers, to add “multiple sponsored Inns and Fortresses” at each location, “giving players more XP and more spell energy than at any other non-sponsored location in the U.S.”
[…]
If the goal is to unite the physical with the digital, insights gleaned from how long users loiter outside a Coach store and how long they might look at a Coach Instagram ad could be massively useful to these waning mall brands. Uniting these worlds for a field trip around Tokyo is one thing; uniting them to consolidate digital and physical ad profiles is another.
“This is a hot topic in mall operation—tracking the motion of people within a mall, what stores they’re going to, how long they’re [staying],” said Ron Merriman, a theme park business strategist based in Shanghai (who, he noted after we contacted him for this story, happened to go to business school with Hanke). Merriman says that tracking users in malls, aquariums, and theme parks to optimize merchandising, user experiences, and ad targeting is becoming the norm where he lives in Asia. Retailers polled by Emarketer in late 2018 planned on investing more in proximity and location-based marketing than other emerging, hot-topic technologies like AI.
Apple admits that it sends some user IP addresses to Tencent in the “About Safari & Privacy” section of its Safari settings which can be accessed on an iOS device by opening the Settings app and then selecting “Safari > About Privacy & Security.” Under the title “Fraudulent Website Warning,” Apple says:
“Before visiting a website, Safari may send information calculated from the website address to Google Safe Browsing and Tencent Safe Browsing to check if the website is fraudulent. These safe browsing providers may also log your IP address.”
The “Fraudulent Website Warning” setting is toggled on by default, which means that unless iPhone or iPad users dive two levels deep into their settings and toggle it off, their IP addresses may be logged by Tencent or Google when they use the Safari browser. However, toggling it off makes browsing sessions less secure and leaves users vulnerable to fraudulent websites.
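Apple’s wording (“information calculated from the website address”) matches how Google’s documented Safe Browsing Update API works: the browser checks SHA-256 hash prefixes of URL expressions against a local list and contacts the provider only on a hit, and it is that network request that can expose the client IP. Whether Tencent’s service works identically is an assumption; the sketch below shows only the general hash-prefix approach, with a made-up blocklist:

```python
# Sketch of the hash-prefix scheme Google documents for its Safe Browsing
# Update API; whether Tencent's service works identically is an assumption.
# The blocklist prefix here is made up.
import hashlib

LOCAL_BLOCKLIST_PREFIXES = {bytes.fromhex("a1b2c3d4")}

def needs_server_lookup(url_expression: str) -> bool:
    digest = hashlib.sha256(url_expression.encode()).digest()
    # Only when a local 4-byte prefix matches does the browser contact the
    # provider for the full-hash list, which is what can log the client IP.
    return digest[:4] in LOCAL_BLOCKLIST_PREFIXES

print(needs_server_lookup("evil.example/phish"))  # False for this toy list
```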
[…]
Even if people install a third-party browser on their iOS device, viewing web pages inside apps still opens them in an integrated form of Safari called Safari View Controller instead of the third-party browser. Tapping links inside apps also opens them in Safari rather than a third-party browser. These behaviors that force people back into Safari make it difficult for people to avoid the Safari browser completely when using an iPhone or iPad.
Citing sources familiar with the program, Bloomberg reported Thursday that “dozens” of workers for the e-commerce giant who are based in Romania and India are tasked with reviewing footage collected by Cloud Cams—Amazon’s app-controlled, Alexa-compatible indoor security devices—to help improve AI functionality and better determine potential threats. Bloomberg reported that at one point, these human workers were responsible for reviewing and annotating roughly 150 security snippets of up to 30 seconds in length each day that they worked.
Two sources who spoke with Bloomberg told the outlet that some clips depicted private imagery, such as what Bloomberg described as “rare instances of people having sex.” An Amazon spokesperson told Gizmodo that reviewed clips are submitted either through employee trials or customer feedback submissions for improving the service.
[…]
So to be clear, customers are sharing clips for troubleshooting purposes, but they aren’t necessarily aware of what happens with that clip after doing so.
More troubling, however, is an accusation from one source who spoke with Bloomberg that some of these human workers tasked with annotating the clips may be sharing them with members outside of their restricted teams, despite the fact that reviews happen in a restricted area that prohibits phones. When asked about this, a spokesperson told Gizmodo by email that Amazon’s rules “strictly prohibit employee access to or use of video clips submitted for troubleshooting, and have a zero tolerance policy for abuse of our systems.”
[…]
To be clear, it’s not just Amazon who’s been accused of allowing human workers to listen in on whatever is going on in your home. Motherboard has reported that both Xbox recordings and Skype calls are reviewed by human contractors. Apple, too, was accused of capturing sensitive recordings that contractors had access to. The fact is these systems just aren’t ready for primetime and need human intervention to function and improve—a fact that tech companies have successfully downplayed in favor of appearing to be magical wizards of innovation.
Twitter says it was just an accident that caused the microblogging giant to let advertisers use private information to better target their marketing materials at users.
The social networking giant on Tuesday admitted to an “error” that let advertisers have access to the private information customers had given Twitter in order to place additional security protections on their accounts.
“We recently discovered that when you provided an email address or phone number for safety or security purposes (for example, two-factor authentication) this data may have inadvertently been used for advertising purposes, specifically in our Tailored Audiences and Partner Audiences advertising system,” Twitter said.
“When an advertiser uploaded their marketing list, we may have matched people on Twitter to their list based on the email or phone number the Twitter account holder provided for safety and security purposes. This was an error and we apologize.”
Twitter assures users that no “personal” information was shared, though we’re not sure what Twitter would consider “personal information” if your phone number and email address do not meet the bar.
The FBI routinely misused a database, gathered by the NSA with the specific purpose of searching for foreign intelligence threats, by searching it for everything from vetting to spying on relatives.
In doing so, it not only violated the law and the US constitution but knowingly lied to the faces of congressmen who were asking the intelligence services about this exact issue at government hearings – hearings intended to determine whether additional safeguards needed to be added to the program.
That is the upshot of newly declassified rulings of the secret FISC court that decides issues of spying and surveillance within the United States.
On Tuesday, a year-old ruling [PDF] that remains heavily redacted confirmed that everything privacy advocates and a number of congressmen – particularly Senator Ron Wyden (D-OR) – feared about the program was true, but worse.
Even though the program in question – Section 702 – is specifically designed to let US government agencies search only for evidence of foreign intelligence threats, the FBI gave itself carte blanche to search the same database for US citizens by stringing together a series of ridiculous legal justifications about data being captured “incidentally” and subsequent queries of that data not requiring a warrant because it had already been gathered.
Despite that situation, the FBI repeatedly assured lawmakers and the courts that it was using its powers in a very limited way. Senator Wyden was not convinced and used his position to ask questions about the program, the answers to which raised ever greater concerns.
For example, while the NSA was able to outline the process by which its staff was allowed to make searches on the database, including who was authorized to dig further, and to give a precise figure for how many searches there had been, the FBI claimed it was literally unable to do either.
Free for all
Any FBI agent was allowed to search the database, it emerged under questioning; any FBI agent was allowed to de-anonymize the data; and the FBI claimed it did not have a system to measure the number of search requests its agents carried out.
In a year-long standoff between Senator Wyden and the Director of National Intelligence, the government told Congress it was not able to produce a figure for the number of US citizens whose details had been brought up in searches – something that likely violated the Fourth Amendment.
Today’s release of the FISC secret opinion reveals that giving the FBI virtually unrestricted access to the database led to exactly the sort of behavior that people were concerned about: a vast number of searches, including many that were not remotely justified.
For example, the DNI told Congress that in 2016, the NSA had carried out 30,355 searches on US persons within the database’s metadata and 2,280 searches on the database’s content. The CIA had carried out 2,352 searches on content for US persons in the same 12-month period. The FBI said it had no way to measure the number of searches it ran.
But that, it turns out, was a bald-faced lie. We now know that the FBI carried out 6,800 queries of the database in a single day in December 2017 using social security numbers. In other words, the FBI was using the NSA’s database at least 80 times more frequently than the NSA itself.
The FBI’s use of the database – which, again, the law specifies may be used only for foreign intelligence matters – was completely routine. As a result, agents started using it all the time for anything connected to their work, and sometimes their personal lives.
In the secret court opinion, now made public (but, again, still heavily redacted), the government was forced to concede that there were “fundamental misunderstandings” within the FBI staff over what criteria they needed to meet before carrying out a search.
Attorney General Bill Barr, along with officials from the United Kingdom and Australia, is set to publish an open letter to Facebook CEO Mark Zuckerberg asking the company to delay plans for end-to-end encryption across its messaging services until it can guarantee the added privacy does not reduce public safety.
A draft of the letter, dated Oct. 4, is set to be released alongside the announcement of a new data-sharing agreement between law enforcement in the US and the UK; it was obtained by BuzzFeed News ahead of its publication.
Signed by Barr, UK Home Secretary Priti Patel, acting US Homeland Security Secretary Kevin McAleenan, and Australian Minister for Home Affairs Peter Dutton, the letter raises concerns that Facebook’s plan to build end-to-end encryption into its messaging apps will prevent law enforcement agencies from finding illegal activity conducted through Facebook, including child sexual exploitation, terrorism, and election meddling.
The Trump administration is moving to start testing the DNA of people detained by U.S. immigration officers, according to reports of a call on Wednesday between senior Department of Homeland Security (DHS) officials and reporters.
Justice Department officials are reportedly developing a new rule that would allow immigration officers to begin collecting the private genetic information of those being held in the more than 200 prison-like facilities spread across the U.S.
The New York Times reported that Homeland Security officials said the testing is part of a plan to root out “fraudulent family units.” Children and people applying for asylum at legal ports of entry may be tested under the proposed rule, which is likely to elicit strong concerns from privacy and immigration advocates in coming days.
The officials also said the DNA of U.S. citizens mistakenly booked in the facilities could be collected, according to the Times.
In a court case against Planet49, the EU’s top court has ruled that you can’t start collecting data just by showing a warning that you are doing so, or by having a pre-selected tickbox stating it’s OK to collect data. The user has to actually go and tick the box or click OK before any data collection is allowed.
The ruling states that “the consent referred to in those provisions is not validly constituted if, in the form of cookies, the storage of information or access to information already stored in a website user’s terminal equipment is permitted by way of a pre-checked checkbox which the user must deselect to refuse his or her consent.”
This is a good thing which fights off dark patterning – forcing users into things they don’t consent to or understand, of which there is more than enough, thank you very much.
Microsoft has annoyed some of its 900 million Windows 10 device users after apparently removing the ‘Use offline account’ option as part of its effort to herd users towards its cloud-based Microsoft Account.
The offline local account is specific to one device, while the Microsoft Account can be used to log in to multiple devices and comes with the benefit of Microsoft’s recent work on passwordless authentication with Windows Hello.
The local account doesn’t require an internet connection or an email address – just a username and password that are stored on the PC.
[…]
A user on a popular Reddit thread notes that the local account option is now invisible if the device is connected to the internet.
“Either run the setup without being connected to the internet, or type in a fake phone number a few times and it will give you the prompt to create a local account,” Froggyowns suggested as a solution.
So there is a way around the obstacle but as Reddit user Old_Traveller noted: “It’s such a dick move. I’ll never tie my main OS with an online account.”
[…]
As a user on Hacker News wrote, Microsoft has changed the name of the local account option to ‘Domain join instead’, which then allows admins to create an offline account.
Windows 10 users are accusing Microsoft of employing ‘dark-pattern’ techniques to usher them off local accounts, referring to design tricks that software makers use to nudge users into choosing an option that benefits the seller.
We initially identified apps for investigation based on how many users they had and how much data they could access. Now, we also identify apps based on signals associated with an app’s potential to abuse our policies. Where we have concerns, we conduct a more intensive examination. This includes a background investigation of the developer and a technical analysis of the app’s activity on the platform. Depending on the results, a range of actions could be taken from requiring developers to submit to in-depth questioning, to conducting inspections or banning an app from the platform.
Our App Developer Investigation is by no means finished. But there is meaningful progress to report so far. To date, this investigation has addressed millions of apps. Of those, tens of thousands have been suspended for a variety of reasons while we continue to investigate.
It is important to understand that the apps that have been suspended are associated with about 400 developers. This is not necessarily an indication that these apps were posing a threat to people. Many were not live but were still in their testing phase when we suspended them. It is not unusual for developers to have multiple test apps that never get rolled out. And in many cases, the developers did not respond to our request for information so we suspended them, honoring our commitment to take action.
In a few cases, we have banned apps completely. That can happen for any number of reasons including inappropriately sharing data obtained from us, making data publicly available without protecting people’s identity or something else that was in clear violation of our policies. We have not confirmed other instances of misuse to date other than those we have already notified the public about, but our investigation is not yet complete. We have been in touch with regulators and policymakers on these issues. We’ll continue working with them as our investigation continues.
Whenever you sign up for a new app or service you probably are also agreeing to a new privacy policy. You know, that incredibly long block of text you scroll quickly by without reading?
Guard is a site that uses AI to read epically long privacy policies and then highlight any aspects of them that might be problematic.
Once it reads through a site or app’s privacy policy it gives the service a grade based on that policy as well as makes a recommendation on whether or not you should use it. It also brings in news stories about any scandals associated with a company and information about any security threats.
Twitter, for instance, has a D rating on the service. Guard recommends you avoid that app. The biggest threat? The company’s privacy policy says that it can sell or transfer your information.
For now, you’re limited to seeing ratings for only the services Guard has decided to analyze, which include most of the major apps out there like YouTube, Reddit, Spotify, and Instagram. However, if you’re interested in a rating for a particular app, you can submit it to the service and request that it be analyzed.
As the list of supported services grows, this could become an even more solid resource for looking into what you’re using on your phone or computer and understanding how your data is being used.
Tesco has shuttered its parking validation web app after The Register uncovered tens of millions of unsecured ANPR images sitting in a Microsoft Azure blob.
The images consisted of photos of cars taken as they entered and left 19 Tesco car parks spread across Britain. Visible and highlighted were the cars’ numberplates, though drivers were not visible in the low-res images seen by The Register.
Used to power the supermarket’s outsourced parkshopreg.co.uk website, the Azure blob had no login or authentication controls. Tesco admitted to The Register that “tens of millions” of timestamped images were stored on it, adding that the images had been left exposed after a data migration exercise.
Ranger Services, which operated the Azure blob and the parkshopreg.co.uk web app, said it had nothing to add and did not answer any questions put to it by The Register. We understand it is still investigating the extent of the breach. The firm recently merged with rival parking operator CP Plus and renamed itself GroupNexus.
[…]
The Tesco car parks affected by the breach include Braintree, Chelmsford, Chester, Epping, Fareham, Faversham, Gateshead, Hailsham, Hereford, Hove, Hull, Kidderminster, Woolwich, Rotherham, Sale (Cheshire), Slough, Stevenage, Truro, Walsall and Weston-super-Mare.
The web app compared the store-generated code with the ANPR images to decide whom to issue with parking charges. Ranger Services has pulled parkshopreg.co.uk offline, with its homepage now defaulting to a 403 error page.
[…]
A malicious person could use the data in the images to create graphs showing the most likely times for a vehicle of interest to be parked at one of the affected Tesco shops.
This was what Reg reader Ross was able to do after he realised just how insecure the database behind the parking validation app was.
Frequency of parking for three vehicles at Tesco in Faversham. Each colour represents one vehicle; the size of the circle shows how frequently they parked at the given time.
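Reconstructing that kind of graph from the exposed data requires nothing more than bucketing sightings by hour. A minimal sketch, with a hypothetical record format standing in for the actual timestamped ANPR images:

```python
# A sketch of the kind of profiling the exposed data made possible: bucket
# timestamped sightings of each plate by hour of day. The record format and
# plate below are hypothetical; the real blob held timestamped ANPR photos.
from collections import defaultdict
from datetime import datetime

sightings = [  # (plate, timestamp) pairs extracted from image metadata
    ("AB12 CDE", "2019-10-01T09:02:00"),
    ("AB12 CDE", "2019-10-08T09:11:00"),
    ("AB12 CDE", "2019-10-15T09:05:00"),
]

by_hour = defaultdict(lambda: defaultdict(int))  # plate -> hour -> count
for plate, ts in sightings:
    by_hour[plate][datetime.fromisoformat(ts).hour] += 1

for plate, hours in by_hour.items():
    likely = max(hours, key=hours.get)
    print(f"{plate}: most often seen around {likely}:00")  # 9:00 here
```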
A Tesco spokesman told The Register: “A technical issue with a parking app meant that for a short period historic images and times of cars entering and exiting our car parks were accessible. Whilst no images of people, nor any sensitive data were available, any security breach is unacceptable and we have now disabled the app as we work with our service provider to ensure it doesn’t happen again.”
We are told that during a planned data migration exercise to an AWS data lake, access to the Azure blob was opened to aid with the process. While it has been shut off, Tesco hasn’t told us how long it was left open for.
Tesco said that because it bought the car park monitoring services in from a third party, the third party was responsible for protecting the data in law. Ranger Services had not responded to The Register’s questions about whether it had informed the Information Commissioner’s Office by the time of writing.
[…]
As part of our investigation into the Tesco breach we also found exposed data in an unsecured AWS bucket belonging to car park operator NCP. The data was powering an online dashboard that could also be accessed without any login creds at all. A few tens of thousands of images were exposed in that bucket.
[…]
The unsecured NCP Vizuul dashboard
The dashboard, hosted at Vizuul.com, allowed the casual browser to pore through aggregated information drawn from ANPR cameras at an unidentified location. The information on display allowed one to view how many times a particular numberplate had infringed the car park rules, how many times it had been flagged in particular car parks, and how many penalty charge notices had been issued to it in the past.
The dashboard has since been pulled from public view.
The names of more than 120 companies secretly served FBI subpoenas for their customers’ personal data were revealed on Friday, including a slew of U.S. banks, cellphone providers, and a leading antivirus software maker.
Known as national security letters (NSL), the subpoenas are a tool commonly used by FBI counterterrorism agents when seeking individuals’ communication and financial histories. No judge oversees their use. Senior-most agents at any of the FBI’s 56 nationwide field offices can issue the letters, which are typically accompanied by a gag order.
The letters allow the FBI to demand access to limited types of information, most of which may be described as “metadata”—the names of email senders and recipients and the dates and times that messages were sent, for example. The actual content of messages is legally out of bounds. Financial information such as credit card transactions and travelers check purchases can also be obtained, in addition to the billing records and history of any given phone number.
Because NSL recipients are often forced to keep the fact secret for many years, there’s been little transparency around who’s getting served.
But on Friday, the New York Times published four documents with details on 750 NSLs issued as far back as 2016. The paper described the documents—obtained by digital-rights group the Electronic Frontier Foundation (EFF) in a Freedom of Information Act lawsuit—as a “small but telling fraction” of the more than 500,000 letters issued since 2001, when passage of the Patriot Act greatly expanded the number of FBI officials who could sign them. Between 2000 and 2006, use of NSLs increased nearly six-fold, according to the Justice Department inspector general.
[…]
After passage of the USA Freedom Act in 2015, the FBI adopted guidelines that require gag orders to be reviewed for necessity three years after issuance or after an investigation is closed. Yet, privacy advocates accuse the FBI of failing to follow its own rules.
“The documents released by the FBI show that a wide range of services and providers receive NSLs and that the majority of them never tell their customers or the broader public, even after the government releases them from NSL gag orders,” said Aaron Mackey, a staff attorney at the EFF. “The records also show that the FBI is falling short of its obligations to release NSL recipients from gag orders that are no longer necessary.”
The FBI declined to comment.
The secrecy—not to mention the weak evidentiary standards—has kept NSLs squarely in the cross hairs of civil liberties groups for years. But the FBI also carries a history of abuse, having in the past issued numerous letters “without proper authorization,” to quote the bureau’s own inspector general in 2009.
The same official would also describe to Congress a bevy of violations, including “improper requests” and “unauthorized collections” of data that can’t be legally obtained with an NSL. In some cases, the justifications used by agents to obtain letters were found to be “perfunctory and conclusory” – that is, rote assertions offered without supporting reasoning.
“It’s unconstitutional for the FBI to impose indefinite gags on the companies that receive NSLs,” said Neema Singh Guliani, senior legislative counsel with the American Civil Liberties Union. “This is one of the reasons that Congress previously sought to put an end to this practice, but it is now clear that the FBI is not following the law as intended.”
“As part of its surveillance reform efforts this year, Congress must strengthen existing laws designed to bar these types of gag orders,” she added.
The NSL records obtained by the EFF can be viewed online.
Cities in China are under the heaviest CCTV surveillance in the world, according to a new analysis by Comparitech. However, some residents living in cities across the US, UK, UAE, Australia, and India will also find themselves surrounded by a large number of watchful eyes, as our look at the number of public CCTV cameras in 120 cities worldwide found.
[…]
Depending on whom you ask, the increased prevalence and capabilities of CCTV surveillance could make society safer and more efficient, could trample on our rights to privacy and freedom of movement, or both. No matter which side you argue, the fact is that live video surveillance is ramping up worldwide.
Comparitech researchers collated a number of data resources and reports, including government reports, police websites, and news articles, to get some idea of the number of CCTV cameras in use in 120 major cities across the globe. We focused primarily on public CCTV—cameras used by government entities such as law enforcement.
Here are our key findings:
Eight out of the top 10 most-surveilled cities are in China
London and Atlanta were the only cities outside of China to make the top 10
By 2022, China is projected to have one public CCTV camera for every two people
We found little correlation between the number of public CCTV cameras and crime or safety
The 20 most-surveilled cities in the world
Based on the number of cameras per 1,000 people, these cities are the top 20 most surveilled in the world:
Chongqing, China – 2,579,890 cameras for 15,354,067 people = 168.03 cameras per 1,000 people
Shenzhen, China – 1,929,600 cameras for 12,128,721 people = 159.09 cameras per 1,000 people
Shanghai, China – 2,985,984 cameras for 26,317,104 people = 113.46 cameras per 1,000 people
Tianjin, China – 1,244,160 cameras for 13,396,402 people = 92.87 cameras per 1,000 people
Ji’nan, China – 540,463 cameras for 7,321,200 people = 73.82 cameras per 1,000 people
London, England (UK) – 627,707 cameras for 9,176,530 people = 68.40 cameras per 1,000 people
Wuhan, China – 500,000 cameras for 8,266,273 people = 60.49 cameras per 1,000 people
Guangzhou, China – 684,000 cameras for 12,967,862 people = 52.75 cameras per 1,000 people
Beijing, China – 800,000 cameras for 20,035,455 people = 39.93 cameras per 1,000 people
Atlanta, Georgia (US) – 7,800 cameras for 501,178 people = 15.56 cameras per 1,000 people
Singapore – 86,000 cameras for 5,638,676 people = 15.25 cameras per 1,000 people
Abu Dhabi, UAE – 20,000 cameras for 1,452,057 people = 13.77 cameras per 1,000 people
Chicago, Illinois (US) – 35,000 cameras for 2,679,044 people = 13.06 cameras per 1,000 people
Urumqi, China – 43,394 cameras for 3,500,000 people = 12.40 cameras per 1,000 people
Sydney, Australia – 60,000 cameras for 4,859,432 people = 12.35 cameras per 1,000 people
Baghdad, Iraq – 120,000 cameras for 9,760,000 people = 12.30 cameras per 1,000 people
Dubai, UAE – 35,000 cameras for 2,883,079 people = 12.14 cameras per 1,000 people
Moscow, Russia – 146,000 cameras for 12,476,171 people = 11.70 cameras per 1,000 people
Berlin, Germany – 39,765 cameras for 3,556,792 people = 11.18 cameras per 1,000 people
New Delhi, India – 179,000 cameras for 18,600,000 people = 9.62 cameras per 1,000 people
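The per-1,000 figures above reduce to a single division; a quick check in code, using three of the listed cities, reproduces them:

```python
# Verifying the per-capita figures above: cameras / population * 1000.
cities = [
    ("Chongqing", 2_579_890, 15_354_067),
    ("London", 627_707, 9_176_530),
    ("Atlanta", 7_800, 501_178),
]
for name, cameras, population in cities:
    print(f"{name}: {cameras / population * 1000:.2f} cameras per 1,000 people")
# Chongqing: 168.03, London: 68.40, Atlanta: 15.56, matching the list above
```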
Smart-home devices, such as televisions and streaming boxes, are collecting reams of data — including sensitive information such as device locations — that is then being sent to third parties like advertisers and major tech companies, researchers said Tuesday.
As the findings show, even as privacy concerns have become a part of the discussion around consumer technology, new devices are adding to the hidden and often convoluted industry around data collection and monetization.
A team of researchers from Northeastern University and the Imperial College of London found that a variety of internet-connected devices collected and distributed data to outside companies, including smart TV and TV streaming devices from Roku and Amazon — even if a consumer did not interact with those companies.
“Nearly all TV devices in our testbeds contact Netflix even though we never configured any TV with a Netflix account,” the Northeastern and Imperial College researchers wrote.
The researchers tested a total of 81 devices in the U.S. and U.K. in an effort to gain a broad idea of how much data is collected by smart-home devices, and where that data goes.
The researchers found data sent to a variety of companies, some known to consumers including Google, Facebook and Amazon, as well as companies that operate out of the public eye such as Mixpanel.com, a company that tracks users to help companies improve their products.
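The study’s core classification step can be sketched simply: given the destinations a device contacted on the lab network, flag any that don’t belong to the device’s vendor. The vendor-domain mapping and hostnames below are illustrative, not the researchers’ actual data:

```python
# Sketch of the core classification step in studies like this: given the
# destination hosts a device contacted (captured on the lab network), flag
# those that don't belong to the device's vendor. The vendor-domain mapping
# and hostnames here are illustrative, not the study's actual data.
VENDOR_DOMAINS = {"roku_tv": {"roku.com"}, "fire_tv": {"amazon.com"}}

def third_parties(device: str, contacted_hosts: list[str]) -> set[str]:
    own = VENDOR_DOMAINS.get(device, set())
    return {
        host for host in contacted_hosts
        if not any(host == d or host.endswith("." + d) for d in own)
    }

observed = ["scribe.logs.roku.com", "api.mixpanel.com", "occ-0-1.nflxso.net"]
print(third_parties("roku_tv", observed))
# -> {'api.mixpanel.com', 'occ-0-1.nflxso.net'}: analytics and Netflix hosts
```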
Spotify knows a lot about its users — their musical tastes, their most listened-to artists and their summer anthems. Spotify also wants to know where you live, and it will collect your location data to check. It’s part of an effort to detect fraud and abuse of its Premium Family program.
Premium Family is a $15-a-month plan for up to six people. The only condition is that they all live at the same address. But the streaming music giant is concerned about people abusing that plan to pay as little as $2.50 for its services. So in August, the company updated its terms and conditions for Premium Family subscribers, requiring that they provide location data “from time to time” to ensure that customers are actually all in the same family.
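Spotify hasn’t said how it verifies a shared address, but a plausible mechanism is a simple distance check between each member’s reported coordinates and the plan’s registered address. A hedged sketch, in which the coordinates and the 1 km tolerance are invented:

```python
# How such a check might work (Spotify's actual method is not public):
# compare each member's reported coordinates to the plan's registered
# address and require them to fall within some radius. Haversine distance.
from math import asin, cos, radians, sin, sqrt

def km_between(a: tuple[float, float], b: tuple[float, float]) -> float:
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))  # Earth radius ~6371 km

HOME = (53.3498, -6.2603)       # registered family address (illustrative)
member_ping = (53.3510, -6.2620)  # a member's reported location
print(km_between(HOME, member_ping) < 1.0)  # True: within a 1 km tolerance
```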
You have 30 days to cancel after the new terms go into effect; when that happens depends on where you are. The family plan terms rolled out first on Aug. 19 in Ireland and on Sept. 5 in the US.
The company tested this last year and asked for exact GPS coordinates, but ended the pilot program after customers balked, according to TechCrunch. Now it intends to roll the location data requests out fully, reigniting privacy concerns and raising the question of how much is too much when it comes to your personal information.
“The changes to the policy allow Spotify to arbitrarily use the location of an individual to ascertain if they continue to reside at the same address when using a family account, and it’s unclear how often Spotify will query users’ devices for this information,” said Christopher Weatherhead, technology lead for UK watchdog group Privacy International, adding that there are “worrying privacy implications.”