Apple Safari browser sends some user IP addresses to Chinese conglomerate Tencent by default

Apple admits that it sends some user IP addresses to Tencent in the “About Safari & Privacy” section of its Safari settings, which can be accessed on an iOS device by opening the Settings app and then selecting “Safari > About Privacy & Security.” Under the title “Fraudulent Website Warning,” Apple says:

“Before visiting a website, Safari may send information calculated from the website address to Google Safe Browsing and Tencent Safe Browsing to check if the website is fraudulent. These safe browsing providers may also log your IP address.”

The “Fraudulent Website Warning” setting is toggled on by default, which means that unless iPhone or iPad users dive two levels deep into their settings and toggle it off, their IP addresses may be logged by Tencent or Google when they use the Safari browser. However, toggling it off makes browsing sessions less secure and leaves users vulnerable to accessing fraudulent websites.
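Apple doesn’t spell out the mechanism here, but Google’s published Safe Browsing design (which the Tencent service is understood to mirror) has clients check truncated SHA-256 hashes of URL expressions against a local list, and only contact the provider when a prefix matches. A minimal sketch of the hashing step, with URL canonicalization omitted for brevity:

```python
import hashlib

def hash_prefix(url: str, prefix_len: int = 4) -> str:
    """Truncated SHA-256 prefix of a URL expression, the kind of
    'information calculated from the website address' a Safe Browsing
    client compares against a local list of known-bad prefixes.
    (Real clients canonicalize the URL and derive several host/path
    expressions first.)"""
    digest = hashlib.sha256(url.encode("utf-8")).digest()
    return digest[:prefix_len].hex()

# Only when a short prefix matches the local list does the client ask
# the provider for the full hashes. That follow-up request is the point
# at which the provider can observe traffic and, per Apple's note, log
# the IP address.
print(hash_prefix("example.com/"))
```

The privacy exposure Apple describes thus comes from the follow-up lookup, not from the browser sending raw URLs.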

[…]

Even if people install a third-party browser on their iOS device, viewing web pages inside apps still opens them in an integrated form of Safari called Safari View Controller instead of the third-party browser. Tapping links inside apps also opens them in Safari rather than a third-party browser. These behaviors that force people back into Safari make it difficult for people to avoid the Safari browser completely when using an iPhone or iPad.

Source: Apple Safari browser sends some user IP addresses to Chinese conglomerate Tencent by default

Human Employees Are Viewing Clips from Amazon’s Home Surveillance Service

Citing sources familiar with the program, Bloomberg reported Thursday that “dozens” of workers for the e-commerce giant who are based in Romania and India are tasked with reviewing footage collected by Cloud Cams—Amazon’s app-controlled, Alexa-compatible indoor security devices—to help improve AI functionality and better determine potential threats. Bloomberg reported that at one point, these human workers were responsible for reviewing and annotating roughly 150 security snippets of up to 30 seconds in length each day that they worked.

Two sources who spoke with Bloomberg told the outlet that some clips depicted private imagery, such as what Bloomberg described as “rare instances of people having sex.” An Amazon spokesperson told Gizmodo that reviewed clips are submitted either through employee trials or customer feedback submissions for improving the service.

[…]

So to be clear, customers are sharing clips for troubleshooting purposes, but they aren’t necessarily aware of what happens with that clip after doing so.

More troubling, however, is an accusation from one source who spoke with Bloomberg that some of these human workers tasked with annotating the clips may be sharing them with people outside of their restricted teams, despite the fact that reviews happen in a restricted area that prohibits phones. When asked about this, a spokesperson told Gizmodo by email that Amazon’s rules “strictly prohibit employee access to or use of video clips submitted for troubleshooting, and have a zero tolerance policy for abuse of our systems.”

[…]

To be clear, it’s not just Amazon who’s been accused of allowing human workers to listen in on whatever is going on in your home. Motherboard has reported that both Xbox recordings and Skype calls are reviewed by human contractors. Apple, too, was accused of capturing sensitive recordings that contractors had access to. The fact is these systems just aren’t ready for primetime and need human intervention to function and improve—a fact that tech companies have successfully downplayed in favor of appearing to be magical wizards of innovation.

Source: Human Employees Are Viewing Clips from Amazon’s Home Surveillance Service

Twitter: No, really, we’re very sorry we sold your security info for a boatload of cash

Twitter says it was just an accident that caused the microblogging giant to let advertisers use private information to better target their marketing materials at users.

The social networking giant on Tuesday admitted to an “error” that let advertisers have access to the private information customers had given Twitter in order to place additional security protections on their accounts.

“We recently discovered that when you provided an email address or phone number for safety or security purposes (for example, two-factor authentication) this data may have inadvertently been used for advertising purposes, specifically in our Tailored Audiences and Partner Audiences advertising system,” Twitter said.

“When an advertiser uploaded their marketing list, we may have matched people on Twitter to their list based on the email or phone number the Twitter account holder provided for safety and security purposes. This was an error and we apologize.”

Twitter assures users that no “personal” information was shared, though we’re not sure what Twitter would consider “personal information” if your phone number and email address do not meet the bar.

Source: Twitter: No, really, we’re very sorry we sold your security info for a boatload of cash • The Register

Remember the FBI’s promise it wasn’t abusing the NSA’s data on US citizens? Well, guess what… It was worse than the privacy advocates dreamt

The FBI routinely misused a database, gathered by the NSA with the specific purpose of searching for foreign intelligence threats, by searching it for everything from vetting to spying on relatives.

In doing so, it not only violated the law and the US constitution but knowingly lied to the faces of congressmen who were asking the intelligence services about this exact issue at government hearings, hearings that were intended to find if there needed to be additional safeguards added to the program.

That is the upshot of newly declassified rulings of the secret FISC court that decides issues of spying and surveillance within the United States.

A year-old ruling [PDF], released on Tuesday and still heavily redacted, confirms that everything privacy advocates and a number of congressmen – particularly Senator Ron Wyden (D-OR) – feared about the program was true, and worse.

Even though the program in question – Section 702 – is specifically designed only to be used for US government agencies to be allowed to search for evidence of foreign intelligence threats, the FBI gave itself carte blanche to search the same database for US citizens by stringing together a series of ridiculous legal justifications about data being captured “incidentally” and subsequent queries of that data not requiring a warrant because it had already been gathered.

Despite that situation, the FBI repeatedly assured lawmakers and the courts that it was using its powers in a very limited way. Senator Wyden was not convinced and used his position to ask questions about the program, the answers to which raised ever greater concerns.

For example, while the NSA was able to outline the process by which its staff was allowed to make searches on the database, including who was authorized to dig further, and it was able to give a precise figure for how many searches there had been, the FBI claimed it was literally not able to do so.

Free for all

Any FBI agent was allowed to search the database, it revealed under questioning; any FBI agent was allowed to de-anonymize the data; and the FBI claimed it did not have a system to measure the number of search requests its agents carried out.

In a year-long standoff between Senator Wyden and the Director of National Intelligence, the government told Congress it was not able to produce a figure for the number of US citizens whose details had been brought up in searches – something that likely broke the Fourth Amendment.

Today’s release of the FISC secret opinion reveals that giving the FBI virtually unrestricted access to the database led to exactly the sort of behavior that people were concerned about: a vast number of searches, including many that were not remotely justified.

For example, the DNI told Congress that in 2016, the NSA had carried out 30,355 searches on US persons within the database’s metadata and 2,280 searches on the database’s content. The CIA had carried out 2,352 searches on content for US persons in the same 12-month period. The FBI said it had no way to measure the number of searches it ran.

But that, it turns out, was a bald-faced lie. Because we now know that the FBI carried out 6,800 queries of the database in a single day in December 2017 using social security numbers. In other words, the FBI was using the NSA’s database at least 80 times more frequently than the NSA itself.

The FBI’s use of the database – which, again, is specifically defined in law as only being allowed to be used for foreign intelligence matters – was completely routine. As a result, agents started using it all the time for anything connected to their work, and sometimes their personal lives.

In the secret court opinion, now made public (but, again, still heavily redacted), the government was forced to concede that there were “fundamental misunderstandings” within the FBI staff over what criteria they needed to meet before carrying out a search.

Source: Remember the FBI’s promise it wasn’t abusing the NSA’s data on US citizens? Well, guess what… • The Register

Article continues on the site

US, UK and Australia want Zuckerberg To Halt Plans For End-To-End Encryption Across Facebook’s Apps – because they want to be able to spy on you. As will other criminals. What happened to the “Free world”?

Attorney General Bill Barr, along with officials from the United Kingdom and Australia, is set to publish an open letter to Facebook CEO Mark Zuckerberg asking the company to delay plans for end-to-end encryption across its messaging services until it can guarantee the added privacy does not reduce public safety.

A draft of the letter, dated Oct. 4, is set to be released alongside the announcement of a new data-sharing agreement between law enforcement in the US and the UK; it was obtained by BuzzFeed News ahead of its publication.

Signed by Barr, UK Home Secretary Priti Patel, acting US Homeland Security Secretary Kevin McAleenan, and Australian Minister for Home Affairs Peter Dutton, the letter raises concerns that Facebook’s plan to build end-to-end encryption into its messaging apps will prevent law enforcement agencies from finding illegal activity conducted through Facebook, including child sexual exploitation, terrorism, and election meddling.

Source: Attorney General Bill Barr Will Ask Zuckerberg To Halt Plans For End-To-End Encryption Across Facebook’s Apps

U.S. Plans to Test DNA of Immigrants in Detention Centers

The Trump administration is moving to start testing the DNA of people detained by U.S. immigration officers, according to reports of a call on Wednesday between senior Department of Homeland Security (DHS) officials and reporters.

Justice Department officials are reportedly developing a new rule that would allow immigration officers to begin collecting the private genetic information of those being held in the more than 200 prison-like facilities spread across the U.S.

The New York Times reported that Homeland Security officials said the testing is part of a plan to root out “fraudulent family units.” Children and people applying for asylum at legal ports of entry may be tested under the proposed rule, which is likely to elicit strong concerns from privacy and immigration advocates in coming days.

The officials also said the DNA of U.S. citizens mistakenly booked in the facilities could be collected, according to the Times.

DHS did not respond to a request for comment.

Source: U.S. Plans to Test DNA of Immigrants in Detention Centers

EU Court of Justice rules that opt-in is not valid if the tickbox is pre-ticked

In a court case against Planet49, the EU Court of Justice has ruled that you can’t start collecting data just by showing a warning that you are doing so, or by presenting a pre-selected tickbox stating it’s OK to collect data. The user has to actually go and click the tickbox or OK before any data collection is allowed.

the consent referred to in those provisions is not validly constituted if, in the form of cookies, the storage of information or access to information already stored in a website user’s terminal equipment is permitted by way of a pre-checked checkbox which the user must deselect to refuse his or her consent.

Source: CURIA – Documents

This is a good thing which fights off dark patterning – forcing users into things they don’t consent to or understand, of which there is more than enough, thank you very much.

MS really really wants to know who is using Windows, makes it very hard for Win 10 users to create local accounts.

Microsoft has annoyed some of its 900 million Windows 10 device users after apparently removing the ‘Use offline account’ option as part of its effort to herd users towards its cloud-based Microsoft Account.

The offline local account is specific to one device, while the Microsoft Account can be used to log in to multiple devices and comes with the benefit of Microsoft’s recent work on passwordless authentication with Windows Hello.

The local account doesn’t require an internet connection or an email address – just a username and password that are stored on the PC.

[…]

A user on a popular Reddit thread notes that the local account option is now invisible if the device is connected to the internet.

“Either run the setup without being connected to the internet, or type in a fake phone number a few times and it will give you the prompt to create a local account,” Froggyowns suggested as a solution.

So there is a way around the obstacle but as Reddit user Old_Traveller noted: “It’s such a dick move. I’ll never tie my main OS with an online account.”

[…]

As a user on Hacker News wrote, Microsoft has changed the name of the local account option to ‘Domain join instead’, which then allows admins to create an offline account.

Windows 10 users are accusing Microsoft of employing ‘dark-pattern’ techniques to usher them off local accounts, referring to the tricks websites and software makers use to steer people into choosing an option that benefits the seller.

Source: Windows 10 users fume: Microsoft, where’s our ‘local account’ option gone? | ZDNet

My PC is at home. Microsoft, who sell the OS, have no right to know who I am or what I am doing with MY PC.

Facebook suspends apps belonging to 400 developers for slurping user data

We initially identified apps for investigation based on how many users they had and how much data they could access. Now, we also identify apps based on signals associated with an app’s potential to abuse our policies. Where we have concerns, we conduct a more intensive examination. This includes a background investigation of the developer and a technical analysis of the app’s activity on the platform. Depending on the results, a range of actions could be taken from requiring developers to submit to in-depth questioning, to conducting inspections or banning an app from the platform.

Our App Developer Investigation is by no means finished. But there is meaningful progress to report so far. To date, this investigation has addressed millions of apps. Of those, tens of thousands have been suspended for a variety of reasons while we continue to investigate.

It is important to understand that the apps that have been suspended are associated with about 400 developers. This is not necessarily an indication that these apps were posing a threat to people. Many were not live but were still in their testing phase when we suspended them. It is not unusual for developers to have multiple test apps that never get rolled out. And in many cases, the developers did not respond to our request for information so we suspended them, honoring our commitment to take action.

In a few cases, we have banned apps completely. That can happen for any number of reasons including inappropriately sharing data obtained from us, making data publicly available without protecting people’s identity or something else that was in clear violation of our policies. We have not confirmed other instances of misuse to date other than those we have already notified the public about, but our investigation is not yet complete. We have been in touch with regulators and policymakers on these issues. We’ll continue working with them as our investigation continues.

Source: An Update on Our App Developer Investigation | Facebook Newsroom

Which basically means there were loads and loads more apps harvesting data they shouldn’t have had access to.

This Site Uses AI to Find Issues in Privacy Policies

Whenever you sign up for a new app or service you probably are also agreeing to a new privacy policy. You know, that incredibly long block of text you scroll quickly by without reading?

Guard is a site that uses AI to read epically long privacy policies and then highlight any aspects of them that might be problematic.

Once it reads through a site or app’s privacy policy it gives the service a grade based on that policy as well as makes a recommendation on whether or not you should use it. It also brings in news stories about any scandals associated with a company and information about any security threats.

Twitter, for instance, has a D rating on the service. Guard recommends you avoid that app. The biggest threat? The company’s privacy policy says that it can sell or transfer your information.

For now, you’re limited to seeing ratings for only the services Guard has decided to analyze, which includes most of the major apps out there like YouTube, Reddit, Spotify, and Instagram. However, if you’re interested in a rating for a particular app, you can submit it to the service and ask for it to be analyzed.

As the list of supported services grows, this could become an even more solid resource for looking into what you’re using on your phone or computer and understanding how your data is being used.

Source: This Site Uses AI to Find Issues in Privacy Policies

When were you at Tesco? Let’s have a look. Parking app hauled offline after exposing tens of millions of Automatic Number Plate Recognition images by Ranger Services and NCP

Tesco has shuttered its parking validation web app after The Register uncovered tens of millions of unsecured ANPR images sitting in a Microsoft Azure blob.

The images consisted of photos of cars taken as they entered and left 19 Tesco car parks spread across Britain. Visible and highlighted were the cars’ numberplates, though drivers were not visible in the low-res images seen by The Register.

Used to power the supermarket’s outsourced parkshopreg.co.uk website, the Azure blob had no login or authentication controls. Tesco admitted to The Register that “tens of millions” of timestamped images were stored on it, adding that the images had been left exposed after a data migration exercise.

Ranger Services, which operated the Azure blob and the parkshopreg.co.uk web app, said it had nothing to add and did not answer any questions put to it by The Register. We understand that they are still investigating the extent of the breach. The firm recently merged with rival parking operator CP Plus and renamed itself GroupNexus.

[…]

The Tesco car parks affected by the breach include Braintree, Chelmsford, Chester, Epping, Fareham, Faversham, Gateshead, Hailsham, Hereford, Hove, Hull, Kidderminster, Woolwich, Rotherham, Sale (Cheshire), Slough, Stevenage, Truro, Walsall and Weston-super-Mare.

The web app compared the store-generated code with the ANPR images to decide whom to issue with parking charges. Ranger Services has pulled parkshopreg.co.uk offline, with its homepage now defaulting to a 403 error page.

[…]

A malicious person could use the data in the images to create graphs showing the most likely times for a vehicle of interest to be parked at one of the affected Tesco shops.

This was what Reg reader Ross was able to do after he realised just how insecure the database behind the parking validation app was.

Frequency of parking for three vehicles at Tesco in Faversham. Each colour represents one vehicle; the size of the circle shows how frequently they parked at the given time.
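A graph like that needs nothing more than the exposed plate-and-timestamp pairs. A hypothetical sketch of the aggregation step (the records and plate numbers below are invented for illustration):

```python
from collections import Counter
from datetime import datetime

# Invented records of the kind exposed: (numberplate, timestamp) pairs
# taken as a car enters a monitored car park.
sightings = [
    ("AB12 CDE", "2019-08-01T09:05:00"),
    ("AB12 CDE", "2019-08-08T09:10:00"),
    ("XY34 ZZZ", "2019-08-03T14:30:00"),
]

def parking_pattern(records, plate):
    """Count sightings of one plate per (weekday, hour) slot --
    enough to predict when that vehicle is likely to be parked."""
    slots = Counter()
    for p, ts in records:
        if p == plate:
            t = datetime.fromisoformat(ts)
            slots[(t.strftime("%A"), t.hour)] += 1
    return slots

print(parking_pattern(sightings, "AB12 CDE"))
# -> Counter({('Thursday', 9): 2})
```

Scaled up to tens of millions of timestamped images, this is exactly the "most likely times for a vehicle of interest" analysis described above.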

A Tesco spokesman told The Register: “A technical issue with a parking app meant that for a short period historic images and times of cars entering and exiting our car parks were accessible. Whilst no images of people, nor any sensitive data were available, any security breach is unacceptable and we have now disabled the app as we work with our service provider to ensure it doesn’t happen again.”

We are told that during a planned data migration exercise to an AWS data lake, access to the Azure blob was opened to aid with the process. While it has been shut off, Tesco hasn’t told us how long it was left open for.

Tesco said that because it bought the car park monitoring services in from a third party, the third party was responsible for protecting the data in law. Ranger Services had not responded to The Register’s questions about whether it had informed the Information Commissioner’s Office by the time of writing.

[…]

As part of our investigation into the Tesco breach we also found exposed data in an unsecured AWS bucket belonging to car park operator NCP. The data was powering an online dashboard that could also be accessed without any login creds at all. A few tens of thousands of images were exposed in that bucket.

[…]

The unsecured NCP Vizuul dashboard

The dashboard, hosted at Vizuul.com, allowed the casual browser to pore through aggregated information drawn from ANPR cameras at an unidentified location. The information on display allowed one to view how many times a particular numberplate had infringed the car park rules, how many times it had been flagged in particular car parks, and how many penalty charge notices had been issued to it in the past.

The dashboard has since been pulled from public view.

Source: Tesco parking app hauled offline after exposing 10s of millions of Automatic Number Plate Recognition images • The Register

FBI Served Valve, Symantec, 120 companies with secret surveillance National Security Letters

The names of more than 120 companies secretly served with FBI subpoenas for their customers’ personal data were revealed on Friday, including a slew of U.S. banks, cellphone providers, and a leading antivirus software maker.

Known as national security letters (NSL), the subpoenas are a tool commonly used by FBI counterterrorism agents when seeking individuals’ communication and financial histories. No judge oversees their use. Senior-most agents at any of the FBI’s 56 nationwide field offices can issue the letters, which are typically accompanied by a gag order.

The letters allow the FBI to demand access to limited types of information, most of which may be described as “metadata”—the names of email senders and recipients and the dates and times that messages were sent, for example. The actual content of messages is legally out of bounds. Financial information such as credit card transactions and travelers check purchases can also be obtained, in addition to the billing records and history of any given phone number.

Because NSL recipients are often forced to keep the fact secret for many years, there’s been little transparency around who’s getting served.

But on Friday, the New York Times published four documents with details on 750 NSLs issued as far back as 2016. The paper described the documents—obtained by digital-rights group the Electronic Frontier Foundation (EFF) in a Freedom of Information Act lawsuit—as a “small but telling fraction” of the more than 500,000 letters issued since 2001, when passage of the Patriot Act greatly expanded the number of FBI officials who could sign them. Between 2000 and 2006, use of NSLs increased nearly six-fold, according to the Justice Department inspector general.

[…]

After passage of the USA Freedom Act in 2015, the FBI adopted guidelines that require gag orders to be reviewed for necessity three years after issuance or after an investigation is closed. Yet, privacy advocates accuse the FBI of failing to follow its own rules.

“The documents released by the FBI show that a wide range of services and providers receive NSLs and that the majority of them never tell their customers or the broader public, even after the government releases them from NSL gag orders,” said Aaron Mackey, a staff attorney at the EFF. “The records also show that the FBI is falling short of its obligations to release NSL recipients from gag orders that are no longer necessary.”

The FBI declined to comment.

The secrecy—not to mention the weak evidentiary standards—has kept NSLs squarely in the cross hairs of civil liberties groups for years. But the FBI also carries a history of abuse, having in the past issued numerous letters “without proper authorization,” to quote the bureau’s own inspector general in 2009.

The same official would also describe to Congress a bevy of violations including “improper requests” and “unauthorized collections” of data that can’t be legally obtained with an NSL. In some cases, the justifications used by agents to obtain letters were found to be “perfunctory and conclusory,” or convenient and inherently flawed.

“It’s unconstitutional for the FBI to impose indefinite gags on the companies that receive NSLs,” said Neema Singh Guliani, senior legislative counsel with the American Civil Liberties Union. “This is one of the reasons that Congress previously sought to put an end to this practice, but it is now clear that the FBI is not following the law as intended.”

“As part of its surveillance reform efforts this year, Congress must strengthen existing laws designed to bar these types of gag orders,” she added.

The NSL records obtained by the EFF can be viewed here.

Source: FBI Served Valve, Symantec, More National Security Letters

The world’s most-surveilled cities – China, US, UK, UAE, Australia and India: you are being spied on!

Cities in China are under the heaviest CCTV surveillance in the world, according to a new analysis by Comparitech. However, some residents living in cities across the US, UK, UAE, Australia, and India will also find themselves surrounded by a large number of watchful eyes, as our look at the number of public CCTV cameras in 120 cities worldwide found.

[…]

Depending on whom you ask, the increased prevalence and capabilities of CCTV surveillance could make society safer and more efficient, could trample on our rights to privacy and freedom of movement, or both. No matter which side you argue, the fact is that live video surveillance is ramping up worldwide.

Comparitech researchers collated a number of data resources and reports, including government reports, police websites, and news articles, to get some idea of the number of CCTV cameras in use in 120 major cities across the globe. We focused primarily on public CCTV—cameras used by government entities such as law enforcement.

Here are our key findings:

  • Eight out of the top 10 most-surveilled cities are in China
  • London and Atlanta were the only cities outside of China to make the top 10
  • By 2022, China is projected to have one public CCTV camera for every two people
  • We found little correlation between the number of public CCTV cameras and crime or safety

The 20 most-surveilled cities in the world

Based on the number of cameras per 1,000 people, these cities are the top 20 most surveilled in the world:

  1. Chongqing, China – 2,579,890 cameras for 15,354,067 people = 168.03 cameras per 1,000 people
  2. Shenzhen, China – 1,929,600 cameras for 12,128,721 people = 159.09 cameras per 1,000 people
  3. Shanghai, China – 2,985,984 cameras for 26,317,104 people = 113.46 cameras per 1,000 people
  4. Tianjin, China – 1,244,160 cameras for 13,396,402 people = 92.87 cameras per 1,000 people
  5. Ji’nan, China – 540,463 cameras for 7,321,200 people = 73.82 cameras per 1,000 people
  6. London, England (UK) – 627,707 cameras for 9,176,530 people = 68.40 cameras per 1,000 people
  7. Wuhan, China – 500,000 cameras for 8,266,273 people = 60.49 cameras per 1,000 people
  8. Guangzhou, China – 684,000 cameras for 12,967,862 people = 52.75 cameras per 1,000 people
  9. Beijing, China – 800,000 cameras for 20,035,455 people = 39.93 cameras per 1,000 people
  10. Atlanta, Georgia (US) – 7,800 cameras for 501,178 people = 15.56 cameras per 1,000 people
  11. Singapore – 86,000 cameras for 5,638,676 people = 15.25 cameras per 1,000 people
  12. Abu Dhabi, UAE – 20,000 cameras for 1,452,057 people = 13.77 cameras per 1,000 people
  13. Chicago, Illinois (US) – 35,000 cameras for 2,679,044 people = 13.06 cameras per 1,000 people
  14. Urumqi, China – 43,394 cameras for 3,500,000 people = 12.40 cameras per 1,000 people
  15. Sydney, Australia – 60,000 cameras for 4,859,432 people = 12.35 cameras per 1,000 people
  16. Baghdad, Iraq – 120,000 cameras for 9,760,000 people = 12.30 cameras per 1,000 people
  17. Dubai, UAE – 35,000 cameras for 2,883,079 people = 12.14 cameras per 1,000 people
  18. Moscow, Russia – 146,000 cameras for 12,476,171 people = 11.70 cameras per 1,000 people
  19. Berlin, Germany – 39,765 cameras for 3,556,792 people = 11.18 cameras per 1,000 people
  20. New Delhi, India – 179,000 cameras for 18,600,000 people = 9.62 cameras per 1,000 people
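The per-1,000 figures in the list above follow directly from the raw counts (cameras divided by population, times 1,000), and are easy to reproduce:

```python
def per_thousand(cameras: int, population: int) -> float:
    """Cameras per 1,000 residents, rounded to two decimals as in
    Comparitech's table."""
    return round(cameras / population * 1000, 2)

# Reproducing two entries from the list above:
print(per_thousand(2_579_890, 15_354_067))  # Chongqing -> 168.03
print(per_thousand(627_707, 9_176_530))     # London    -> 68.4
```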

Source: The world’s most-surveilled cities – Comparitech

Smart TVs, smart-home devices found to be leaking sensitive user data to all kinds of companies

Smart-home devices, such as televisions and streaming boxes, are collecting reams of data — including sensitive information such as device locations — that is then being sent to third parties like advertisers and major tech companies, researchers said Tuesday.

As the findings show, even as privacy concerns have become a part of the discussion around consumer technology, new devices are adding to the hidden and often convoluted industry around data collection and monetization.

A team of researchers from Northeastern University and the Imperial College of London found that a variety of internet-connected devices collected and distributed data to outside companies, including smart TV and TV streaming devices from Roku and Amazon — even if a consumer did not interact with those companies.

“Nearly all TV devices in our testbeds contacts Netflix even though we never configured any TV with a Netflix account,” the Northeastern and Imperial College researchers wrote.

The researchers tested a total of 81 devices in the U.S. and U.K. in an effort to gain a broad idea of how much data is collected by smart-home devices, and where that data goes.

The research was first reported by The Financial Times.

The researchers found data sent to a variety of companies, some known to consumers including Google, Facebook and Amazon, as well as companies that operate out of the public eye such as Mixpanel.com, a company that tracks users to help companies improve their products.

Source: Smart TVs, smart-home devices found to be leaking sensitive user data, researchers find

Spotify wants to know where you are and will be checking in

Spotify knows a lot about its users — their musical tastes, their most listened-to artists and their summer anthems. Now Spotify also wants to know where you live, and it will obtain your location data to find out. It’s part of an effort to detect fraud and abuse of its Premium Family program.

Premium Family is a $15-a-month plan for up to six people. The only condition is that they all live at the same address. But the streaming music giant is concerned about people abusing that plan to pay as little as $2.50 for its services. So in August, the company updated its terms and conditions for Premium Family subscribers, requiring that they provide location data “from time to time” to ensure that customers are actually all in the same family.

You have 30 days to cancel after the new terms go into effect, and when that clock starts depends on where you are. The family plan terms rolled out first on Aug. 19 in Ireland, then on Sept. 5 in the US.

The company tested this last year, asking for exact GPS coordinates, but ended the pilot program after customers balked, according to TechCrunch. Now it intends to roll the location-data requests out fully, reigniting privacy concerns and raising the question of how much is too much when it comes to your personal information.

“The changes to the policy allow Spotify to arbitrarily use the location of an individual to ascertain if they continue to reside at the same address when using a family account, and it’s unclear how often Spotify will query users’ devices for this information,” said Christopher Weatherhead, technology lead for UK watchdog group Privacy International, adding that there are “worrying privacy implications.”

Source: Spotify wants to know where you live and will be checking in – CNET

Millions of Americans’ medical images and data are available on the Internet

Medical images and health data belonging to millions of Americans, including X-rays, MRIs, and CT scans, are sitting unprotected on the Internet and available to anyone with basic computer expertise.

The records cover more than 5 million patients in the United States and millions more around the world. In some cases, a snoop could use free software programs—or just a typical Web browser—to view the images and private data, an investigation by ProPublica and the German broadcaster Bayerischer Rundfunk found.

We identified 187 servers—computers that are used to store and retrieve medical data—in the US that were unprotected by passwords or basic security precautions. The computer systems, from Florida to California, are used in doctors’ offices, medical-imaging centers, and mobile X-ray services.

The insecure servers we uncovered add to a growing list of medical records systems that have been compromised in recent years. Unlike some of the more infamous recent security breaches, in which hackers circumvented a company’s cyber defenses, these records were often stored on servers that lacked the security precautions that long ago became standard for businesses and government agencies.

“It’s not even hacking. It’s walking into an open door,” said Jackie Singh, a cybersecurity researcher and chief executive of the consulting firm Spyglass Security. Some medical providers started locking down their systems after we told them of what we had found.

Our review found that the extent of the exposure varies, depending on the health provider and what software they use. For instance, the server of US company MobilexUSA displayed the names of more than a million patients—all by typing in a simple data query. Their dates of birth, doctors, and procedures were also included.

[…]

All told, medical data from more than 16 million scans worldwide was available online, including names, birthdates, and, in some cases, Social Security numbers.

[…]

The issue should not be a surprise to medical providers. For years, one expert has tried to warn about the casual handling of personal health data. Oleg Pianykh, the director of medical analytics at Massachusetts General Hospital’s radiology department, said medical imaging software has traditionally been written with the assumption that patients’ data would be secured by the customers’ computer security systems.

But as those networks at hospitals and medical centers became more complex and connected to the Internet, the responsibility for security shifted to network administrators who assumed safeguards were in place. “Suddenly, medical security has become a do-it-yourself project,” Pianykh wrote in a 2016 research paper he published in a medical journal.

ProPublica’s investigation built upon findings from Greenbone Networks, a security firm based in Germany that identified problems in at least 52 countries on every inhabited continent. Greenbone’s Dirk Schrader first shared his research with Bayerischer Rundfunk after discovering some patients’ health records were at risk. The German journalists then approached ProPublica to explore the extent of the exposure in the United States.

Source: Millions of Americans’ medical images and data are available on the Internet | Ars Technica

Period Tracker Apps: Maya and MIA Fem Are Telling Facebook When You Last Had Sex, and More

Period tracker apps are sending deeply personal information about women’s health and sexual practices to Facebook, new research has found.

UK-based advocacy group Privacy International, sharing its findings exclusively with BuzzFeed News, discovered period-tracking apps including MIA Fem and Maya sent women’s use of contraception, the timings of their monthly periods, symptoms like swelling and cramps, and more, directly to Facebook.

Women use such apps for a range of purposes, from tracking their period cycles to maximizing their chances of conceiving a child. On the Google Play store, Maya, owned by India-based Plackal Tech, has more than 5 million downloads. Period Tracker MIA Fem: Ovulation Calculator, owned by Cyprus-based Mobapp Development Limited, says it has more than 2 million users around the world. They are also available on the App Store.

The data sharing with Facebook happens via Facebook’s Software Development Kit (SDK), which helps app developers incorporate particular features and collect user data so Facebook can show them targeted ads, among other functions. When a user puts personal information into an app, that information may also be sent by the SDK to Facebook.

Asked about the report, Facebook told BuzzFeed News it had gotten in touch with the apps Privacy International identified to discuss possible violations of its terms of service, including sending prohibited types of sensitive information.

Maya informs Facebook whenever you open the app and starts sharing some data with Facebook even before the user agrees to the app’s privacy policy, Privacy International found.

“When Maya asks you to enter how you feel and offers suggestions of symptoms you might have — suggestions like blood pressure, swelling or acne — one would hope this data would be treated with extra care,” the report said. “But no, that information is shared with Facebook.”

The app also shares data users enter about their use of contraception, the analysis found, as well as their moods. It also asks users to enter information about when they’ve had sex and what kind of contraception they used, and also includes a diarylike section for users to write their own notes. That information is also shared with Facebook.

Source: Period Tracker Apps: Maya And MIA Fem Are Sharing Deeply Personal Data With Facebook

UK Government Plans to Collect ‘Targeted and Personalized’ Data on Internet Users to Prepare For Brexit: Report

The UK government is planning to collect “targeted and personalized information” on anyone who visits the government’s various websites, according to a new report from BuzzFeed News. Politicians in the UK are being told that it’s a “top priority” and that the information is needed to prepare for Brexit, the UK’s departure from the European Union, which is still scheduled for October 31.

BuzzFeed obtained two top-secret government directives from August, directed at members of Prime Minister Boris Johnson’s cabinet, about an “accelerated implementation plan” for tracking “digital identity.” A UK government spokesperson in contact with BuzzFeed denied that it was collecting personal data and insisted that “all activity is fully compliant with our legal and ethical obligations.”

The government’s main web portal, Gov.UK, is used for a wide range of online services from health care to passports to taxes, and also includes services that would typically be handled by individual states in the U.S., including renewing your driver’s license. Thus, any attempt to politicize the kind of information collected is highly controversial in the UK.

From BuzzFeed:

At present, usage of GOV.UK is tracked by individual departments, not collected centrally. According to the documents seen by BuzzFeed News, the Cabinet Office’s digital unit, the government digital service (GDS), will add an additional layer of tracking that “will enable GDS to have data for the entire journey of a user as they land on GOV.UK from a Google advert or an email link, read content on GOV.UK, click on a link taking them from GOV.UK to a service and then onwards through the service journey to completion”.

One of the memos was from Prime Minister Johnson himself telling staff that the information would “support key decision making” for Brexit, though it’s not clear what that means in practice.

British citizens are rightly skeptical of any massive digital data collection programs, especially as we learn more about how Big Data was used to manipulate the British people before the public referendum in 2016 on whether or not to leave the EU. The campaigners who wanted people to vote “Leave” used the disgraced political data firm Cambridge Analytica, best known in the U.S. for misusing Facebook data in an effort to get Donald Trump elected.

The UK is currently in the middle of a self-imposed crisis as the deadline for Brexit is less than two months away. And while no one knows for sure what Boris Johnson and his government will do with a new centralized data collection plan, you can see why people would think that’s a bad idea.

But much like President Trump’s attitude in the U.S., it may not matter what the people think—Johnson suspended parliament last night, sending politicians home until October 14, and he’s going to do whatever he feels he needs to do to make Brexit happen.

Source: UK Government Plans to Collect ‘Targeted and Personalized’ Data on Internet Users to Prepare For Brexit: Report

Facebook: Remember how we promised we weren’t tracking your location? Psych! Can’t believe you fell for that

For years the antisocial media giant has claimed it doesn’t track your location, insisting to suspicious reporters and privacy advocates that its addicts “have full control over their data,” and that it does not gather or sell that data unless those users agree to it.

No one believed it. So, when it (and Google) were hit with lawsuits trying to get to the bottom of the issue, Facebook followed its well-worn path to avoiding scrutiny: it changed its settings and pushed out carefully worded explanations that sounded an awful lot like it wasn’t tracking you anymore. But it was. Because location data is valuable.

Then, late on Monday, Facebook emitted a blog post in which it kindly offered to help users “understand updates” to their “device’s location settings.”

It begins: “Facebook is better with location. It powers features like check-ins and makes planning events easier. It helps improve ads and keep you and the Facebook community safe. Features like Find Wi-Fi and Nearby Friends use precise location even when you’re not using the app to make sure that alerts and tools are accurate and personalized for you.”

You may have missed the critical part amid the glowing testimony so we’ll repeat it: “… use precise location even when you’re not using the app…”

Huh, fancy that. It sounds an awful lot like tracking. After all, why would you want Facebook to know your precise location at all times, even when you’re not using its app? And didn’t Facebook promise it wasn’t doing that?

Timing

Well, yes it did, and it was being economical with the truth. But perhaps the bigger question is: why now? Why has Facebook decided to come clean all of a sudden? Is it because of the newly announced antitrust and privacy investigations into tech giants? Well, yes, in a roundabout way.

Surprisingly, in a moment of almost honesty which must have felt quite strange for Facebook’s execs, the web giant actually explains why it has stopped pretending it doesn’t track users: because soon it won’t be able to keep up the pretense.

“Android and iOS have released new versions of their operating systems, which include updates to how you can view and manage your location,” the blog post reveals.

That’s right, under pressure from lawmakers and users, both Google and Apple have added new privacy features to their upcoming mobile operating systems – Android and iOS – that will make it impossible for Facebook to hide its tracking activity.

Source: Facebook: Remember how we promised we weren’t tracking your location? Psych! Can’t believe you fell for that • The Register

The Windows 10 Privacy Settings You Should Check Right Now

If you’re at all concerned about the privacy of your data, you don’t want to leave the default settings in place on your devices—and that includes anything that runs Windows 10.

Microsoft’s operating system comes with a variety of controls and options you can modify to lock down the use of your data, from the information you share with Microsoft to the access that individual apps have to your location, camera, and microphone. Check these privacy-related settings as soon as you’ve got your Windows 10 computer set up—or now, if you’re a longtime user who hasn’t gotten around to it yet.

Source: The Windows 10 Privacy Settings You Should Check Right Now | WIRED

Cops did hand over photos for King’s Cross facial-recog CCTV to 3rd parties after all – a property developer, between 2016-2018

London cops have admitted they gave photos of people to a property developer to use in a facial-recognition system in the heart of the UK capital.

Back in July, Siân Berry, co-leader of the Green Party of England and Wales, asked London Mayor Sadiq Khan whether the Met Police had collaborated with any retailers or other private companies in the operation of facial-recognition systems. A month later, Khan replied that the police force had not worked with any organisations on face-scanning tech in the capital beyond its own experiments.

However, that turned out to be incorrect. On Wednesday this week, the mayor revealed the cops had in actual fact handed over snaps of people to the private landlord for most of the busy King’s Cross area – which, it emerged last month, had set up facial-recognition cameras to snoop on thousands of Brits going about their day.

“The MPS [Metropolitan Police Service] has just now brought it to my attention that the original information they provided … was incorrect and they have in fact shared images related to facial recognition with King’s Cross Central Limited Partnership,” Khan said in an update, adding that this handover of photos ended sometime in 2018.

Source: Oops, wait, yeah, we did hand over photos for King’s Cross facial-recog CCTV, cops admit • The Register

Google has secret webpages that feed your personal data to advertisers, report to EU says

New evidence submitted for an investigation into Google’s collection of personal data in the European Union reportedly accuses the search giant of stealthily sending personal user data to advertisers. The company allegedly relays this information to advertisers using hidden webpages, allowing it to circumvent EU privacy regulations.

The evidence was submitted to Ireland’s Data Protection Commission, the main watchdog over the company in the European Union, by Johnny Ryan, chief policy officer for privacy-focused browser maker Brave, according to a Financial Times report Wednesday. Ryan reportedly said he discovered that Google used a tracker containing web browsing information, location and other data and sent it to ad companies via webpages that “showed no content,” according to the FT. This could allow companies buying ads to match a user’s Google profile and web activity to profiles from other companies, which is against Google’s own ad-buying rules, according to the FT.

In response, Google said Wednesday it doesn’t serve “personalized ads or send bid requests to bidders without user consent.”

The process laid out by Ryan could potentially be “cookie matching” or “cookie syncing,” an ad industry practice of matching ads across multiple sites based on a user’s browsing history. A Google developer page on cookie matching explains the process and the privacy principles the search engine follows, such as not allowing the info to be harvested by multiple companies.

The Data Protection Commission began an investigation into Google’s practices in May after it received a complaint from Brave that Google was allegedly violating the EU’s General Data Protection Regulation.

Source: Google has secret webpages that feed your personal data to advertisers, report says – CNET

Online Depression Tests Are Collecting and Sharing Your Data

This week, Privacy International published a report—Your mental health for sale—which explored how mental health websites handle user data. The digital rights nonprofit looked at 136 mental health webpages across Google France, Google Germany and the UK version of Google, according to the report. They chose websites based on advertised links and featured page search results for depression-related terms in French, German, and English, and also included the most visited sites according to web analytics service SimilarWeb.

According to the report, the organization used the open-source software webxray to identify third-party HTTP requests and cookies, analyzing the websites on July 8th of this year. The analysis found that 97.78 percent of the webpages had a third-party element, which might include cookies, JavaScript, or an image hosted on an outside server. Privacy International also found that the main reason for these third-party elements was advertising.

Webxray’s analysis found that 76.04 percent of the webpages had trackers for marketing purposes — 80.49 percent of the pages in France, 61.36 percent in Germany, and 86.27 percent in the UK. The third-party trackers included advertising services from Google, Facebook, and Amazon, with Google’s trackers the most prevalent, followed by Facebook’s and Amazon’s.

A deeper dive into a subset of these websites — the first three Google search results for “depression test” in each of the three countries — revealed some more specific and egregious ways in which these trackers traffic in some of our most intimate data. For instance, Privacy International found that some of the depression test websites stored users’ responses and shared them, along with their test results, with third parties. It also found that two depression test websites use Hotjar, an online feedback tool that can record what someone types and clicks on a webpage. It’s not difficult to imagine how such data — responses to a depression test — could be exploited.

Source: Online Depression Tests Are Collecting and Sharing Your Data

Mozilla says Firefox won’t defang ad blockers – unlike Google Chrome, which is steadily stripping away the 3rd-party tools that protect your privacy

On Tuesday, Mozilla said it is not planning to change the ad-and-content blocking capabilities of Firefox to match what Google is doing in Chrome.

Google’s plan to revise its browser extension APIs, known as Manifest v3, follows from the web giant’s recognition that many of its products and services can be abused by unscrupulous developers. The search king refers to its product security and privacy audit as Project Strobe, “a root-and-branch review of third-party developer access to your Google account and Android device data.”

In a Chrome extension, the manifest file (manifest.json) tells the browser which files and capabilities (APIs) will be used. Manifest v3, proposed last year and still being hammered out, will alter and limit the capabilities available to extensions.

Developers who created extensions under Manifest v2 may have to revise their code to keep it working with future versions of Chrome. That may not be practical or possible in all cases, though. The developer of uBlock Origin, Raymond Hill, has said his web-ad-and-content-blocking extension will break under Manifest v3. It’s not yet clear whether uBlock Origin can or will be adapted to the revised API.

The most significant change under Manifest v3 is the deprecation of the blocking webRequest API (except for enterprise users), which lets extensions intercept incoming and outgoing browser data, so that the traffic can be modified, redirected or blocked.

Firefox not following

“In its place, Google has proposed an API called declarativeNetRequest,” explains Caitlin Neiman, community manager for Mozilla Add-ons (extensions), in a blog post.

“This API impacts the capabilities of content blocking extensions by limiting the number of rules, as well as available filters and actions. These limitations negatively impact content blockers because modern content blockers are very sophisticated and employ layers of algorithms to not only detect and block ads, but to hide from the ad networks themselves.”
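To make the contrast concrete: under the blocking webRequest API an extension runs its own code on every request, whereas a declarativeNetRequest blocker must hand the browser a fixed list of JSON rules up front. A minimal sketch of one such rule under Google’s proposal (the tracker hostname here is a made-up example):

```json
[
  {
    "id": 1,
    "priority": 1,
    "action": { "type": "block" },
    "condition": {
      "urlFilter": "||tracker.example.com^",
      "resourceTypes": ["script", "image"]
    }
  }
]
```

Because the browser, not the extension, evaluates these declarations, a blocker can no longer run the layered matching algorithms Neiman describes — which is where the limits on rule counts, filters and actions bite.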

Mozilla offers Firefox developers the Web Extensions API, which is mostly compatible with the Chrome extensions platform and is supported by Chromium-based browsers Brave, Opera and Vivaldi. Those other three browser makers have said they intend to work around Google’s changes to the blocking webRequest API. Now, Mozilla says as much.

“We have no immediate plans to remove blocking webRequest and are working with add-on developers to gain a better understanding of how they use the APIs in question to help determine how to best support them,” said Neiman.

[…]

Google maintains, “We are not preventing the development of ad blockers or stopping users from blocking ads,” even as it acknowledges “these changes will require developers to update the way in which their extensions operate.”

Yet Google’s related web technology proposal two weeks ago to build a “privacy sandbox,” through a series of new technical specifications that would hinder anti-tracking mechanisms, has been dismissed as disingenuous “privacy gaslighting.”

On Friday, EFF staff technologist Bennett Cyphers lambasted the ad biz for its self-serving specs. “Google not only doubled down on its commitment to targeted advertising, but also made the laughable claim that blocking third-party cookies – by far the most common tracking technology on the Web, and Google’s tracking method of choice – will hurt user privacy,” he wrote in a blog post.

Source: Mozilla says Firefox won’t defang ad blockers – unlike a certain ad-giant browser • The Register

PowerShell 7 ups the telemetry but… hey… is that an off switch?

Microsoft emitted a fresh preview of command-line darling PowerShell 7 last night, highlighting some additional slurping – and how to shut it off.

PowerShell 7 Preview 3, which is built on .NET Core 3.0 Preview 8, is the latest step on the way to final release at the end of 2019 and a potential replacement for the venerable PowerShell 5.1.

The first preview dropped back in May and the gang has made solid progress since. This time around, the team has opted to switch on all experimental features of the command-line shell by default in order to get more feedback on whether those features are worth the extra effort to gain “stable” status.

[…]

there are a number of useful features, some targeted squarely at Windows (stripping away reasons to stay with PowerShell 7’s more Windows-focused ancestors) and others that simply make life easy for script fans. The ability to stick a -Parallel parameter to ForEach-Object in order to execute scriptblocks in parallel is a good example, as is a -ThrottleLimit parameter to keep the thread usage under control.
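The -Parallel/-ThrottleLimit pattern above is PowerShell-specific, but the underlying idea — run a small block per input item, with a cap on concurrent workers — has a rough Unix analog in xargs -P. A minimal sketch in plain shell (not PowerShell; the item names are made up):

```shell
# Print a line per item, running at most two workers at a time --
# roughly what `ForEach-Object -Parallel { ... } -ThrottleLimit 2` does.
printf '%s\n' one two three four |
  xargs -P 2 -I{} sh -c 'echo "processed {}"'
```

As with ForEach-Object -Parallel, output order is not guaranteed: whichever worker finishes first prints first.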

Preview 3 and Telemetry

However, it’s not all good news. The PowerShell team’s Steve Lee, with impressive openness, highlighted the extra telemetry PowerShell would be capturing with this release. Microsoft’s Sydney Smith provided further details and, perhaps more importantly for some users, explained how to turn the slurping off.

New data points being collected include counts of application types such as Cmdlets and Functions, hosted sessions and PowerShell starts by type (API vs Console).

[…]

for the benefit of those who get twitchy about the slurping of data, Smith highlighted the POWERSHELL_TELEMETRY_OPTOUT environment variable, which can be set to true, yes or 1 to stop PowerShell squirting anything back at Redmond’s servers.
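Concretely, the opt-out is just an environment variable — a minimal sketch for a POSIX shell, so that any pwsh launched from the session inherits it (per the article, true, yes or 1 all work):

```shell
# Opt out of PowerShell 7 telemetry for this session and its child processes.
export POWERSHELL_TELEMETRY_OPTOUT=1

# Any pwsh started from this shell will now skip sending telemetry.
echo "opt-out set to: $POWERSHELL_TELEMETRY_OPTOUT"
```

To make the setting stick across sessions, the same export line can go in a shell profile, or the variable can be set system-wide through the usual OS mechanisms.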

Source: Latest sneak peek at PowerShell 7 ups the telemetry but… hey… is that an off switch? • The Register