Radio gaga: Techies fear EU directive to stop RF device tinkering will do more harm than good

EU plans to ban the sale of user-moddable radio frequency devices – like phones and routers – have provoked widespread condemnation from across the political bloc.

The controversy centres on Article 3(3)(i) of the EU Radio Equipment Directive, which was passed into law back in 2014.

However, an EU working group is now about to define precisely which devices will be subject to the directive – and academics, researchers, individual “makers” and software companies are worried that their activities and business models will be outlawed.

Article 3(3)(i) states that RF gear sold in the EU must support “certain features in order to ensure that software can only be loaded into the radio equipment where the compliance of the combination of the radio equipment and software has been demonstrated”.

If the directive is implemented in its most restrictive form, it would become impossible to install third-party firmware on devices such as home routers.

Hauke Mehrtens of the Free Software Foundation Europe (FSFE) told The Register: “If the EU forces Wi-Fi router manufacturers to prevent their customers from installing their own software onto their devices this will cause great harm to the OpenWrt project, wireless community networks, innovative startups, computer network researchers and European citizens. This would increase the electronic waste, make it impossible for the user to fix security vulnerabilities by himself or the help of the community and block research which could improve the internet in the EU.”

Source: Radio gaga: Techies fear EU directive to stop RF device tinkering will do more harm than good • The Register

Oh dear, doesn’t this mean you don’t really own the stuff you buy?

Leaked Documents Show the U.S. Government Tracking Journalists and Immigration Advocates Through a Secret Database, having them detained at borders

One photojournalist said she was pulled into secondary inspections three times and asked questions about who she saw and photographed in Tijuana shelters. Another photojournalist said she spent 13 hours detained by Mexican authorities when she tried to cross the border into Mexico City. Eventually, she was denied entry into Mexico and sent back to the U.S.

These American photojournalists and attorneys said they suspected the U.S. government was monitoring them closely but until now, they couldn’t prove it.

Now, documents leaked to NBC 7 Investigates show their fears weren’t baseless. In fact, their own government had listed their names in a secret database of targets, where agents collected information on them. Some had alerts placed on their passports, keeping at least three photojournalists and an attorney from entering Mexico to work.

The documents were provided to NBC 7 by a Homeland Security source on the condition of anonymity, given the sensitive nature of what they were divulging.

The source said the documents, or screenshots, show a SharePoint application that was used by agents from Customs and Border Protection (CBP), Immigration and Customs Enforcement (ICE), the U.S. Border Patrol, Homeland Security Investigations, and some agents from the San Diego sector of the Federal Bureau of Investigation (FBI).

The intelligence gathering efforts were done under the umbrella of “Operation Secure Line,” the operation designated to monitor the migrant caravan, according to the source.

The documents list people who officials think should be targeted for screening at the border.

The individuals listed include ten journalists, seven of whom are U.S. citizens, a U.S. attorney, and 47 people from the U.S. and other countries, labeled as organizers, instigators or their roles “unknown.” The target list includes advocates from organizations like Border Angels and Pueblo Sin Fronteras.

NBC 7 Investigates is blurring the names and photos of individuals who haven’t given us permission to publish their information.

[…]

In addition to flagging the individuals for secondary screenings, the Homeland Security source told NBC 7 that the agents also created dossiers on each person listed.

“We are a criminal investigation agency, we’re not an intelligence agency,” the Homeland Security source told NBC 7 Investigates. “We can’t create dossiers on people and they’re creating dossiers. This is an abuse of the Border Search Authority.”

One dossier, shared with NBC 7, was on Nicole Ramos, the Refugee Director and attorney for Al Otro Lado, a law center for migrants and refugees in Tijuana, Mexico. The dossier included personal details on Ramos, including specific details about the car she drives, her mother’s name, and her work and travel history.

After NBC 7 shared the documents with Ramos, she said Al Otro Lado is seeking more information on why she and other attorneys at the law center have been targeted by border officials.

“The document appears to prove what we have assumed for some time, which is that we are on a law enforcement list designed to retaliate against human rights defenders who work with asylum seekers and who are critical of CBP practices that violate the rights of asylum seekers,” Ramos told NBC 7 by email.

In addition to the dossier on Ramos, a list of other dossier files created was shared with NBC 7. Two of the dossier files were labeled with the names of journalists but no further details were available. Those journalists were also listed as targets for secondary screenings.

Customs and Border Protection has the authority to pull anyone into secondary screenings, but the documents show the agency is increasingly targeting journalists, attorneys, and immigration advocates. Former counterterrorism officials say the agency should not be targeting individuals based on their profession.

Source: Leaked Documents Show the U.S. Government Tracking Journalists and Immigration Advocates Through a Secret Database – NBC 7 San Diego

When 2FA means sweet FA privacy: Facebook admits it slurps mobe numbers for more than just profile security

This time, the Silicon Valley giant has been caught red-handed using people’s cellphone numbers, provided exclusively for two-factor authentication, for targeted advertising and search – after it previously insinuated it wouldn’t do that.

Folks handing over their mobile numbers to protect their accounts from takeovers and hijackings thought the contact detail would be used for just that: security. Instead, Facebook is using the numbers to link netizens to other people, and target them with online ads.

For example, if someone you know – let’s call her Sarah – has given her number to Facebook for two-factor authentication purposes, and you allow the Facebook app to access your smartphone’s contacts book, and it sees Sarah’s number in there, it will offer to connect you two up, even though Sarah thought her number was being used for security only, and not for search. This is not a particularly healthy scenario, for instance, if you and Sarah are no longer, or never were, friends in real life, and yet Facebook wants to wire you up anyway.
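To make the Sarah scenario concrete, here is a minimal conceptual sketch (invented names and numbers, not Facebook’s actual code) of how a phone number collected purely for 2FA can double as a lookup key once an app has slurped your contacts book:

```python
# Conceptual sketch only: how a 2FA phone number becomes a matching key.
# All names and numbers below are made up for illustration.

def normalize(number: str) -> str:
    """Reduce a phone number to digits only so different formats compare equally."""
    return "".join(ch for ch in number if ch.isdigit())

# Numbers users supplied purely for two-factor authentication,
# keyed to the account that registered them.
two_factor_numbers = {
    normalize("(555) 010-1234"): "Sarah",
}

def suggest_connections(contact_book: list[str]) -> list[str]:
    """Return account holders whose 2FA number appears in an uploaded contacts book."""
    return [
        two_factor_numbers[normalize(n)]
        for n in contact_book
        if normalize(n) in two_factor_numbers
    ]

print(suggest_connections(["555.010.1234", "555-999-0000"]))  # ['Sarah']
```

The point of the sketch: nothing about the stored number records *why* it was collected, so once it sits in the same matching index as everything else, "security only" is a policy promise, not a technical barrier.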

Following online outcry over the weekend, a Facebook spokesperson told us today: “We appreciate the feedback we’ve received about these settings, and will take it into account.”

Source: When 2FA means sweet FA privacy: Facebook admits it slurps mobe numbers for more than just profile security • The Register

Anyone surprised much?

Apple’s Shazam for iOS Sheds 3rd Party SDKs. Keeps pumping your data through on Android.

Shazam, the song identification app Apple bought for $400M, recently released an update to its iOS app that got rid of all 3rd party SDKs the app was using except for one.

The SDKs that were removed include ad networks, analytics trackers, and even open-source utilities. Why, you ask? Because all of those SDKs leak usage data to 3rd parties one way or another, something Apple really really dislikes.

Here are all the SDKs that were uninstalled in the latest update:

AdMob
Bolts
DoubleClick
FB Ads
FB Analytics
FB Login
InMobi
IAS
Moat
MoPub

Right now, the app only has one 3rd-party SDK installed, and that’s HockeyApp, Microsoft’s version of TestFlight. It’s unclear why it’s still there, but we don’t expect it to stick around for too long.

Looking across Apple’s entire app portfolio it’s very uncommon to see 3rd party SDKs at all. Exceptions exist. One such example is Apple’s Support app which has the Adobe Analytics SDK installed.

Things Are Different on Android

Since Shazam is also available for Android, we expected to see the same behavior: a mass uninstall of 3rd-party SDKs. At first glance that seems to be the case, but not exactly.

Here are all the SDKs that were uninstalled in the last update:

AdColony
AdMob
Amazon Ads
Ads
FB Analytics
Gimbal
Google IMA
MoPub

Here are all the SDKs that are still installed in Shazam for Android:

Bolts
FB Analytics
Butter Knife
Crashlytics
Fabric
Firebase
Google Maps
OKHttp
Otto

On Android, Apple seems to be OK with leaking usage data to both Facebook, through the FB Analytics SDK, and Google, through Fabric and Google Maps, indicating Apple hasn’t built out its internal set of tools for Android.

It’s also worth noting that HockeyApp was removed from Shazam for Android more than a year ago.


Source: Shazam for iOS Sheds 3rd Party SDKs | App store Insights from Appfigures

Facebook receives personal health data from apps, even if you don’t have a FB account

Facebook receives highly personal information from apps that track your health and help you find a new home, testing by The Wall Street Journal found. Facebook can receive this data from certain apps even if the user does not have a Facebook account, according to the Journal.

Facebook has already been in hot water concerning issues of consent and user data.

Most recently, a TechCrunch report revealed in January that Facebook paid users as young as teenagers to install an app that would allow the company to collect all phone and web activity. Following the report, Apple revoked some developer privileges from Facebook, saying Facebook violated its terms by distributing the app through a program meant only for employees to test apps prior to release.

The new report said Facebook is able to receive data from a variety of apps. Of more than 70 popular apps tested, the Journal found at least 11 that sent potentially sensitive information to Facebook.

The apps included the period-tracking app Flo Period & Ovulation Tracker, which reportedly shared with Facebook when users were having their periods or when they indicated they were trying to get pregnant. Real estate app Realtor reportedly sent Facebook the listing information viewed by users, and the top heart-rate app on Apple’s iOS, Instant Heart Rate: HR Monitor, sent users’ heart rates to the company, the Journal’s testing found.

The apps reportedly send the data using Facebook’s software development kit, or SDK, which helps developers integrate certain features into their apps. Facebook’s SDK includes an analytics service that helps app developers understand their users’ trends. The Journal said developers who sent sensitive information to Facebook used “custom app events” to send data like ovulation times and homes that users had marked as favorites on some apps.
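For a sense of what a “custom app event” can look like in practice, here is a minimal, hypothetical sketch. The event and field names are invented, not Facebook’s actual schema, but they show how sensitive values travel as ordinary-looking analytics parameters:

```python
# Hypothetical sketch of an analytics "custom app event" payload.
# Event and field names are invented for illustration; this is not
# Facebook's real schema, just the general shape of SDK telemetry.
import json
import time

def build_custom_event(name: str, params: dict) -> str:
    """Serialize an app event the way an analytics SDK might before upload."""
    event = {
        "event_name": name,
        "logged_at": int(time.time()),
        "params": params,
    }
    return json.dumps(event)

# A health app logging what it considers a routine usage event:
payload = build_custom_event(
    "cycle_tracker_entry",
    {"goal": "trying_to_conceive", "phase": "ovulation"},
)
print(payload)
```

Nothing in the transport distinguishes a harmless button tap from an ovulation date; to the SDK they are both just a name and a bag of parameters, which is why "we prohibit sending us sensitive data" has to be enforced after the fact.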

A Facebook spokesperson told CNBC, “Sharing information across apps on your iPhone or Android device is how mobile advertising works and is industry standard practice. The issue is how apps use information for online advertising. We require app developers to be clear with their users about the information they are sharing with us, and we prohibit app developers from sending us sensitive data. We also take steps to detect and remove data that should not be shared with us.”

Source: Facebook receives personal health data from apps: WSJ

Massive Database Leak Gives Us a Window into China’s Digital Surveillance State

Earlier this month, security researcher Victor Gevers found and disclosed an exposed database live-tracking the locations of about 2.6 million residents of Xinjiang, China, offering a window into what a digital surveillance state looks like in the 21st century.

Xinjiang is China’s largest province, and home to China’s Uighurs, a Turkic minority group. Here, the Chinese government has implemented a testbed police state where an estimated 1 million individuals from these minority groups have been arbitrarily detained. Among the detainees are academics, writers, engineers, and relatives of Uighurs in exile. Many Uighurs abroad worry for their missing family members, who they haven’t heard from for several months and, in some cases, over a year.

Although relatively little news gets out of Xinjiang to the rest of the world, we’ve known for over a year that China has been testing facial-recognition tracking and alert systems across Xinjiang and mandating the collection of biometric data—including DNA samples, voice samples, fingerprints, and iris scans—from all residents between the ages of 12 and 65. Reports from the province in 2016 indicated that Xinjiang residents can be questioned over the use of mobile and Internet tools; just having WhatsApp or Skype installed on your phone is classified as “subversive behavior.” Since 2017, the authorities have instructed all Xinjiang mobile phone users to install a spyware app in order to “prevent [them] from accessing terrorist information.”

The prevailing evidence of mass detention centers and newly-erected surveillance systems shows that China has been pouring billions of dollars into physical and digital means of pervasive surveillance in Xinjiang and other regions. But it’s often unclear to what extent these projects operate as real, functional high-tech surveillance, and how much they are primarily intended as a sort of “security theater”: a public display of oppression and control to intimidate and silence dissent.

Now, this security leak shows just how extensively China is tracking its Xinjiang residents: how parts of that system work, and what parts don’t. It demonstrates that the surveillance is real, even as it raises questions about the competence of its operators.

A Brief Window into China’s Digital Police State

Earlier this month, Gevers discovered an insecure MongoDB database filled with records tracking the location and personal information of 2.6 million people located in the Xinjiang Uyghur Autonomous Region. The records include individuals’ national ID number, ethnicity, nationality, phone number, date of birth, home address, employer, and photos.

Over a period of 24 hours, 6.7 million individual GPS coordinates were streamed to and collected by the database, linking individuals to various public camera streams and identification checkpoints associated with location tags such as “hotel,” “mosque,” and “police station.” The GPS coordinates were all located within Xinjiang.

This database is owned by the company SenseNets, a private AI company advertising facial recognition and crowd analysis technologies.

A couple of days later, Gevers reported a second open database tracking the movement of millions of cars and pedestrians. When violations like jaywalking, speeding, or running a red light are detected, the camera takes a photo and pings a WeChat API, presumably to try to tie the event to an identity.

Database Exposed to Anyone with an Internet Connection for Half a Year

China may have a working surveillance program in Xinjiang, but it’s a shockingly insecure security state. Anyone with an Internet connection had access to this massive honeypot of information.
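How easy is "access" here? Finding an exposed service often starts with nothing fancier than a TCP connection to its default port (MongoDB’s is 27017, and older default configurations accepted connections with no authentication at all). A minimal sketch, with a placeholder address from a reserved documentation range:

```python
# Minimal sketch of how researchers discover Internet-exposed services:
# attempt a plain TCP connection to the service's default port.
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical probe (203.0.113.10 is a reserved documentation address):
# if port_open("203.0.113.10", 27017):
#     print("MongoDB port reachable -- next question: does it require auth?")
```

Tools like Shodan automate exactly this kind of sweep across the whole IPv4 space, which is why an unauthenticated database rarely stays undiscovered for long.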

Gevers also found evidence that these servers were previously accessed by other known global entities such as a Bitcoin ransomware actor, who had left behind entries in the database. To top it off, this server was also vulnerable to several known exploits.

In addition to this particular surveillance database, a Chinese cybersecurity firm revealed that at least 468 MongoDB servers had been exposed to the public Internet after Gevers and other security researchers started reporting them. Among these instances: databases containing detailed information about remote access consoles owned by China General Nuclear Power Group, and GPS coordinates of bike rentals.

A Model Surveillance State for China

China, like many other state actors, may simply be willing to tolerate sloppy engineering if its private contractors can reasonably claim to be delivering the goods. Last year, the government spent an extra $3 billion on security-related construction in Xinjiang, and the New York Times reported that China’s police planned to spend an additional $30 billion on surveillance in the future. Even poorly-executed surveillance is massively expensive, and Beijing is no doubt telling the people of Xinjiang that these investments are being made in the name of their own security. But the truth, revealed only through security failures and careful security research, tells a different story: China’s leaders seem to care little for the privacy, or the freedom, of millions of its citizens.

Source: Massive Database Leak Gives Us a Window into China’s Digital Surveillance State | Electronic Frontier Foundation

Samsung is loading McAfee antivirus software on smart TVs – which may be impossible to uninstall

Samsung is adding bloatware to its 2019 TVs because McAfee is paying it to do so. There is arguably no reason for Samsung to offer third-party antivirus software for an operating system that is developed in-house.

Partnering with software vendors is fairly common practice for large hardware manufacturers. Laptop makers frequently pre-install bloatware in return for some sizable payouts and smartphone OEMs are no different. Samsung is now installing McAfee antivirus software on its 2019 TV lineup.

Samsung is claiming something to the effect of wanting to protect users from malware. On the surface that makes sense, but Samsung is running its very own Tizen OS on all TVs. Instead of adding more junk to a TV, why not just improve the OS? The answer though is self-explanatory. Samsung would not receive a payout from McAfee if it did not install the unneeded software.

Officially, here is Samsung’s statement on the matter.

McAfee extended its contract to have McAfee Security for TV technology pre-installed on all Samsung Smart TVs produced in 2019. Along with being the market leader in the Smart TV category worldwide, Samsung is also the first company to pre-install security on these devices, underscoring its commitment to building security in from the start. McAfee Security for TV scans the apps that run on Samsung smart TVs to identify and remove malware.

Downloading and installing apps on most TVs is a tedious process that most users are not doing very frequently. Well known apps such as Netflix and Hulu come pre-installed on most TVs regardless of brand, making it unnecessary for most users to ever even look at what other apps are available.

It may not be a big deal to have extra bloatware on a TV, but it is undesirable and might burn a little more power for no actual benefit. If someone is going to take the time to target Tizen with malware, knowing that McAfee is pre-installed, there is little reason to believe they would not take the extra time to ensure detection does not happen.

Source: Samsung is loading McAfee antivirus software on smart TVs – TechSpot

China bans 23m from buying travel tickets as part of ‘social credit’ system.

China has blocked millions of “discredited” travellers from buying plane or train tickets as part of the country’s controversial “social credit” system aimed at improving the behaviour of citizens.

According to the National Public Credit Information Centre, Chinese courts banned would-be travellers from buying flights 17.5 million times by the end of 2018. Citizens placed on black lists for social credit offences were prevented from buying train tickets 5.5 million times. The report released last week said: “Once discredited, limited everywhere”.

The social credit system aims to incentivise “trustworthy” behaviour through penalties as well as rewards. According to a government document about the system dating from 2014, the aim is to “allow the trustworthy to roam everywhere under heaven while making it hard for the discredited to take a single step.”

Social credit offences range from not paying individual taxes or fines to spreading false information and taking drugs. More minor violations include using expired tickets, smoking on a train or not walking a dog on a leash.

[…]

According to the report, other penalties for individuals include being barred from buying insurance, real estate or investment products. Companies on the blacklist are banned from bidding on projects or issuing corporate bonds.

The report said authorities collected more than 14m data points of “untrustworthy conduct” last year, including scams, unpaid loans, false advertising and occupying reserved seats on a train.

Source: China bans 23m from buying travel tickets as part of ‘social credit’ system | World news | The Guardian

Surprise! Facebook Low-Balled the Percentage of young teens It Paid to Install Spyware – by a factor of 4!

In January, when news first broke that Facebook had been paying teens in gift cards to let it install what is, by definition, essentially spyware on their phones, it seemed like just another Tuesday. Had it been virtually any other company, the outrage would have been tenfold. After all, paying 13-year-olds to gain access to their mobile app usage and browser traffic is, on its face, an unconscionably creepy way for a business to gather intelligence about its competitors. But this shameless undertaking is now precisely the kind of dissolute conduct we’ve come to expect from the occupants of 1 Hacker Way.

Facebook’s moral turpitude aside, it’s now come to light that the company also initially underreported the percentage of teens that it had paid to become lab rats, while falsely stating that parental consent forms were required.

Citing responses from the company to questions posed by Sen. Mark Warner, TechCrunch reports that Facebook now claims “about 18 percent” of the people it convinced to download the “Facebook Research App” were teens. This, as opposed to the “5 percent” figure the company provided reporters over a month ago.

Source: Surprise! Facebook Low-Balled the Percentage of Teens It Paid to Install Spyware

As China frightens Europe’s data protectors, America does too with Cloud Act

A foreign power with possible unbridled access to Europe’s data is causing alarm in the region. No, it’s not China. It’s the United States.

As the US pushes ahead with the “Cloud Act” it enacted about a year ago, Europe is scrambling to curb its reach. Under the act, all US cloud service providers, from Microsoft and IBM to Amazon – when ordered – have to provide American authorities with data stored on their servers, regardless of where it’s housed. With those providers controlling much of the cloud market in Europe, the act could potentially give the US the right to access information on large swaths of the region’s people and companies.

The US says the act is aimed at aiding investigations. But some people are drawing parallels between the legislation and the National Intelligence Law that China put in place in 2017 requiring all its organisations and citizens to assist authorities with access to information. The Chinese law, which the US says is a tool for espionage, is cited by President Donald Trump’s administration as a reason to avoid doing business with companies like Huawei Technologies.

“I don’t mean to compare US and Chinese laws, because obviously they aren’t the same, but what we see is that on both sides, Chinese and American, there is clearly a push to have extraterritorial access to data,” said Ms Laure de la Raudiere, a French lawmaker who co-heads a parliamentary cyber-security and sovereignty group.

“This must be a wake up call for Europe to accelerate its own, sovereign offer in the data sector.”

Source: As Huawei frightens Europe’s data protectors, America does too, Europe News & Top Stories – The Straits Times

Some American Airlines In-Flight TVs Have Cameras In Them watching you, just like Singapore Airlines and Google Nest

A viral photo showing a camera in a Singapore Airlines in-flight TV display recently caused an uproar online. The image was retweeted hundreds of times, with many people expressing concern about the privacy implications. As it turns out, some seat-back screens in American Airlines’ premium economy class have them, too.

Sri Ray was aboard an American Airlines Boeing 777-200 flight to Tokyo in September 2018 when he noticed something strange: a camera embedded in the seat back of his entertainment system.

Courtesy of Sri Ray

“I am what one would call security paranoid,” said Ray, who was formerly a site reliability engineer at BuzzFeed. “I observe tech in day-to-day life and wonder how a malicious person can use it in bad ways. When I looked at the shiny new screens in the new premium economy cabin of AA, I noticed a small circle at the bottom. Upon closer inspection, it was definitely a camera.”

The cameras are also visible in this June 2017 review of the airline’s premium economy offering by the Points Guy, as well as this YouTube video by Business Traveller magazine.

American Airlines spokesperson Ross Feinstein confirmed to BuzzFeed News that cameras are present on some of the airlines’ in-flight entertainment systems, but said “they have never been activated, and American is not considering using them.” Feinstein added, “Cameras are a standard feature on many in-flight entertainment systems used by multiple airlines. Manufacturers of those systems have included cameras for possible future uses, such as hand gestures to control in-flight entertainment.”

Source: Some American Airlines In-Flight TVs Have Cameras In Them

Why does Singapore Airlines have an embedded camera looking at you on the inflight entertainment system? Just like the Google Nest spy, they say it’s, ummm, all OK, nothing to see here.

Given Singapore’s reputation for being an unabashed surveillance state, a passenger on a Singapore Airlines (SIA) flight could be forgiven for being a little paranoid.

Vitaly Kamluk, an information security expert and a high-ranking executive of cybersecurity company Kaspersky Lab, went on Twitter with concerns about an embedded camera in SIA’s inflight entertainment systems. He tagged SIA in his post on Sunday, asking the airline to clarify how the camera is being used.

SIA quickly responded, telling Kamluk that the cameras have been disabled, with no plans to use them in the future. While not all of their devices sport the camera, SIA said that some of its newer inflight entertainment systems come with cameras embedded in the hardware. Left unexplained was how the camera-equipped entertainment systems had come to be purchased in the first place.

In another tweet, SIA affirmed that the cameras were already built in by the original equipment manufacturers in newer inflight entertainment systems.

Kamluk recommended that it’s best to disable the cameras physically — with stickers, for example — to provide better peace of mind.

Could cameras built into inflight entertainment systems actually be used as a feature though? It’s possible, according to Panasonic Avionics. Back in 2017, the inflight entertainment device developer mentioned that it was studying how eye tracking can be used for a better passenger experience. Cameras can be used for identity recognition on planes, which in turn, would allow for in-flight biometric payment (much like Face ID on Apple devices) and personalized services.

It’s a long shot, but SIA could actually utilize such systems in the future. The camera’s already there, anyway.

Source: Cybersecurity expert questions existence of embedded camera on SIA’s inflight entertainment systems

The EU Just Finalized Copyright Legislation That breaks the Web, despite EU country opposition

The last time the EU tweaked its copyright laws was in 2001, so the idea of updating regulations in the information age made a lot of sense. But critics became alarmed by two sections of the bill: Article 11 (aka the “link tax”) and Article 13 (aka the “upload filters”). In 2018, critics like Tim Berners-Lee, the inventor of the world wide web, began to warn that these portions of the legislation would have dire and unintended consequences.

Lawmakers hope to wrestle away some of the power that has been gobbled up by tech giants like Facebook and redirect money to struggling copyright holders and publications. Unfortunately, the law may create an environment that’s only navigable by the richest and most powerful organizations. As Wikipedia founder Jimmy Wales put it, “This is a complete disaster.”

[…]

If you’ve read our previous explanations of the problems with the copyright directive, congratulations, you’re mostly caught up. The biggest issues remain the same, though Electronic Frontier Foundation adviser Cory Doctorow called this new version “the worst one yet.”

The final text of Article 11 still seeks to impose a “link tax” on platforms whenever they use a hyperlink to a news publication and quote a short snippet of text. Even a small business or individual running a monetized blog could face penalties for linking to an article and reproducing “single words or very short extracts” from the text without first acquiring a license.

The idea is to get a company like Google to cough up money that would be redirected to news outlets. But Google has said it may just shut down Google News in the EU, just as it did in Spain when similar legislation was implemented in that country. Publishers would lose the traffic boost they get from users being directed to their sites from Google News. And perhaps most importantly, smaller platforms and individuals will be discouraged from sharing and quoting information. According to Julia Reda, a member of European Parliament from Germany, “we will have to wait and see how courts interpret what ‘very short’ means in practice – until then, hyperlinking (with snippets) will be mired in legal uncertainty.”

Article 13 still requires platforms to do everything possible to prevent users from uploading copyrighted materials. We’ve become used to systems like YouTube’s that comply with takedown notices after a user has submitted content that doesn’t belong to them. But the EU wants platforms to stop it before it happens. It will be virtually impossible for even the biggest companies to comply with this directive.

Under the legislation, any platform will have to use upload filters to catch offending material. YouTube spends millions of dollars trying to perfect its system, and it’s still absolutely awful. The little guys will presumably have to license some sort of system if building one in-house isn’t an option. And as critics have emphasized from the beginning, paranoid webmasters will simply clamp down hard on anything that could possibly get themselves in trouble. Who would want to go to court to defend the fair use of a user-submitted Stranger Things meme?
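To see why filters over-block, consider the crudest possible one: a toy sketch (not any platform’s real system) that fingerprints known works by hash and rejects any matching upload. Real systems use perceptual fingerprints rather than exact hashes, but they share the same blind spot, which is the whole fair-use problem in miniature:

```python
# Toy upload filter, for illustration only: fingerprint known copyrighted
# files and block any upload that matches. The catalog entry is invented.
import hashlib

def fingerprint(data: bytes) -> str:
    """Exact content fingerprint; real filters use fuzzier perceptual matching."""
    return hashlib.sha256(data).hexdigest()

# Fingerprints supplied by rightsholders (hypothetical content):
licensed_catalog = {fingerprint(b"episode-1-master-audio")}

def allow_upload(data: bytes) -> bool:
    """Reject uploads whose fingerprint matches the rightsholder catalog."""
    return fingerprint(data) not in licensed_catalog

print(allow_upload(b"episode-1-master-audio"))  # False: blocked
print(allow_upload(b"my-own-recording"))        # True
```

Note what the filter cannot see: whether the match is a licensed clip, a quotation for review, or a parody. It only knows the bytes match, which is exactly why paranoid platforms default to blocking.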

The finalized text of Article 13 also stipulates that platforms will be held liable for any copyright violations unless they demonstrate that they made “best efforts to obtain an authorisation.” If something slips by and the platform shows it did everything it could to prevent it, a platform can be given a pass as long as it acts “expeditiously” to remove the offending content and make “best efforts to prevent” any future occurrences. That leaves a good bit of room for interpretation, but MEP Reda interprets the rules to mean the only safe solution is to do everything in their power to “preemptively buy licences for anything that users may possibly upload – that is: all copyrighted content in the world.”

Source: The EU Just Finalized Copyright Legislation That Rewrites the Rules of the Web

One click and you’re out: UK makes it an offence to view terrorist propaganda even once

It will be an offence to view terrorist material online just once – and could incur a prison sentence of up to 15 years – under new UK laws.

The Counter-Terrorism and Border Security Bill was granted Royal Assent yesterday, updating a previous Act and bringing new powers to law enforcement to tackle terrorism.

But a controversial inclusion was to update the offence of obtaining information “likely to be useful to a person committing or preparing an act of terrorism” so that it now covers viewing or streaming content online.

The rules as passed into law are also a tightening of proposals that had already been criticised by human rights groups and the independent reviewer of terrorism legislation, Max Hill.

Originally, the proposal had been to make it an offence for someone to view material three or more times – but the three strikes idea has been dropped from the final Act.

The law has also increased the maximum penalty for some types of preparatory terrorism offences, including the collection of terrorist information, to 15 years’ imprisonment.

[…]

In the summer, when the proposals were for multiple clicks, terrorism law reviewer Max Hill (no relation to your correspondent) told the Joint Committee on Human Rights that “the mesh of the net the government is creating… is far too fine and will catch far too many people”.

He also pointed out that the offence could come with a long sentence as the draft bill also extends the maximum penalties to 15 years’ imprisonment.

Corey Stoughton of rights campaigner Liberty echoed these concerns, and said the law should not cover academics and journalists, but should also exempt people who were viewing to gain a better understanding of the issues, or did so “out of foolishness or poor judgement”.

The UN’s special rapporteur on privacy, Joseph Cannataci, has also slammed the plans, saying the rule risked “pushing a bit too much towards thought crime”.

At an event during his visit to the UK, Cannataci said “the difference between forming the intention to do something and then actually carrying out the act is still fundamental to criminal law… here you’re saying: ‘You’ve read it three times so you must be doing something wrong’.”

The government said the law still provides for the existing “reasonable excuse defence”, which includes circumstances where a person “did not know, and had no reason to believe” the material accessed contained terrorist propaganda.

“Once a defendant has raised this defence, the burden of proof (to the criminal standard) to disprove this defence will rest with the prosecution,” the Home Office’s impact assessment said.

Source: One click and you’re out: UK makes it an offence to view terrorist propaganda even once

Many popular iPhone apps secretly record your screen without asking

Many major companies, like Air Canada, Hollister and Expedia, are recording every tap and swipe you make on their iPhone apps. In most cases you won’t even realize it. And they don’t need to ask for permission.

You can assume that most apps are collecting data on you. Some even monetize your data without your knowledge. But TechCrunch has found several popular iPhone apps, from hoteliers, travel sites, airlines, cell phone carriers, banks and financiers, that don’t ask or make it clear — if at all — that they know exactly how you’re using their apps.

Worse, even though these apps are meant to mask certain fields, some inadvertently expose sensitive data.

Apps like Abercrombie & Fitch, Hotels.com and Singapore Airlines also use Glassbox, a customer experience analytics firm, one of a handful of companies that allows developers to embed “session replay” technology into their apps. These session replays let app developers record the screen and play the recordings back to see how users interacted with the app, to figure out if something didn’t work or if there was an error. Every tap, button push and keyboard entry is recorded — effectively screenshotted — and sent back to the app developers.

Or, as Glassbox said in a recent tweet: “Imagine if your website or mobile app could see exactly what your customers do in real time, and why they did it?”

Source: Many popular iPhone apps secretly record your screen without asking | TechCrunch

The “Do Not Track” Setting Doesn’t Stop You from Being Tracked – by Google, Facebook and Twitter, among many more

Most browsers have a “Do Not Track” (DNT) setting that sends “a special signal to websites, analytics companies, ad networks, plug in providers, and other web services you encounter while browsing, to stop tracking your activity.” Sounds good, right? Sadly, it’s not effective. That’s because this Do Not Track setting is only a voluntary signal sent to websites, which websites don’t have to respect 😧.
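
Under the hood, “Do Not Track” is nothing more than an HTTP request header (`DNT: 1`) that the browser attaches to outgoing requests; whether anything happens is entirely up to the receiving server. A minimal sketch of a hypothetical server-side check (not any real site’s code) shows just how voluntary it is:

```python
def should_track(headers: dict) -> bool:
    """Return False when the visitor has enabled Do Not Track.

    Nothing compels a site to call a check like this -- honoring
    the header is purely voluntary, which is the whole problem.
    """
    return headers.get("DNT") != "1"

print(should_track({"DNT": "1"}))  # False: a polite site would skip its trackers
print(should_track({}))            # True: header absent, tracking proceeds
```

A site that never calls anything like `should_track` is fully within the rules – the signal arrives with every request and is simply ignored.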

Screenshot showing the Do Not Track setting in the Chrome browser

Nevertheless, a hefty portion of users across many browsers use the Do Not Track setting. While DNT is disabled by default in most major web browsers, in a survey we conducted of 503 U.S. adults in Nov 2018, 23.1% (±3.7) of respondents have consciously enabled the DNT setting on their desktop browsers. (Note: Apple is in the process of removing the DNT setting from Safari.)

Graph showing survey responses about the current status of the Do Not Track setting in respondent's primary desktop browser

We also looked at DNT usage on DuckDuckGo (across desktop and mobile browsers), finding that 24.4% of DuckDuckGo requests during a one day period came from browsers with the Do Not Track setting enabled. This is within the margin of error from the survey, thus lending more credibility to its results.

[…]

It can be alarming to realize that Do Not Track is about as foolproof as putting a sign on your front lawn that says “Please, don’t look into my house” while all of your blinds remain open. In fact, most major tech companies, including Google, Facebook, and Twitter, do not respect the Do Not Track setting when you visit and use their sites – a fact of which 77.3% (±3.6) of U.S. adults overall weren’t aware.

There is simply a huge discrepancy between the name of the setting and what it actually does. It’s inherently misleading. When educated about the true function and limitation of the DNT setting, 75.5% (±3.8) of U.S. adults say it’s “important” or “very important” that these companies “respect the Do Not Track signal when it is enabled.” So, in shocking news, when people say they don’t want to be tracked, they really don’t want to be tracked.

Pie chart showing 75.5 percent of respondents believe it's important that major tech companies respect the Do Not Track signal.

As a matter of fact, 71.9% (±3.9) of U.S. adults “somewhat favor” or “strongly favor” a federal regulation requiring companies to respect the Do Not Track signal.

Pie chart showing 71.9 percent of respondents would favor federal regulation requiring companies and their websites to respect the Do Not Track signal when enabled.

We agree and hope that governments will focus this year on efforts to enforce adherence to the Do Not Track setting when users enable it. As we’ve seen here and in our private browsing research, many people seek the most readily available (though often, unfortunately, ineffective) methods to protect their privacy.

Source: The “Do Not Track” Setting Doesn’t Stop You from Being Tracked

I’m a crime-fighter, says FamilyTreeDNA boss after being caught giving folks’ DNA data to FBI

Some would argue he has broken every ethical and moral rule of his profession, but genealogist Bennett Greenspan prefers to see himself as a crime-fighter.

“I spent many, many nights and many, many weekends thinking of what privacy and confidentiality would mean to a genealogist such as me,” the founder and president of FamilyTreeDNA says in a video that appeared online yesterday.

He continues: “I would never do anything to betray the trust of my customers and at the same time I felt it important to enable my customers to crowd source the catching of criminals.”

The video and surrounding press release went out at 10.30pm on Thursday. Funnily enough, just a couple of hours earlier, BuzzFeed offered a very different take on Greenspan’s philanthropy. “One Of The Biggest At-Home DNA Testing Companies Is Working With The FBI,” reads the headline.

Here’s how FamilyTreeDNA works, if you don’t know: among other features, you submit a sample of your DNA to the biz, and it will tell you if you’re related to someone else who has also submitted their genetic blueprint. It’s supposed to find previously unknown relatives, check parentage, and so on.

And so, by crowd sourcing, what Greenspan means is that he has reached an agreement with the FBI to allow the agency to create new profiles on his system using DNA collected from, say, corpses, crime scenes, and suspects. These can then be compared with genetic profiles in the company’s database to locate and track down relatives of suspects and victims, if not the suspects and victims themselves.

[…]

Those profiles have been built by customers who have paid between $79 and $199 to have their genetic material analyzed, in large part to understand their personal history and sometimes find connections to unknown family members. The service and others like it have become popular with adopted children who wish to locate birth parents but are prevented by law from being given the information.

However, there is a strong expectation that any company storing your most personal genetic information will apply strict confidentiality rules around it. You could argue that handing it over to the Feds doesn’t meet that standard. Greenspan would disagree.

“Greenspan created FamilyTreeDNA to help other family researchers solve problems and break down walls to connect the dots of their family trees,” reads a press release rushed out to head off, in vain, any terrible headlines.

“Without realizing it, he had inadvertently created a platform that, nearly two decades later, would help law enforcement agencies solve violent crimes faster than ever.”

Crime fighting, it seems, overrides all other ethical considerations.

Unfortunately for Greenspan, the rest of his industry doesn’t agree. The Future of Privacy Forum, an organization that maintains a list of consumer DNA testing companies that have signed up to its privacy guidelines, struck FamilyTreeDNA off its list today.

Its VP of policy, John Verdi, told Bloomberg that the deal between FamilyTreeDNA and the FBI was “deeply flawed.” He went on: “It’s out of line with industry best practices, it’s out of line with what leaders in the space do, and it’s out of line with consumer expectations.”

Source: I’m a crime-fighter, says FamilyTreeDNA boss after being caught giving folks’ DNA data to FBI • The Register

Officer jailed for using police database to access personal details of dozens of Tinder dates

A former long-serving police officer has been jailed for six months for illegally accessing the personal details of almost 100 women to determine if they were “suitable” dates.

Adrian Trevor Moore was a 28-year veteran of WA Police and was nominated as police officer of the year in 2011.

The former senior constable pleaded guilty to 180 charges of using a secure police database to access the information of 92 women he had met, or interacted with, on dating websites including Tinder and Plenty of Fish.

A third of the women were checked by Moore multiple times over several years.

Source: Officer jailed for using police database to access personal details of dozens of Tinder dates – ABC News (Australian Broadcasting Corporation)

Well, that’s what you get when you collect loads of personal data in a database.

Nest Secure has an unlisted disabled microphone (Edit: Google statement agrees!)

We received a statement from Google regarding the implication that the Nest Secure alarm system has had an unlisted microphone this whole time. It turns out that yes, the Nest Guard base station (the circular device with a keypad) does have a built-in microphone that is not listed on the official spec sheet at Nest’s site. The microphone has been in an inactive state since the release of the Nest Secure, according to Google.

This unlisted mic is how the Nest Guard will be able to operate as a pseudo-Google Home with just a software update, as detailed below.

[…]

Once the Google Assistant is enabled, the mic is always on but only listening for the hotwords “Ok Google” or “Hey Google”. Google only stores voice-based queries after it recognizes those hotwords. Voice data and query contents are sent to Google servers for analysis and storage in My Activity.

[…]

Original Article, February 4, 2019 (02:20 PM ET): Owners of the Nest Secure alarm system have been able to use voice commands to control their home security through Google Assistant for a while now. However, to issue those commands, they needed a separate Google Assistant-powered device, like a smartphone or a Google Home smart speaker.

The reason for this limitation has always seemed straightforward: according to the official tech specs, there’s no onboard microphone in the Nest Secure system.

Source: Nest Secure has an unlisted disabled microphone (Edit: Google statement)

That’s pretty damn creepy

Furious Apple revokes Facebook’s enterprise app cert after Zuck’s crew abused it to slurp private data

Facebook has yet again vowed to “do better” after it was caught secretly bypassing Apple’s privacy rules to pay adults and teenagers to install a data-slurping iOS app on their phones.

The increasingly worthless promises of the social media giant have fallen on deaf ears however: on Wednesday, Apple revoked the company’s enterprise certificate for its internal non-public apps, and one lawmaker vowed to reintroduce legislation that would make it illegal for Facebook to carry out such “research” in future.

The enterprise cert allows Facebook to sign iOS applications so they can be installed for internal use only, without having to go through the official App Store. It’s useful for intranet applications and in-house software development work.

Facebook, though, used the certificate to sign a market research iPhone application that folks could install on their devices. The app was previously kicked out of the official App Store for breaking Apple’s rules on privacy: Facebook had to use the cert to skirt Cupertino’s ban.

[…]

With its certificate revoked, Facebook employees are reporting that their legitimate internal apps, also signed by the cert, have stopped working. The consumer iOS Facebook app is unaffected.

Trust us, we’re Facebook!

At the heart of the issue is an app for iPhones called “Facebook Research” that the company advertised through third parties. The app is downloaded outside of the normal Apple App Store, and gives Facebook extraordinary access to a user’s phone, allowing the company to see pretty much everything that person does on their device. For that trove of personal data, Facebook paid an unknown number of users aged between 13 and 35 up to $20 a month in e-gifts.

Source: Furious Apple revokes Facebook’s enty app cert after Zuck’s crew abused it to slurp private data • The Register

A person familiar with the situation tells The Verge that early versions of Facebook, Instagram, Messenger, and other pre-release “dogfood” (beta) apps have stopped working, as have other employee apps, like one for transportation. Facebook is treating this as a critical problem internally, we’re told, as the affected apps simply don’t launch on employees’ phones anymore.

https://www.theverge.com/2019/1/30/18203551/apple-facebook-blocked-internal-ios-apps


Facebook pays teens to install VPN that spies on them

Desperate for data on its competitors, Facebook has been secretly paying people to install a “Facebook Research” VPN that lets the company suck in all of a user’s phone and web activity, similar to Facebook’s Onavo Protect app that Apple banned in June and that was removed in August. Facebook sidesteps the App Store and rewards teenagers and adults to download the Research app and give it root access to network traffic in what may be a violation of Apple policy so the social network can decrypt and analyze their phone activity, a TechCrunch investigation confirms. Facebook admitted to TechCrunch it was running the Research program to gather data on usage habits.

Since 2016, Facebook has been paying users ages 13 to 35 up to $20 per month plus referral fees to sell their privacy by installing the iOS or Android “Facebook Research” app. Facebook even asked users to screenshot their Amazon order history page. The program is administered through beta testing services Applause, BetaBound and uTest to cloak Facebook’s involvement, and is referred to in some documentation as “Project Atlas” — a fitting name for Facebook’s effort to map new trends and rivals around the globe.

Source: Facebook pays teens to install VPN that spies on them | TechCrunch

Apple: You can’t sue us for slowing down your iPhones because we’re like a contractor in your house

Apple is like a building contractor you hire to redo your kitchen, the tech giant has argued in an attempt to explain why it shouldn’t have to pay customers for slowing down their iPhones.

Addressing a bunch of people trying to sue it for damages, the iGiant’s lawyers told [PDF] a California court this month: “Plaintiffs are like homeowners who have let a building contractor into their homes to upgrade their kitchens, thus giving permission for the contractor to demolish and change parts of the houses.”

They went on: “Any claim that the contractor caused excessive damage in the process sounds in contract, not trespass.”

[…]

In this particular case in the US, the plaintiffs argue that Apple damaged their phones by effectively forcing them to install software updates that were intended to fix the battery issues. They may have “chosen” to install the updates by tapping on the relevant buttons, but they did so after reading misleading statements about what the updates were and what they would do, the lawsuit claims.

Nonsense! says Apple. You invited us into your house. We did some work. Sorry you don’t like the fact that we knocked down the wall to the lounge and installed a new air vent through the ceiling, but that’s just how it is.

[…]

But that’s not the only disturbing image to emerge from this lawsuit. When it was accused of damaging people’s property by ruining their batteries, Apple argued – successfully – in court that consumers can’t reasonably expect their iPhone batteries to last longer than a year, given that its battery warranty runs out after 12 months. That would likely come as news to iPhone owners who don’t typically expect to spend $1,000 on a phone and have it die on them a year later.

Call of Duty

Apple has also argued that it’s not under any obligation to tell people buying its products about how well its batteries and software function. An entire section of the company’s motion to dismiss this latest lawsuit is titled: “Apple had no duty to disclose the facts regarding software capability and battery capacity.”

Of course, the truth is that Apple knows that it screwed up – and screwed up badly. Which is why last year it offered replacement batteries for just $29 rather than the usual $79. Uptake of the “program” was so popular that analysts say it has accounted for a significant drop-off in new iPhone purchases.

[…]

Ultimately of course, Apple remains convinced that it’s not really your phone at all: Cupertino has been good enough to allow you to use its amazing technology, and all you had to do was pay it a relatively small amount of money.

We should all be grateful that Apple lets us use our iPhones at all. And if it wants to slow them down, it can damn well slow them down without having to tell you because you wouldn’t understand the reasons why even if it bothered to explain them to you.

Source: Apple: You can’t sue us for slowing down your iPhones because you, er, invited us into, uh, your home… we can explain • The Register

This kind of reasoning beggars belief

Google’s Sidewalk Labs Plans to Package and Sell Location Data on Millions of Cellphones

Most of the data collected by urban planners is messy, complex, and difficult to represent. It looks nothing like the smooth graphs and clean charts of city life in urban simulator games like “SimCity.” A new initiative from Sidewalk Labs, the city-building subsidiary of Google’s parent company Alphabet, has set out to change that.

The program, known as Replica, offers planning agencies the ability to model an entire city’s patterns of movement. Like “SimCity,” Replica’s “user-friendly” tool deploys statistical simulations to give a comprehensive view of how, when, and where people travel in urban areas. It’s an appealing prospect for planners making critical decisions about transportation and land use. In recent months, transportation authorities in Kansas City, Portland, and the Chicago area have signed up to glean its insights. The only catch: They’re not completely sure where the data is coming from.

Typical urban planners rely on processes like surveys and trip counters that are often time-consuming, labor-intensive, and outdated. Replica, instead, uses real-time mobile location data. As Nick Bowden of Sidewalk Labs has explained, “Replica provides a full set of baseline travel measures that are very difficult to gather and maintain today, including the total number of people on a highway or local street network, what mode they’re using (car, transit, bike, or foot), and their trip purpose (commuting to work, going shopping, heading to school).”

To make these measurements, the program gathers and de-identifies the location of cellphone users, which it obtains from unspecified third-party vendors. It then models this anonymized data in simulations — creating a synthetic population that faithfully replicates a city’s real-world patterns but that “obscures the real-world travel habits of individual people,” as Bowden told The Intercept.

The program comes at a time of growing unease with how tech companies use and share our personal data — and raises new questions about Google’s encroachment on the physical world.


Last month, the New York Times revealed how sensitive location data is harvested by third parties from our smartphones — often with weak or nonexistent consent provisions. A Motherboard investigation in early January further demonstrated how cell companies sell our locations to stalkers and bounty hunters willing to pay the price.

For some, the Google sibling’s plans to gather and commodify real-time location data from millions of cellphones add to these concerns. “The privacy concerns are pretty extreme,” Ben Green, an urban technology expert and author of “The Smart Enough City,” wrote in an email to The Intercept. “Mobile phone location data is extremely sensitive.” These privacy concerns have been far from theoretical. An Associated Press investigation showed that Google’s apps and website track people even after they have disabled the location history on their phones. Quartz found that Google was tracking Android users by collecting the addresses of nearby cellphone towers even if all location services were turned off. The company has also been caught using its Street View vehicles to collect the Wi-Fi location data from phones and computers.

This is why Sidewalk Labs has instituted significant protections to safeguard privacy, before it even begins creating a synthetic population. Any location data that Sidewalk Labs receives is already de-identified (using methods such as aggregation, differential privacy techniques, or outright removal of unique behaviors). Bowden explained that the data obtained by Replica does not include a device’s unique identifiers, which can be used to uncover someone’s unique identity.

However, some urban planners and technologists, while emphasizing the elegance and novelty of the program’s concept, remain skeptical about these privacy protections, asking how Sidewalk Labs defines personally identifiable information. Tamir Israel, a staff lawyer at the Canadian Internet Policy & Public Interest Clinic, warns that re-identification is a rapidly moving target. If Sidewalk Labs has access to people’s unique paths of movement prior to making its synthetic models, wouldn’t it be possible to figure out who they are, based on where they go to sleep or work? “We see a lot of companies erring on the side of collecting it and doing coarse de-identifications, even though, more than any other type of data, location data has been shown to be highly re-identifiable,” he added. “It’s obvious what home people leave and return to every night and what office they stop at every day from 9 to 5 p.m.” A landmark study uncovered the extent to which people could be re-identified from seemingly-anonymous data using just four time-stamped data points of where they’ve previously been.
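
The re-identification risk is easy to demonstrate with a toy simulation. All numbers below are invented, and real mobility traces are far more distinctive than uniform noise, so this toy if anything understates the risk: give each synthetic user a handful of (location, hour) points and check how often just four known points pin down a single user.

```python
import random

random.seed(42)  # deterministic, purely illustrative run

# 500 synthetic users, each with 20 (cell_id, hour) points drawn
# from 50 locations x 24 hours of the day.
users = {
    uid: {(random.randrange(50), random.randrange(24)) for _ in range(20)}
    for uid in range(500)
}

def uniquely_identified(uid: int, k: int = 4) -> bool:
    """Do k known points from this user's trace single them out?"""
    known = set(random.sample(sorted(users[uid]), k))
    matches = [u for u, trace in users.items() if known <= trace]
    return matches == [uid]

unique = sum(uniquely_identified(uid) for uid in users)
print(f"{unique / len(users):.0%} of users pinned down by 4 points")
```

Even with traces drawn at random, almost every user is unique on four points; real people, who sleep in the same place every night and work the same hours, are easier still.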

Source: Google’s Sidewalk Labs Plans to Package and Sell Location Data on Millions of Cellphones

Firefox cracks down on creepy web trackers, holds supercookies over fire whilst Chrome kills ad blockers

The Mozilla Foundation has announced its intent to reduce the ability of websites and other online services to track users of its Firefox browser around the internet.

At this stage, Moz’s actions are baby steps. In support of its decision in late 2018 to reduce the amount of tracking it permits, the organisation has now published a tracking policy to tell people what it will block.

Moz said the focus of the policy is to bring the curtain down on tracking techniques that “cannot be meaningfully understood or controlled by users”.

Notoriously intrusive tracking techniques allow users to be followed and profiled around the web. Facebook planting trackers wherever a site has a “Like” button is a good example. A user without a Facebook account can still be tracked as a unique individual as they visit different news sites.

Mozilla’s policy said these “stateful identifiers are often used by third parties to associate browsing across multiple websites with the same user and to build profiles of those users, in violation of the user’s expectation”. So, out they go.
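
The mechanics Mozilla is objecting to can be sketched in a few lines. This is a toy model with invented names and sites, not any vendor’s actual code: a third-party widget embedded on many sites receives the same cookie on every page view, so one stateful identifier quietly stitches visits to unrelated sites into a single profile.

```python
from collections import defaultdict

class EmbeddedWidgetTracker:
    """Toy third-party tracker (think: a 'Like' button served from
    one domain but embedded across many publisher sites)."""

    def __init__(self):
        self._next_id = 0
        self.profiles = defaultdict(list)    # cookie -> pages seen

    def request(self, cookie, page):
        """One call per page view that embeds the widget."""
        if cookie is None:                   # first visit anywhere:
            cookie = f"uid-{self._next_id}"  # set a stateful identifier
            self._next_id += 1
        self.profiles[cookie].append(page)   # the profile grows silently
        return cookie

tracker = EmbeddedWidgetTracker()
c = tracker.request(None, "news.example/politics")
c = tracker.request(c, "shop.example/shoes")
c = tracker.request(c, "health.example/symptoms")
print(tracker.profiles[c])  # one identifier links three unrelated sites
```

Blocking the third-party cookie – as Firefox’s policy proposes – breaks the `if cookie is None` branch: every site visit looks like a fresh visitor, and no cross-site profile accumulates.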

Source: Mozilla security policy cracks down on creepy web trackers, holds supercookies over fire • The Register

I’m pretty sure which browser you should be using