Why does Singapore Airlines have an embedded camera looking at you from the inflight entertainment system? Just like with the Google Nest spy, they say it’s, ummm, all OK, nothing to see here.

Given Singapore’s reputation for being an unabashed surveillance state, a passenger on a Singapore Airlines (SIA) flight could be forgiven for being a little paranoid.

Vitaly Kamluk, an information security expert and a high-ranking executive of cybersecurity company Kaspersky Lab, went on Twitter with concerns about an embedded camera in SIA’s inflight entertainment systems. He tagged SIA in his post on Sunday, asking the airline to clarify how the camera is being used.

SIA quickly responded, telling Kamluk that the cameras have been disabled, with no plans to use them in the future. While not all of its devices sport the camera, SIA said that some of its newer inflight entertainment systems come with cameras embedded in the hardware. Left unexplained was how the camera-equipped entertainment systems had come to be purchased in the first place.

In another tweet, SIA affirmed that the cameras were already built in by the original equipment manufacturers in newer inflight entertainment systems.

Kamluk recommended disabling the cameras physically — with stickers, for example — for better peace of mind.

Could cameras built into inflight entertainment systems actually be used as a feature though? It’s possible, according to Panasonic Avionics. Back in 2017, the inflight entertainment device developer mentioned that it was studying how eye tracking could be used to improve the passenger experience. Cameras could also be used for identity recognition on planes, which, in turn, would allow for in-flight biometric payment (much like Face ID on Apple devices) and personalized services.

It’s a long shot, but SIA could actually utilize such systems in the future. The camera’s already there, anyway.

Source: Cybersecurity expert questions existence of embedded camera on SIA’s inflight entertainment systems

The EU Just Finalized Copyright Legislation That Breaks the Web, Despite EU Country Opposition

The last time the EU tweaked its copyright laws was in 2001, so the idea of updating regulations in the information age made a lot of sense. But critics became alarmed by two sections of the bill: Article 11 (aka the “link tax”) and Article 13 (aka the “upload filters”). In 2018, critics like Tim Berners-Lee, the inventor of the world wide web, began to warn that these portions of the legislation would have dire and unintended consequences.

Lawmakers hope to wrestle away some of the power that has been gobbled up by tech giants like Facebook and redirect money to struggling copyright holders and publications. Unfortunately, the law may create an environment that’s only navigable by the richest and most powerful organizations. As Wikipedia founder Jimmy Wales put it, “This is a complete disaster.”

[…]

If you’ve read our previous explanations of the problems with the copyright directive, congratulations, you’re mostly caught up. The biggest issues remain the same, though Electronic Frontier Foundation adviser Cory Doctorow called this new version “the worst one yet.”

The final text of Article 11 still seeks to impose a “link tax” on platforms whenever they use a hyperlink to a news publication and quote a short snippet of text. Even a small business or individual running a monetized blog could face penalties for linking to an article and reproducing “single words or very short extracts” from the text without first acquiring a license.

The idea is to get a company like Google to cough up money that would be redirected to news outlets. But Google has said it may just shut down Google News in the EU, just as it did in Spain when similar legislation was implemented in that country. Publishers would lose the traffic boost they get from users being directed to their sites from Google News. And perhaps most importantly, smaller platforms and individuals will be discouraged from sharing and quoting information. According to Julia Reda, a member of European Parliament from Germany, “we will have to wait and see how courts interpret what ‘very short’ means in practice – until then, hyperlinking (with snippets) will be mired in legal uncertainty.”

Article 13 still requires platforms to do everything possible to prevent users from uploading copyrighted materials. We’ve become used to systems like YouTube’s that comply with takedown notices after a user has submitted content that doesn’t belong to them. But the EU wants platforms to stop it before it happens. It will be virtually impossible for even the biggest companies to comply with this directive.

Under the legislation, any platform will have to use upload filters to catch offending material. YouTube spends millions of dollars trying to perfect its system, and it’s still absolutely awful. The little guys will presumably have to license some sort of system if building one in-house isn’t an option. And as critics have emphasized from the beginning, paranoid webmasters will simply clamp down hard on anything that could possibly get them in trouble. Who would want to go to court to defend the fair use of a user-submitted Stranger Things meme?
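
To make the scale of the problem concrete, here is a toy upload filter in Python, assuming the simplest possible approach of exact hash matching (the blocklist value is invented). Real filters have to match re-encoded and edited media, which is orders of magnitude harder:

```python
import hashlib

# Hypothetical blocklist: SHA-256 digests of files already flagged as
# copyrighted. A real filter would need a licensed fingerprint database.
BLOCKED_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def allow_upload(data: bytes) -> bool:
    """Reject uploads whose exact hash appears on the blocklist.

    This is the weakest possible filter: re-encoding a video, cropping an
    image, or flipping a single bit produces a new hash and slips through.
    Matching transformed media needs perceptual fingerprinting (what
    YouTube's ContentID does), which is exactly what makes compliance so
    expensive for the little guys.
    """
    return hashlib.sha256(data).hexdigest() not in BLOCKED_HASHES

print(allow_upload(b"user-submitted meme"))  # True unless its hash is listed
```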

The finalized text of Article 13 also stipulates that platforms will be held liable for any copyright violations unless they demonstrate that they made “best efforts to obtain an authorisation.” If something slips by and the platform shows it did everything it could to prevent it, a platform can be given a pass as long as it acts “expeditiously” to remove the offending content and make “best efforts to prevent” any future occurrences. That leaves a good bit of room for interpretation, but MEP Reda interprets the rules to mean the only safe solution is to do everything in their power to “preemptively buy licences for anything that users may possibly upload – that is: all copyrighted content in the world.”

Source: The EU Just Finalized Copyright Legislation That Rewrites the Rules of the Web

One click and you’re out: UK makes it an offence to view terrorist propaganda even once

Viewing terrorist material online just once will be an offence – one that could incur a prison sentence of up to 15 years – under new UK laws.

The Counter-Terrorism and Border Security Bill was granted Royal Assent yesterday, updating a previous Act and bringing new powers to law enforcement to tackle terrorism.

But a controversial inclusion was to update the offence of obtaining information “likely to be useful to a person committing or preparing an act of terrorism” so that it now covers viewing or streaming content online.

The rules as passed into law are also a tightening of proposals that had already been criticised by human rights groups and the independent reviewer of terrorism legislation, Max Hill.

Originally, the proposal had been to make it an offence for someone to view material three or more times – but the three strikes idea has been dropped from the final Act.

The law has also increased the maximum penalty for some types of preparatory terrorism offences, including the collection of terrorist information, to 15 years’ imprisonment.

[…]

In the summer, when the proposals were for multiple clicks, terrorism law reviewer Max Hill (no relation to your correspondent) told the Joint Committee on Human Rights that “the mesh of the net the government is creating… is far too fine and will catch far too many people”.

He also pointed out that the offence could come with a long sentence as the draft bill also extends the maximum penalties to 15 years’ imprisonment.

Corey Stoughton of rights campaigner Liberty echoed these concerns, saying the law should exempt not only academics and journalists but also people who viewed material to gain a better understanding of the issues, or who did so “out of foolishness or poor judgement”.

The UN’s special rapporteur on privacy, Joseph Cannataci, has also slammed the plans, saying the rule risked “pushing a bit too much towards thought crime”.

At an event during his visit to the UK, Cannataci said “the difference between forming the intention to do something and then actually carrying out the act is still fundamental to criminal law… here you’re saying: ‘You’ve read it three times so you must be doing something wrong’.”

The government said the law still provides for the existing “reasonable excuse defence”, which includes circumstances where a person “did not know, and had no reason to believe” the material accessed contained terrorist propaganda.

“Once a defendant has raised this defence, the burden of proof (to the criminal standard) to disprove this defence will rest with the prosecution,” the Home Office’s impact assessment said.

Source: One click and you’re out: UK makes it an offence to view terrorist propaganda even once

Many popular iPhone apps secretly record your screen without asking

Many major companies, like Air Canada, Hollister and Expedia, are recording every tap and swipe you make on their iPhone apps. In most cases you won’t even realize it. And they don’t need to ask for permission.

You can assume that most apps are collecting data on you. Some even monetize your data without your knowledge. But TechCrunch has found several popular iPhone apps, from hoteliers, travel sites, airlines, cell phone carriers, banks and financiers, that don’t ask or make it clear — if at all — that they know exactly how you’re using their apps.

Worse, even though these apps are meant to mask certain fields, some inadvertently expose sensitive data.

Apps like those from Abercrombie & Fitch, Hotels.com and Singapore Airlines also use Glassbox, a customer experience analytics firm, one of a handful of companies that allow developers to embed “session replay” technology into their apps. These session replays let app developers record the screen and play the recordings back to see how users interacted with the app, to figure out if something didn’t work or if there was an error. Every tap, button push and keyboard entry is recorded — effectively screenshotted — and sent back to the app developers.

Or, as Glassbox said in a recent tweet: “Imagine if your website or mobile app could see exactly what your customers do in real time, and why they did it?”
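
For a concrete sense of what session replay amounts to, here is a minimal sketch in Python. Glassbox’s actual SDK is proprietary and runs inside iOS apps; the event structure and the list of fields to mask below are invented for illustration:

```python
import json
import time

# Fields a session-replay SDK is supposed to mask before upload.
# TechCrunch found that some apps failed to configure this correctly.
SENSITIVE_FIELDS = {"password", "card_number", "passport_number"}

def record_event(event_type, field=None, value=""):
    """Capture one UI interaction the way a session-replay SDK might."""
    if field in SENSITIVE_FIELDS:
        value = "*" * len(value)  # mask before the event leaves the device
    return {"t": time.time(), "type": event_type, "field": field, "value": value}

session = [
    record_event("tap"),
    record_event("keyboard", field="email", value="jane@example.com"),
    record_event("keyboard", field="card_number", value="4111111111111111"),
]
# The recorded session is then shipped to the analytics vendor for replay.
print(json.dumps(session, indent=2))
```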

Source: Many popular iPhone apps secretly record your screen without asking | TechCrunch

The “Do Not Track” Setting Doesn’t Stop You from Being Tracked – by Google, Facebook and Twitter, among many more

Most browsers have a “Do Not Track” (DNT) setting that sends “a special signal to websites, analytics companies, ad networks, plug in providers, and other web services you encounter while browsing, to stop tracking your activity.” Sounds good, right? Sadly, it’s not effective. That’s because this Do Not Track setting is only a voluntary signal sent to websites, which websites don’t have to respect 😧.
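
Technically, all the setting does is add a `DNT: 1` header to every HTTP request; compliance is entirely at the server’s discretion. A minimal sketch (Python standard library) of a server that chooses to honour it:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The browser sends "DNT: 1" when the setting is enabled. Nothing
        # forces a site to read this header, let alone act on it.
        if self.headers.get("DNT") == "1":
            body = b"DNT received: tracking disabled (voluntarily)."
        else:
            body = b"No DNT signal: tracking as usual."
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), Handler).serve_forever()
```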

[Screenshot: the Do Not Track setting in the Chrome browser]

Nevertheless, a hefty portion of users across many browsers use the Do Not Track setting. While DNT is disabled by default in most major web browsers, in a survey we conducted of 503 U.S. adults in Nov 2018, 23.1% (±3.7) of respondents have consciously enabled the DNT setting on their desktop browsers. (Note: Apple is in the process of removing the DNT setting from Safari.)

[Graph: survey responses on the current status of the Do Not Track setting in respondents’ primary desktop browser]

We also looked at DNT usage on DuckDuckGo (across desktop and mobile browsers), finding that 24.4% of DuckDuckGo requests during a one day period came from browsers with the Do Not Track setting enabled. This is within the margin of error from the survey, thus lending more credibility to its results.

[…]

It can be alarming to realize that Do Not Track is about as foolproof as putting a sign on your front lawn that says “Please, don’t look into my house” while all of your blinds remain open. In fact, most major tech companies, including Google, Facebook, and Twitter, do not respect the Do Not Track setting when you visit and use their sites – a fact of which 77.3% (±3.6) of U.S. adults overall weren’t aware.

There is simply a huge discrepancy between the name of the setting and what it actually does. It’s inherently misleading. When educated about the true function and limitation of the DNT setting, 75.5% (±3.8) of U.S. adults say it’s “important” or “very important” that these companies “respect the Do Not Track signal when it is enabled.” So, in shocking news, when people say they don’t want to be tracked, they really don’t want to be tracked.

[Pie chart: 75.5% of respondents believe it’s important that major tech companies respect the Do Not Track signal]

As a matter of fact, 71.9% (±3.9) of U.S. adults “somewhat favor” or “strongly favor” a federal regulation requiring companies to respect the Do Not Track signal.

[Pie chart: 71.9% of respondents would favor federal regulation requiring companies and their websites to respect the Do Not Track signal when enabled]

We agree and hope that governments will focus this year on efforts to enforce adherence to the Do Not Track setting when users enable it. As we’ve seen here and in our private browsing research, many people seek the most readily available (though often, unfortunately, ineffective) methods to protect their privacy.

Source: The “Do Not Track” Setting Doesn’t Stop You from Being Tracked

I’m a crime-fighter, says FamilyTreeDNA boss after being caught giving folks’ DNA data to FBI

Some would argue he has broken every ethical and moral rule of his profession, but genealogist Bennett Greenspan prefers to see himself as a crime-fighter.

“I spent many, many nights and many, many weekends thinking of what privacy and confidentiality would mean to a genealogist such as me,” the founder and president of FamilyTreeDNA says in a video that appeared online yesterday.

He continues: “I would never do anything to betray the trust of my customers and at the same time I felt it important to enable my customers to crowd source the catching of criminals.”

The video and surrounding press release went out at 10.30pm on Thursday. Funnily enough, just a couple of hours earlier, BuzzFeed offered a very different take on Greenspan’s philanthropy. “One Of The Biggest At-Home DNA Testing Companies Is Working With The FBI,” reads the headline.

Here’s how FamilyTreeDNA works, if you don’t know: among other features, you submit a sample of your DNA to the biz, and it will tell you if you’re related to someone else who has also submitted their genetic blueprint. It’s supposed to find previously unknown relatives, check parentage, and so on.

And so, by crowd sourcing, what Greenspan means is that he has reached an agreement with the FBI to allow the agency to create new profiles on his system using DNA collected from, say, corpses, crime scenes, and suspects. These can then be compared with genetic profiles in the company’s database to locate and track down relatives of suspects and victims, if not the suspects and victims themselves.
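
To see why relative-matching makes this so powerful for investigators, here is a toy profile comparison in Python. Real services estimate kinship from shared DNA segments across hundreds of thousands of markers; the four-marker profiles below are invented:

```python
def shared_fraction(profile_a, profile_b):
    """Fraction of common markers on which two profiles agree.

    A crude stand-in for kinship estimation: close relatives share far
    more of their genotypes than strangers do, so even a partial match
    points investigators at a family, and ordinary genealogy narrows it
    down from there.
    """
    common = set(profile_a) & set(profile_b)
    if not common:
        return 0.0
    matches = sum(profile_a[m] == profile_b[m] for m in common)
    return matches / len(common)

crime_scene = {"rs1": "AG", "rs2": "CC", "rs3": "TT", "rs4": "AG"}
customer    = {"rs1": "AG", "rs2": "CT", "rs3": "TT", "rs4": "AG"}
print(shared_fraction(crime_scene, customer))  # 0.75: plausibly a relative
```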

[…]

Those profiles have been built by customers who have paid between $79 and $199 to have their genetic material analyzed, in large part to understand their personal history and sometimes find connections to unknown family members. The service and others like it have become popular with adopted children who wish to locate birth parents but are prevented by law from being given the information.

However, there is a strong expectation that any company storing your most personal genetic information will apply strict confidentiality rules around it. You could argue that handing it over to the Feds doesn’t meet that standard. Greenspan would disagree.

“Greenspan created FamilyTreeDNA to help other family researchers solve problems and break down walls to connect the dots of their family trees,” reads a press release rushed out to head off, in vain, any terrible headlines.

“Without realizing it, he had inadvertently created a platform that, nearly two decades later, would help law enforcement agencies solve violent crimes faster than ever.”

Crime fighting, it seems, overrides all other ethical considerations.

Unfortunately for Greenspan, the rest of his industry doesn’t agree. The Future of Privacy Forum, an organization that maintains a list of consumer DNA testing companies that have signed up to its privacy guidelines, struck FamilyTreeDNA off its list today.

Its VP of policy, John Verdi, told Bloomberg that the deal between FamilyTreeDNA and the FBI was “deeply flawed.” He went on: “It’s out of line with industry best practices, it’s out of line with what leaders in the space do, and it’s out of line with consumer expectations.”

Source: I’m a crime-fighter, says FamilyTreeDNA boss after being caught giving folks’ DNA data to FBI • The Register

Officer jailed for using police database to access personal details of dozens of Tinder dates

A former long-serving police officer has been jailed for six months for illegally accessing the personal details of almost 100 women to determine if they were “suitable” dates.

Adrian Trevor Moore was a 28-year veteran of WA Police and was nominated as police officer of the year in 2011.

The former senior constable pleaded guilty to 180 charges of using a secure police database to access the information of 92 women he had met, or interacted with, on dating websites including Tinder and Plenty of Fish.

A third of the women were checked by Moore multiple times over several years.

Source: Officer jailed for using police database to access personal details of dozens of Tinder dates – ABC News (Australian Broadcasting Corporation)

Well, that’s what you get when you collect loads of personal data in a database.

Nest Secure has an unlisted disabled microphone (Edit: Google statement agrees!)

We received a statement from Google regarding the implication that the Nest Secure alarm system has had an unlisted microphone this whole time. It turns out that yes, the Nest Guard base system (the circular device with a keypad above) does have a built-in microphone that is not listed on the official spec sheet at Nest’s site. The microphone has been in an inactive state since the release of the Nest Secure, according to Google.

This unlisted mic is how the Nest Guard will be able to operate as a pseudo-Google Home with just a software update, as detailed below.

[…]

Once the Google Assistant is enabled, the mic is always on but only listening for the hotwords “Ok Google” or “Hey Google”. Google only stores voice-based queries after it recognizes those hotwords. Voice data and query contents are sent to Google servers for analysis and storage in My Activity.
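
In outline, hotword gating looks something like the loop below. This is a sketch of the general technique, not Google’s implementation; `frames` and `transcribe` are stand-ins for the device’s audio input and its small on-device recognizer:

```python
import collections

HOTWORDS = ("ok google", "hey google")
PRE_ROLL_FRAMES = 50  # keep roughly a second of audio in memory

def wait_for_hotword(frames, transcribe):
    """Discard audio continuously until a hotword is recognised.

    Audio cycles through a small ring buffer and is overwritten; nothing
    is stored or transmitted until `transcribe` (the local recognizer)
    spots a hotword. Only the query that follows leaves the device.
    """
    ring = collections.deque(maxlen=PRE_ROLL_FRAMES)
    for frame in frames:
        ring.append(frame)  # older frames fall off the end, unstored
        if any(h in transcribe(list(ring)) for h in HOTWORDS):
            return list(ring)  # from here on, audio is kept and uploaded
    return None
```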

[…]

Original Article, February 4, 2019 (02:20 PM ET): Owners of the Nest Secure alarm system have been able to use voice commands to control their home security through Google Assistant for a while now. However, to issue those commands, they needed a separate Google Assistant-powered device, like a smartphone or a Google Home smart speaker.

The reason for this limitation has always seemed straightforward: according to the official tech specs, there’s no onboard microphone in the Nest Secure system.

Source: Nest Secure has an unlisted disabled microphone (Edit: Google statement)

That’s pretty damn creepy

Furious Apple revokes Facebook’s enty app cert after Zuck’s crew abused it to slurp private data

Facebook has yet again vowed to “do better” after it was caught secretly bypassing Apple’s privacy rules to pay adults and teenagers to install a data-slurping iOS app on their phones.

The increasingly worthless promises of the social media giant have fallen on deaf ears however: on Wednesday, Apple revoked the company’s enterprise certificate for its internal non-public apps, and one lawmaker vowed to reintroduce legislation that would make it illegal for Facebook to carry out such “research” in future.

The enterprise cert allows Facebook to sign iOS applications so they can be installed for internal use only, without having to go through the official App Store. It’s useful for intranet applications and in-house software development work.

Facebook, though, used the certificate to sign a market research iPhone application that folks could install on their devices. The app was previously kicked out of the official App Store for breaking Apple’s rules on privacy: Facebook had to use the cert to skirt Cupertino’s ban.

[…]

With its certificate revoked, Facebook employees are reporting that their legitimate internal apps, also signed by the cert, have stopped working. The consumer iOS Facebook app is unaffected.

Trust us, we’re Facebook!

At the heart of the issue is an app for iPhones called “Facebook Research” that the company advertised through third parties. The app is downloaded outside of the normal Apple App Store, and gives Facebook extraordinary access to a user’s phone, allowing the company to see pretty much everything that person does on their device. For that trove of personal data, Facebook paid an unknown number of users aged between 13 and 35 up to $20 a month in e-gifts.

Source: Furious Apple revokes Facebook’s enty app cert after Zuck’s crew abused it to slurp private data • The Register

A person familiar with the situation tells The Verge that early versions of Facebook, Instagram, Messenger, and other pre-release “dogfood” (beta) apps have stopped working, as have other employee apps, like one for transportation. Facebook is treating this as a critical problem internally, we’re told, as the affected apps simply don’t launch on employees’ phones anymore.

https://www.theverge.com/2019/1/30/18203551/apple-facebook-blocked-internal-ios-apps

Facebook pays teens to install VPN that spies on them

Desperate for data on its competitors, Facebook has been secretly paying people to install a “Facebook Research” VPN that lets the company suck in all of a user’s phone and web activity, similar to Facebook’s Onavo Protect app that Apple banned in June and that was removed in August. Facebook sidesteps the App Store and rewards teenagers and adults to download the Research app and give it root access to network traffic in what may be a violation of Apple policy so the social network can decrypt and analyze their phone activity, a TechCrunch investigation confirms. Facebook admitted to TechCrunch it was running the Research program to gather data on usage habits.

Since 2016, Facebook has been paying users ages 13 to 35 up to $20 per month plus referral fees to sell their privacy by installing the iOS or Android “Facebook Research” app. Facebook even asked users to screenshot their Amazon order history page. The program is administered through beta testing services Applause, BetaBound and uTest to cloak Facebook’s involvement, and is referred to in some documentation as “Project Atlas” — a fitting name for Facebook’s effort to map new trends and rivals around the globe.

Source: Facebook pays teens to install VPN that spies on them | TechCrunch

Apple: You can’t sue us for slowing down your iPhones because we’re like a contractor in your house

Apple is like a building contractor you hire to redo your kitchen, the tech giant has argued in an attempt to explain why it shouldn’t have to pay customers for slowing down their iPhones.

Addressing a bunch of people trying to sue it for damages, the iGiant’s lawyers told [PDF] a California court this month: “Plaintiffs are like homeowners who have let a building contractor into their homes to upgrade their kitchens, thus giving permission for the contractor to demolish and change parts of the houses.”

They went on: “Any claim that the contractor caused excessive damage in the process sounds in contract, not trespass.”

[…]

In this particular case in the US, the plaintiffs argue that Apple damaged their phones by effectively forcing them to install software updates that were intended to fix the battery issues. They may have “chosen” to install the updates by tapping on the relevant buttons, but they did so after reading misleading statements about what the updates were and what they would do, the lawsuit claims.

Nonsense! says Apple. You invited us into your house. We did some work. Sorry you don’t like the fact that we knocked down the wall to the lounge and installed a new air vent through the ceiling, but that’s just how it is.

[…]

But that’s not the only disturbing image to emerge from this lawsuit. When it was accused of damaging people’s property by ruining their batteries, Apple argued – successfully – in court that consumers can’t reasonably expect their iPhone batteries to last longer than a year, given that its battery warranty runs out after 12 months. That would likely come as news to iPhone owners who don’t typically expect to spend $1,000 on a phone and have it die on them a year later.

Call of Duty

Apple has also argued that it’s not under any obligation to tell people buying its products about how well its batteries and software function. An entire section of the company’s motion to dismiss this latest lawsuit is titled: “Apple had no duty to disclose the facts regarding software capability and battery capacity.”

Of course, the truth is that Apple knows that it screwed up – and screwed up badly. Which is why last year it offered replacement batteries for just $29 rather than the usual $79. Uptake of the “program” was so popular that analysts say it has accounted for a significant drop-off in new iPhone purchases.

[…]

Ultimately of course, Apple remains convinced that it’s not really your phone at all: Cupertino has been good enough to allow you to use its amazing technology, and all you had to do was pay it a relatively small amount of money.

We should all be grateful that Apple lets us use our iPhones at all. And if it wants to slow them down, it can damn well slow them down without having to tell you because you wouldn’t understand the reasons why even if it bothered to explain them to you.

Source: Apple: You can’t sue us for slowing down your iPhones because you, er, invited us into, uh, your home… we can explain • The Register

This kind of reasoning beggars belief

Google’s Sidewalk Labs Plans to Package and Sell Location Data on Millions of Cellphones

Most of the data collected by urban planners is messy, complex, and difficult to represent. It looks nothing like the smooth graphs and clean charts of city life in urban simulator games like “SimCity.” A new initiative from Sidewalk Labs, the city-building subsidiary of Google’s parent company Alphabet, has set out to change that.

The program, known as Replica, offers planning agencies the ability to model an entire city’s patterns of movement. Like “SimCity,” Replica’s “user-friendly” tool deploys statistical simulations to give a comprehensive view of how, when, and where people travel in urban areas. It’s an appealing prospect for planners making critical decisions about transportation and land use. In recent months, transportation authorities in Kansas City, Portland, and the Chicago area have signed up to glean its insights. The only catch: They’re not completely sure where the data is coming from.

Typical urban planners rely on processes like surveys and trip counters that are often time-consuming, labor-intensive, and outdated. Replica, instead, uses real-time mobile location data. As Nick Bowden of Sidewalk Labs has explained, “Replica provides a full set of baseline travel measures that are very difficult to gather and maintain today, including the total number of people on a highway or local street network, what mode they’re using (car, transit, bike, or foot), and their trip purpose (commuting to work, going shopping, heading to school).”

To make these measurements, the program gathers and de-identifies the location of cellphone users, which it obtains from unspecified third-party vendors. It then models this anonymized data in simulations — creating a synthetic population that faithfully replicates a city’s real-world patterns but that “obscures the real-world travel habits of individual people,” as Bowden told The Intercept.

The program comes at a time of growing unease with how tech companies use and share our personal data — and raises new questions about Google’s encroachment on the physical world.


Last month, the New York Times revealed how sensitive location data is harvested by third parties from our smartphones — often with weak or nonexistent consent provisions. A Motherboard investigation in early January further demonstrated how cell companies sell our locations to stalkers and bounty hunters willing to pay the price.

For some, the Google sibling’s plans to gather and commodify real-time location data from millions of cellphones adds to these concerns. “The privacy concerns are pretty extreme,” Ben Green, an urban technology expert and author of “The Smart Enough City,” wrote in an email to The Intercept. “Mobile phone location data is extremely sensitive.” These privacy concerns have been far from theoretical. An Associated Press investigation showed that Google’s apps and website track people even after they have disabled the location history on their phones. Quartz found that Google was tracking Android users by collecting the addresses of nearby cellphone towers even if all location services were turned off. The company has also been caught using its Street View vehicles to collect the Wi-Fi location data from phones and computers.

This is why Sidewalk Labs has instituted significant protections to safeguard privacy, before it even begins creating a synthetic population. Any location data that Sidewalk Labs receives is already de-identified (using methods such as aggregation, differential privacy techniques, or outright removal of unique behaviors). Bowden explained that the data obtained by Replica does not include a device’s unique identifiers, which can be used to uncover someone’s unique identity.
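
“Aggregation” and “differential privacy” here boil down to publishing noisy totals instead of individual traces. A minimal sketch of the Laplace mechanism over trip counts, purely illustrative since Replica’s actual pipeline is not public:

```python
import numpy as np
from collections import Counter

def dp_trip_counts(trips, epsilon=0.5):
    """Release per-street trip counts with Laplace noise (epsilon-DP).

    Any one person changes a count by at most 1 (the sensitivity), so
    Laplace noise of scale 1/epsilon statistically hides whether any
    individual is in the data. Smaller epsilon: stronger privacy,
    noisier counts.
    """
    counts = Counter(trips)
    scale = 1.0 / epsilon
    return {street: int(round(n + np.random.laplace(0.0, scale)))
            for street, n in counts.items()}

raw_trips = ["main_st", "main_st", "5th_ave", "main_st", "5th_ave"]
print(dp_trip_counts(raw_trips))  # e.g. {'main_st': 2, '5th_ave': 7}
```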

However, some urban planners and technologists, while emphasizing the elegance and novelty of the program’s concept, remain skeptical about these privacy protections, asking how Sidewalk Labs defines personally identifiable information. Tamir Israel, a staff lawyer at the Canadian Internet Policy & Public Interest Clinic, warns that re-identification is a rapidly moving target. If Sidewalk Labs has access to people’s unique paths of movement prior to making its synthetic models, wouldn’t it be possible to figure out who they are, based on where they go to sleep or work? “We see a lot of companies erring on the side of collecting it and doing coarse de-identifications, even though, more than any other type of data, location data has been shown to be highly re-identifiable,” he added. “It’s obvious what home people leave and return to every night and what office they stop at every day from 9 to 5 p.m.” A landmark study uncovered the extent to which people could be re-identified from seemingly-anonymous data using just four time-stamped data points of where they’ve previously been.
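
The study’s finding is easy to reproduce on toy data: even without names or device IDs, a handful of (place, hour) points is close to unique per person. A sketch, with invented traces:

```python
# "Anonymized" traces: pseudonyms mapped to (place, hour) observations.
traces = {
    "user_a": {("home_42", 23), ("office_7", 9), ("gym_3", 18), ("cafe_1", 13)},
    "user_b": {("home_42", 23), ("office_9", 9), ("bar_2", 22), ("cafe_1", 13)},
    "user_c": {("home_17", 23), ("office_7", 9), ("gym_3", 18), ("cafe_5", 13)},
}

def reidentify(known_points):
    """Return every pseudonym consistent with externally known sightings."""
    return [user for user, pts in traces.items() if known_points <= pts]

# Knowing just where someone sleeps and works singles them out here:
print(reidentify({("home_42", 23), ("office_7", 9)}))  # ['user_a']
```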

Source: Google’s Sidewalk Labs Plans to Package and Sell Location Data on Millions of Cellphones

Firefox cracks down on creepy web trackers, holds supercookies over fire whilst Chrome kills ad blockers

The Mozilla Foundation has announced its intent to reduce the ability of websites and other online services to track users of its Firefox browser around the internet.

At this stage, Moz’s actions are baby steps. In support of its decision in late 2018 to reduce the amount of tracking it permits, the organisation has now published a tracking policy to tell people what it will block.

Moz said the focus of the policy is to bring the curtain down on tracking techniques that “cannot be meaningfully understood or controlled by users”.

Notoriously intrusive tracking techniques allow users to be followed and profiled around the web. Facebook planting trackers wherever a site has a “Like” button is a good example. A user without a Facebook account can still be tracked as a unique individual as they visit different news sites.

Mozilla’s policy said these “stateful identifiers are often used by third parties to associate browsing across multiple websites with the same user and to build profiles of those users, in violation of the user’s expectation”. So, out they go.

Source: Mozilla security policy cracks down on creepy web trackers, holds supercookies over fire • The Register

I’m pretty sure which browser you should be using

94% of Dutch worried about their privacy

Protection of privacy is a widely shared concern. No fewer than 94 percent of the Dutch worry about the protection of their personal data. One in three people is even very or extremely worried. This emerges from research commissioned by the Autoriteit Persoonsgegevens (AP, the Dutch Data Protection Authority) for Data Privacy Day.

The main worries concern misuse of (a copy of) an identity document, organisations tracking people’s online search behaviour, and being followed via the wifi signal of their mobile phones.

Only 12 percent say they have ever exercised a privacy right. According to the regulator, people don’t know how to, find it too much hassle, or don’t consider it important enough. The right to data portability and the right to a human review of automated decisions are the least-known rights.

Asked what they would do if their rights were violated, 62 percent say they would first contact the organisation in question, while 59 percent of respondents say they would file a complaint with the AP.

Source: ‘Nederland maakt zich zorgen over privacy’ – Emerce

Just keep slurping: HMRC adds two million taxpayers’ voices to biometric database – but people are starting to opt-out, now that they can

HMRC’s database of Brits’ voiceprints has grown by 2 million since June – but campaign group Big Brother Watch has claimed success as 160,000 people turned the taxman’s requests down.

The Voice ID scheme, which requires taxpayers to say a key phrase that is recorded to create a digital signature, was introduced in January 2017. In the 18 months that followed, HMRC scooped up some 5.1 million people’s voiceprints this way.

Since then, another 2 million records have been collected, according to a Freedom of Information request from Big Brother Watch.

That is despite the group having challenged the lawfulness of the system in June 2018, arguing that users hadn’t been given enough information on the scheme, how to opt in or out, or details on when or how their data would be deleted.

Under the GDPR, there are certain demands on organisations that process biometric data. These require a person to give “explicit consent” that is “freely given, specific, informed and unambiguous”.

Off the back of the complaint, the Information Commissioner’s Office launched an investigation, and Big Brother Watch said the body would soon announce what action it will take.

Meanwhile, HMRC has rejigged the recording so it offers callers a clear way to opt out of the scheme – previously, as perm sec Jon Thompson admitted in September, it was not clear how users could do this.

Big Brother Watch said that this, and the publicity around the VoiceID scheme, has led to a “backlash” as people call on HMRC to delete their Voice IDs. FoI responses show 162,185 people have done so to date.

“It is a great success for us that HMRC has finally allowed taxpayers to delete their voiceprints and that so many thousands of people are reclaiming their rights by getting their Voice IDs deleted,” said the group’s director, Silkie Carlo.

Source: Just keep slurping: HMRC adds two million taxpayers’ voices to biometric database • The Register

Wow, fancy that. Web ad giant Google to block ad-blockers in Chrome. For safety, apparently

Google engineers have proposed changes to the open-source Chromium browser that will break content-blocking extensions, including various ad blockers.

Adblock Plus will most likely not be affected, though similar third-party plugins will, for reasons we will explain. The drafted changes will also limit the capabilities available to extension developers, ostensibly for the sake of speed and safety. Chromium forms the central core of Google Chrome, and, soon, Microsoft Edge.

In a note posted Tuesday to the Chromium bug tracker, Raymond Hill, the developer behind uBlock Origin and uMatrix, said the changes contemplated by the Manifest v3 proposal will ruin his ad and content blocking extensions, and take control of content away from users.

Content blockers may be used to hide or black-hole ads, but they have broader applications. They’re predicated on the notion that users, rather than anyone else, should be able to control how their browser presents and interacts with remote resources.

Manifest v3 refers to the specification for browser extension manifest files, which enumerate the resources and capabilities available to browser extensions. Google’s stated rationale for making the proposed changes, cutting off blocking plugins, is to improve security, privacy and performance, and supposedly to enhance user control.

Source: Wow, fancy that. Web ad giant Google to block ad-blockers in Chrome. For safety, apparently • The Register

uBlock Origin is not only an ad blocker but also an important privacy and security tool

Google fined $57 million by French data privacy body for hiding terms and forcing users to accept intrusion or lose access

Google has been hit by a €50 million ($57 million) fine by French data privacy body CNIL (National Data Protection Commission) for failure to comply with the EU’s General Data Protection Regulation (GDPR) regulations.

The CNIL said that it was fining Google for “lack of transparency, inadequate information and lack of valid consent regarding the ads personalization,” according to a press release issued by the organization. The news was first reported by the AFP.

[…]

The crux of the complaints leveled at Google is that it acted illegally by forcing users to accept intrusive terms or lose access to the service. This “forced consent,” it’s argued, runs contrary to the principles set out by the GDPR that users should be allowed to choose whether to allow companies to use their data. In other words, technology companies shouldn’t be allowed to adopt a “take it or leave it” approach to getting users to agree to privacy-intruding terms and conditions.

[…]

The watchdog found two core privacy violations. First, it observed that information about how Google processes data, how long it stores it, and the kinds of data it uses to personalize advertisements is not easy to access. It found that this information was “excessively disseminated across several documents, with buttons and links on which it is required to click to access complementary information.”

So in effect, the CNIL said there was too much friction for users to find the information they need, requiring up to six separate actions to get to the information. And even when they find the information, it was “not always clear nor comprehensive.” The CNIL stated:

Users are not able to fully understand the extent of the processing operations carried out by Google. But the processing operations are particularly massive and intrusive because of the number of services offered (about twenty), the amount and the nature of the data processed and combined. The restricted committee observes in particular that the purposes of processing are described in a too generic and vague manner, and so are the categories of data processed for these various purposes.

Secondly, the CNIL said that it found that Google does not “validly” gain user consent for processing their data to use in ads personalization. Part of the problem, it said, is that the consent it collects is not done so through specific or unambiguous means — the options involve users having to click additional buttons to configure their consent, while too many boxes are pre-selected and require the user to opt out rather than opt in. Moreover, Google, the CNIL said, doesn’t provide enough granular controls for each data-processing operation.

As provided by the GDPR, consent is ‘unambiguous’ only with a clear affirmative action from the user (by ticking a non-pre-ticked box for instance).

What the CNIL is effectively referencing here is dark pattern design, which attempts to encourage users into accepting terms by guiding their choices through the design and layout of the interface. This is something that Facebook has often done too, as it has sought to garner user consent for new features or T&Cs.

Source: Google fined $57 million by French data privacy body | VentureBeat

Torrent Paradise Creates Decentralized ‘Pirate Bay’ With IPFS

The BitTorrent protocol has a decentralized nature but the ecosystem surrounding it has some weak spots. Torrent sites, for example, use centralized search engines which are prone to outages and takedowns. Torrent-Paradise tackles this problem with IPFS, a searchable torrent indexer that’s shared by the people.

IPFS, short for InterPlanetary File System, has been around for a few years now.

While the name sounds alien to most people, it has a growing userbase among the tech-savvy.

In short, IPFS is a decentralized network where users make files available among each other. If a website uses IPFS, it is served by a “swarm” of people, much like BitTorrent users do when a file is shared.

The advantage of this system is that websites can become completely decentralized. If a website or other resource is hosted with IPFS, it remains accessible as long as the computer of one user who “pinned” it remains online.
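
For the curious, publishing and pinning content is a two-command affair, assuming a local IPFS node is installed and its daemon is running. A small Python wrapper around the standard CLI:

```python
import subprocess

def publish_and_pin(path):
    """Add a file to IPFS and pin it so this node keeps serving it.

    `ipfs add -Q` prints only the resulting content identifier (CID);
    pinning exempts the file from garbage collection, which is what
    keeps content alive as long as one pinning node remains online.
    """
    cid = subprocess.check_output(["ipfs", "add", "-Q", path], text=True).strip()
    subprocess.run(["ipfs", "pin", "add", cid], check=True)
    return cid  # anyone can now fetch it with: ipfs cat <cid>

print(publish_and_pin("index.html"))
```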

The advantages of IPFS are clear. It allows archivists, content creators, researchers, and many others to distribute large volumes of data over the Internet. It’s censorship resistant and not vulnerable to regular hosting outages.

Source: Torrent Paradise Creates Decentralized ‘Pirate Bay’ With IPFS – TorrentFreak

Europe’s controversial ‘link tax’ sent back after member states rebel – The Verge

Copyright activists just scored a major victory in the ongoing fight over the European Union’s new copyright rules. An upcoming summit to advance the EU’s copyright directive has been canceled, as member states objected to the incoming rules as too restrictive to online creators.

The EU’s forthcoming copyright rules had drawn attention from activists for two measures, designated as Article 11 and Article 13, that would give publishers rights over snippets of news content shared online (the so-called “link tax”) and increase platform liability for user content. Concerns about those two articles led to the initial proposal being voted down by the European parliament in July, but a version with new safeguards was approved the following September. Until recently, experts expected the resulting proposal to be approved by plenary vote in the coming months.

After today, the directive’s future is much less certain. Member states were gathered to approve a new version of the directive drafted by Romania — but eleven countries reportedly opposed the text, many of them citing familiar concerns over the two controversial articles. Crucially, Italy’s new populist government takes a far more skeptical view of the strict copyright proposals. Member states have until the end of February to approve a new version of the text, although it’s unclear what compromise might be reached.

Whatever rules the European Union adopts will have a profound impact on companies doing business online. In particular, Article 13 could greatly expand the legal risks of hosting user content, putting services like Facebook and YouTube in a difficult position. As Cory Doctorow described it to The Verge, “this is just ContentID on steroids, for everything.”

More broadly, Article 13 would expand platforms’ liability for user-uploaded content. “If you’re a platform, then you are liable for the material which appears on your platform,” said professor Martin Kretschmer, who teaches intellectual property law at the University of Glasgow. “That’s the council position as of May, and that has huge problems.”

“Changing the copyright regime without really understanding where the problem is is foolish,” he continued.

Still, today’s vote suggests the ongoing activism against the proposals is having an effect. “Public attention to the copyright reform is having an effect,” wrote Pirate Party representative Julia Reda in a blog post. “Keeping up the pressure in the coming weeks will be more important than ever to make sure that the most dangerous elements of the new copyright proposal will be rejected.”

Source: Europe’s controversial ‘link tax’ sent back after member states rebel – The Verge

Incredible to see common sense seemingly prevailing over the interests of big money makers

NL judge says doc’s official warning needs removing from Google

An official warning issued by the Dutch doctors’ guild to a practising doctor must be removed from Google’s search results, a judge has ruled, deciding that the doctor’s privacy outweighs the public good of people being warned that this doctor has in some way misbehaved.

As a result of this landmark case, there’s a whole line of doctors requesting to be removed from Google.

Link is in Dutch.

Source: Google moet berispte arts verwijderen uit zoekmachine | TROUW

Project Alias is a DIY project that deafens your home voice assistant until you want it to listen to you

Alias is a teachable “parasite” that is designed to give users more control over their smart assistants, both when it comes to customisation and privacy. Through a simple app the user can train Alias to react on a custom wake-word/sound, and once trained, Alias can take control over your home assistant by activating it for you.

When you don’t use it, Alias will make sure the assistant is paralysed and unable to listen by interrupting its microphones.

Follow the build guide on Instructables
or get the source code on GitHub


Alias acts as a middle-man device designed to appropriate any voice-activated device. Equipped with speakers and a microphone, Alias is able to communicate with and manipulate the home assistant when placed on top of it. The speakers of Alias are used to interrupt the assistant with a constant low noise/sound that feeds directly into the microphone of the assistant. When Alias recognises the user-created wake-word, it stops the noise and quietly activates the assistant with a sound recording of the original wake-word. From here the assistant can be used as normal.

The wake word detection is made with a small neural network that runs locally on Alias, which can be trained and modified through live examples. The app acts as a controller to reset, train and turn on/off Alias.
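
The control flow is simple enough to sketch. The real implementation (a Raspberry Pi project) lives in the linked GitHub repo; the `mic`, `speaker` and `recognizer` objects below are stand-ins:

```python
def alias_loop(mic, speaker, recognizer, wake_word_clip):
    """Jam the assistant's microphones until the custom wake-word is heard.

    `speaker` sits on top of the assistant's mic array; `recognizer` is
    the small neural net running locally on Alias; `wake_word_clip` is a
    recording of the assistant's real wake word ("Alexa", "Hey Google").
    """
    speaker.start_noise()  # constant low noise fed straight into the mics
    while True:
        frame = mic.read()
        if recognizer.matches_wake_word(frame):
            speaker.stop_noise()          # un-jam the assistant...
            speaker.play(wake_word_clip)  # ...wake it with the real word
            # the user's spoken command now reaches the assistant normally
            speaker.start_noise()         # then jam it again afterwards
```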

The way Alias manipulates the home assistant makes it possible to create new custom functionality and commands that the products were not originally intended for. Alias can be programmed to send any speech command to the assistant’s speakers, which opens up a lot of new possibilities.

Source: Bjørn Karmann › project_alias

Amazon’s Ring Security Cameras Allow Anyone to Watch Easily – And They Do!

But for some who’ve welcomed in Amazon’s Ring security cameras, there have been more than just algorithms watching through the lens, according to sources alarmed by Ring’s dismal privacy practices.

Ring has a history of lax, sloppy oversight when it comes to deciding who has access to some of the most precious, intimate data belonging to any person: a live, high-definition feed from around — and perhaps inside — their house. The company has marketed its line of miniature cameras, designed to be mounted as doorbells, in garages, and on bookshelves, not only as a means of keeping tabs on your home while you’re away, but of creating a sort of privatized neighborhood watch, a constellation of overlapping camera feeds that will help police detect and apprehend burglars (and worse) as they approach. “Our mission to reduce crime in neighborhoods has been at the core of everything we do at Ring,” founder and CEO Jamie Siminoff wrote last spring to commemorate the company’s reported $1 billion acquisition payday from Amazon, a company with its own recent history of troubling facial recognition practices. The marketing is working; Ring is a consumer hit and a press darling.

Despite its mission to keep people and their property secure, the company’s treatment of customer video feeds has been anything but, people familiar with the company’s practices told The Intercept. Beginning in 2016, according to one source, Ring provided its Ukraine-based research and development team virtually unfettered access to a folder on Amazon’s S3 cloud storage service that contained every video created by every Ring camera around the world. This would amount to an enormous list of highly sensitive files that could be easily browsed and viewed. Downloading and sharing these customer video files would have required little more than a click. The Information, which has aggressively covered Ring’s security lapses, reported on these practices last month.

At the time the Ukrainian access was provided, the video files were left unencrypted, the source said, because of Ring leadership’s “sense that encryption would make the company less valuable,” owing to the expense of implementing encryption and lost revenue opportunities due to restricted access. The Ukraine team was also provided with a corresponding database that linked each specific video file to corresponding specific Ring customers.

“If [someone] knew a reporter or competitor’s email address, [they] could view all their cameras.”

At the same time, the source said, Ring unnecessarily provided executives and engineers in the U.S. with highly privileged access to the company’s technical support video portal, allowing unfiltered, round-the-clock live feeds from some customer cameras, regardless of whether they needed access to this extremely sensitive data to do their jobs. For someone who’d been given this top-level access — comparable to Uber’s infamous “God mode” map that revealed the movements of all passengers — only a Ring customer’s email address was required to watch cameras from that person’s home.

Source: For Owners of Amazon’s Ring Security Cameras, Strangers May Have Been Watching

Netflix password sharing may soon be impossible due to new AI tracking

A video software firm has come up with a way to prevent people from sharing their account details for Netflix and other streaming services with friends and family members.

UK-based Synamedia unveiled the artificial intelligence software at the CES 2019 technology trade show in Las Vegas, claiming it could save the streaming industry billions of dollars over the next few years.

Casual password sharing is practised by more than a quarter of millennials, according to figures from market research company Magid.

Separate figures from research firm Parks Associates predict that $9.9 billion (£7.7bn) of pay-TV revenue and $1.2 billion of revenue from subscription-based streaming services will be lost to credential sharing each year.

The AI system developed by Synamedia uses machine learning to analyse account activity and recognise unusual patterns, such as account details being used in two locations within similar time periods.
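
One “unusual pattern” named in the article, the same account active in two distant places at almost the same time, reduces to a speed check. A toy version in Python (Synamedia’s actual models are not public):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 6371 * 2 * math.asin(math.sqrt(a))

MAX_PLAUSIBLE_KMH = 900  # roughly airliner speed

def suspicious(login_a, login_b):
    """Flag two logins that would require impossible travel.

    Each login is (lat, lon, unix_seconds). This is the crudest possible
    signal; a production system would also model households, VPNs, and
    normal travel, or it would flag every frequent flyer.
    """
    d = haversine_km(login_a[0], login_a[1], login_b[0], login_b[1])
    hours = abs(login_b[2] - login_a[2]) / 3600 or 1e-9
    return d / hours > MAX_PLAUSIBLE_KMH

# Two logins 20 minutes apart, from London and Sydney: flagged.
print(suspicious((51.5, -0.1, 1_549_900_000), (-33.9, 151.2, 1_549_901_200)))
```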

The idea is to spot instances of customers illegally sharing their account credentials and then to offer them a premium shared-account service that will authorise a limited level of password sharing.

“Casual credentials sharing is becoming too expensive to ignore. Our new solution gives operators the ability to take action,” said Jean Marc Racine, Synamedia’s chief product officer.

“Many casual users will be happy to pay an additional fee for a premium, shared service with a greater number of concurrent users. It’s a great way to keep honest people honest while benefiting from an incremental revenue stream.”

Source: Netflix password sharing may soon be impossible due to new AI tracking | The Independent

I like the “keeping honest people honest” bit instead of “making money-grubbing firms richer”

Professor who exposed unethical academic publishing faces disciplinary action from his university: childish, discrediting counterclaims that showing unethical behaviour was itself unethical

The three authors, who describe themselves as leftists, spent 10 months writing 20 hoax papers they submitted to reputable journals in gender, race, sexuality, and related fields. Seven were accepted, four were published online, and three were in the process of being published when questions raised in October by a skeptical Wall Street Journal editorial writer forced them to halt their project.

One of their papers, about canine rape culture in dog parks in Portland, Ore., was initially recognized for excellence by the journal Gender, Place, and Culture, the authors reported.

The hoax was dubbed “Sokal Squared,” after a similar stunt pulled in 1996 by Alan Sokal, then a physicist at New York University.

After their ruse was revealed, the three authors described their project in an October article in the webzine Areo, which Pluckrose edits. Their goal, they wrote, was “to study, understand, and expose the reality of grievance studies, which is corrupting academic research.” They contend that scholarship that tends to social grievances now dominates some fields, where students and others are bullied into adhering to scholars’ worldviews, while lax publishing standards allow the publication of clearly ludicrous articles if the topic is politically fashionable.

[…]

In November the investigating committee reported that the dog-park article contained knowingly fabricated data and thus constituted research misconduct. The review board also determined that the hoax project met the definition for human-subjects research because it involved interacting with journal editors and reviewers. Any research involving human subjects (even duped journal editors, apparently) needs IRB approval first, according to university policy.

“Your efforts to conduct human-subjects research at PSU without a submitted nor approved protocol is a clear violation of the policies of your employer,” McLellan wrote in an email to Boghossian.

The decision to move ahead with disciplinary action came after a group of faculty members published a letter in the student newspaper decrying the hoax as “lies peddled to journals, masquerading as articles.” These “lies” are designed “not to critique, educate, or inspire change in flawed systems,” they wrote, “but rather to humiliate entire fields while the authors gin up publicity for themselves without having made any scholarly contributions whatsoever.” Such behavior, they wrote, hurts the reputations of the university as well as honest scholars who work there. “Worse yet, it jeopardizes the students’ reputations, as their degrees in the process may become devalued.”

[…]

Meanwhile, within the first 24 hours of news leaking about the proceedings against him, more than 100 scholars had written letters defending Boghossian, according to his media site, which posted some of them.

Steven Pinker, a professor of psychology at Harvard University, was among the high-profile scholars who defended him. “Criticism and open debate are the lifeblood of academia; they are what differentiate universities from organs of dogma and propaganda,” Pinker wrote. “If scholars feel they have been subject to unfair criticism, they should explain why they think the critic is wrong. It should be beneath them to try to punish and silence him.”

Richard Dawkins, an evolutionary biologist, author, and professor emeritus at the University of Oxford, had this to say: “If the members of your committee of inquiry object to the very idea of satire as a form of creative expression, they should come out honestly and say so. But to pretend that this is a matter of publishing false data is so obviously ridiculous that one cannot help suspecting an ulterior motive.”

Sokal, who is now at University College London, wrote that Boghossian’s hoax had served the public interest and that the university would become a “laughingstock” in academe as well as the public sphere if it insisted that duping editors constituted research on human subjects.

One of Boghossian’s co-authors, Lindsay, urged him in the video they posted to emphasize that the project amounted to an audit of certain sectors of academic research. “People inside the system aren’t allowed to question the system? What kind of Orwellian stuff is that?” Lindsay asked.

Source: Proceedings Start Against ‘Sokal Squared’ Hoax Professor – The Chronicle of Higher Education

Pots and kettles? I think it’s just the American way of getting back at someone who has made you blush – destroy at all costs!