‘Plane Hacker’ Roberts: I put a network sniffer on my truck to see what it was sharing. Holy crap!

Cars are turning into computers on wheels and airplanes have become flying data centres, but this increase in power and connectivity has largely happened without designing in adequate security controls.

Improving transportation security was a major strand of the recent Cyber Week security conference in Israel. One day of the event, Speed of Light, focused on transportation cybersecurity, with Roberts serving as master of ceremonies.

[…]

“Israel was here, not just a couple of companies. Israel is going, ‘We as a state, we as a country, need to understand [about transportation security]’,” Roberts said. “We need to learn.”

“In other places it’s the companies. GM is great. Ford is good. Some of the German companies are good. Fiat-Chrysler Group has got a lot of work to do.”

Some industries are more advanced than others at understanding cybersecurity risks, Roberts claimed. For example, awareness in the automobile industry is ahead of that found in aviation.

“Boeing is in denial. Airbus is kind of on the fence. Some of the other industries are better.”

[…]

There’s almost nothing you can do [as a user] to improve car security. The only thing you can do is go back to the garage every month for the automotive equivalent of Microsoft’s Patch Tuesday – updates from Ford or GM.

“You better come in once a month for your patches because if you don’t, the damn thing is not going to work.”

What about over-the-air updates? These may not always be reliable, Roberts warned.

“What happens if you’re in the middle of a dead spot? Or you’re in the middle of a developing country that doesn’t have that? What about the Toyotas that get sold to the Middle East or Far East, to countries that don’t have 4G or 5G coverage? And what happens when you move around countries?”

[…]

“I put a network sniffer on the big truck to see what it was sharing. Holy crap! The GPS, the telemetry, the tracking. There’s a lot of data this thing is sharing.

“If you turn it off you might be voiding warranties or [bypassing] security controls,” Roberts said, adding that there was also an issue about who owns the data a car generates. “Is it there to protect me or monitor me?” he mused.
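For the curious, reproducing Roberts’ experiment is mostly a matter of getting on-path. Here is a minimal sketch with Python and scapy, assuming the vehicle’s connection is bridged through an interface you control; the interface name and the head unit’s address are hypothetical:

```python
# Passive sniffer: summarise where a vehicle's telematics traffic goes.
# Assumes the truck's uplink is bridged through eth0 (hypothetical setup);
# needs root privileges to capture packets.
from collections import Counter
from scapy.all import IP, sniff

VEHICLE_IP = "192.168.1.42"  # hypothetical address of the truck's head unit
destinations = Counter()

def tally(pkt):
    if IP in pkt and pkt[IP].src == VEHICLE_IP:
        destinations[pkt[IP].dst] += len(pkt)  # bytes sent per destination

sniff(iface="eth0", prn=tally, store=False, timeout=300)  # watch for 5 minutes

for dst, nbytes in destinations.most_common(10):
    print(f"{dst}: {nbytes} bytes")
```

Even this crude tally makes the GPS, telemetry and tracking endpoints Roberts describes stand out as a handful of very chatty destinations.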

Some insurance firms offer cheaper insurance to careful drivers, based on readings from telemetry devices and sensors. Roberts is dead set against this for privacy reasons. “Insurance can go to hell. For me, getting a 5 per cent discount on my insurance is not worth accepting a tracking device from an insurance company.”

Source: ‘Plane Hacker’ Roberts: I put a network sniffer on my truck to see what it was sharing. Holy crap! • The Register

Is Facebook a publisher? In public it says no, but in court it says yes

Facebook has long had the same public response when questioned about its disruption of the news industry: it is a tech platform, not a publisher or a media company.

But in a small courtroom in California’s Redwood City on Monday, attorneys for the social media company presented a different message from the one executives have delivered to Congress, in interviews and in speeches: Facebook, they repeatedly argued, is a publisher, and a company that makes editorial decisions, which are protected by the first amendment.

The contradictory claim is Facebook’s latest tactic against a high-profile lawsuit, exposing a growing tension for the Silicon Valley corporation, which has long presented itself as a neutral platform that does not have traditional journalistic responsibilities.

The suit, filed by an app startup, alleges that Mark Zuckerberg developed a “malicious and fraudulent scheme” to exploit users’ personal data and force rival companies out of business. Facebook, meanwhile, is arguing that its decisions about “what not to publish” should be protected because it is a “publisher”.

In court, Sonal Mehta, a lawyer for Facebook, even drew a comparison with traditional media: “The publisher discretion is a free speech right irrespective of what technological means is used. A newspaper has a publisher function whether they are doing it on their website, in a printed copy or through the news alerts.”

The plaintiff, a former startup called Six4Three, first filed the suit in 2015 after Facebook removed app developers’ access to friends’ data. The company had built a controversial and ultimately failed app called Pikinis, which allowed people to filter photos to find ones with people in bikinis and other swimwear.

Six4Three attorneys have alleged that Facebook enticed developers to create apps for its platform by implying creators would have long-term access to the site’s huge amounts of valuable personal data and then later cut off access, effectively defrauding them. The case delves into some of the privacy concerns sparked by the Cambridge Analytica scandal.

Source: Is Facebook a publisher? In public it says no, but in court it says yes | Technology | The Guardian

More on how social media hacks brains to addict users

In a follow-up to How programmers addict you to social media, games and your mobile phone:

Ex-Facebook president Sean Parker: site made to exploit human ‘vulnerability’

He explained that when Facebook was being developed the objective was: “How do we consume as much of your time and conscious attention as possible?” It was this mindset that led to the creation of features such as the “like” button that would give users “a little dopamine hit” to encourage them to upload more content.

“It’s a social-validation feedback loop … exactly the kind of thing that a hacker like myself would come up with, because you’re exploiting a vulnerability in human psychology.”

[…]

Parker is not the only Silicon Valley entrepreneur to express regret over the technologies he helped to develop. The former Googler Tristan Harris is one of several techies interviewed by the Guardian in October who criticized the industry.

“All of us are jacked into this system,” he said. “All of our minds can be hijacked. Our choices are not as free as we think they are.”

Aza Raskin on Google Search Results and How He Invented the Infinite Scroll

Social media apps are ‘deliberately’ addictive to users

Social media companies are deliberately addicting users to their products for financial gain, Silicon Valley insiders have told the BBC’s Panorama programme.

“It’s as if they’re taking behavioural cocaine and just sprinkling it all over your interface and that’s the thing that keeps you like coming back and back and back”, said former Mozilla and Jawbone employee Aza Raskin.

“Behind every screen on your phone, there are generally like literally a thousand engineers that have worked on this thing to try to make it maximally addicting,” he added.

In 2006 Mr Raskin, a leading technology engineer himself, designed infinite scroll, one of the features of many apps that is now seen as highly habit forming. At the time, he was working for Humanized – a computer user-interface consultancy.

Image caption: Aza Raskin says he did not recognise how addictive infinite scroll could be

Infinite scroll allows users to endlessly swipe down through content without clicking.

“If you don’t give your brain time to catch up with your impulses,” Mr Raskin said, “you just keep scrolling.”

He said the innovation kept users looking at their phones far longer than necessary.

Mr Raskin said he had not set out to addict people and now felt guilty about it.

But, he said, many designers were driven to create addictive app features by the business models of the big companies that employed them.

“In order to get the next round of funding, in order to get your stock price up, the amount of time that people spend on your app has to go up,” he said.

“So, when you put that much pressure on that one number, you’re going to start trying to invent new ways of getting people to stay hooked.”

Is My Phone Recording Everything I Say? It turns out it sends screenshots and videos of what you do

Some computer science academics at Northeastern University had heard enough people talking about this technological myth that they decided to do a rigorous study to tackle it. For the last year, Elleen Pan, Jingjing Ren, Martina Lindorfer, Christo Wilson, and David Choffnes ran an experiment involving more than 17,000 of the most popular apps on Android to find out whether any of them were secretly using the phone’s mic to capture audio. The apps included those belonging to Facebook, as well as over 8,000 apps that send information to Facebook.

Sorry, conspiracy theorists: They found no evidence of an app unexpectedly activating the microphone or sending audio out when not prompted to do so. Like good scientists, they refuse to say that their study definitively proves that your phone isn’t secretly listening to you, but they didn’t find a single instance of it happening. Instead, they discovered a different disturbing practice: apps recording a phone’s screen and sending that information out to third parties.

Of the 17,260 apps the researchers looked at, over 9,000 had permission to access the camera and microphone and thus the potential to overhear the phone’s owner talking about their need for cat litter or about how much they love a certain brand of gelato. With 10 Android phones, the researchers ran an automated program to interact with each of those apps and then analyzed the traffic generated. (A limitation of the study is that the automated phone users couldn’t do things humans could, like creating usernames and passwords to sign into an account on an app.) They were looking specifically for any media files that were sent, particularly when they were sent to an unexpected party.
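For a rough sense of what that last step looks like, here is a sketch – not the researchers’ pipeline – that scans a HAR capture of an app’s traffic and flags media payloads headed anywhere outside an expected first-party list. The capture file name and domain list are placeholders:

```python
# Flag media files leaving the device for unexpected hosts in a HAR capture.
# A sketch of the idea only, not the study's actual tooling.
import json
from urllib.parse import urlparse

MEDIA_TYPES = ("image/", "video/", "audio/")
FIRST_PARTY = ("gopuff.com",)  # hosts the app is expected to talk to

with open("capture.har") as f:  # placeholder capture of one app session
    entries = json.load(f)["log"]["entries"]

for entry in entries:
    request = entry["request"]
    host = urlparse(request["url"]).hostname or ""
    mime = (request.get("postData") or {}).get("mimeType", "")
    if mime.startswith(MEDIA_TYPES) and not host.endswith(FIRST_PARTY):
        print(f"media upload: {mime} -> {host}")
```

Applied to the GoPuff session described below, a script like this would surface the Appsee-bound screen recording as a video payload going to a domain that isn’t GoPuff’s.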

These phones ran through thousands of apps to see if any would secretly activate the microphone
Photo: David Choffnes (Northeastern University)

The strange practice they started to see was that screenshots and video recordings of what people were doing in apps were being sent to third-party domains. For example, when one of the phones used an app from GoPuff, a delivery start-up for people who have sudden cravings for junk food, the interaction with the app was recorded and sent to a domain affiliated with Appsee, a mobile analytics company. The video included a screen where users could enter personal information – in this case, a zip code.

[…]

In other words, until smartphone makers notify you when your screen is being recorded or give you the power to turn that ability off, you have a new thing to be paranoid about. The researchers will be presenting their work at the Privacy Enhancing Technologies Symposium in Barcelona next month. (While in Spain, they might want to check out the country’s most popular soccer app, which has given itself permission to access users’ smartphone mics to listen for illegal broadcasts of games in bars.)

The researchers weren’t comfortable saying for sure that your phone isn’t secretly listening to you in part because there are some scenarios not covered by their study. Their phones were being operated by an automated program, not by actual humans, so they might not have triggered apps the same way a flesh-and-blood user would. And the phones were in a controlled environment, not wandering the world in a way that might trigger them: For the first few months of the study the phones were near students in a lab at Northeastern University and thus surrounded by ambient conversation, but the phones made so much noise, as apps were constantly being played with on them, that they were eventually moved into a closet. (If the researchers did the experiment again, they would play a podcast on a loop in the closet next to the phones.) It’s also possible that the researchers could have missed audio recordings of conversations if the app transcribed the conversation to text on the phone before sending it out. So the myth can’t be entirely killed yet.

Source: Is My Phone Recording Everything I Say?

Europe is reading smartphones and using the data as a weapon to deport refugees

Across the continent, migrants are being confronted by a booming mobile forensics industry that specialises in extracting a smartphone’s messages, location history, and even WhatsApp data. That information can potentially be turned against the phone owners themselves.

In 2017 both Germany and Denmark expanded laws that enabled immigration officials to extract data from asylum seekers’ phones. Similar legislation has been proposed in Belgium and Austria, while the UK and Norway have been searching asylum seekers’ devices for years.

Following right-wing gains across the EU, beleaguered governments are scrambling to bring immigration numbers down. Tackling fraudulent asylum applications seems like an easy way to do that. As European leaders met in Brussels last week to thrash out a new, tougher framework to manage migration – which nevertheless seems insufficient to placate Angela Merkel’s critics in Germany – immigration agencies across Europe are showing new enthusiasm for laws and software that enable phone data to be used in deportation cases.

Admittedly, some refugees do lie on their asylum applications. Omar – not his real name – certainly did. He travelled to Germany via Greece. Even for Syrians like him there were few legal routes into the EU. But his route meant he could face deportation under the EU’s Dublin regulation, which dictates that asylum seekers must claim refugee status in the first EU country they arrive in. For Omar, that would mean settling in Greece – hardly an attractive destination considering its high unemployment and stretched social services.

Last year, more than 7,000 people were deported from Germany under the Dublin regulation. If Omar’s phone were searched, he could have become one of them, as his location history would have revealed his route through Europe, including his arrival in Greece.

But before his asylum interview, he met Lena – also not her real name. A refugee advocate and businesswoman, Lena had read about Germany’s new surveillance laws. She encouraged Omar to throw his phone away and tell immigration officials it had been stolen in the refugee camp where he was staying. “This camp was well-known for crime,” says Lena, “so the story seemed believable.” His application is still pending.

Omar is not the only asylum seeker to hide phone data from state officials. When sociology professor Marie Gillespie researched phone use among migrants travelling to Europe in 2016, she encountered widespread fear of mobile phone surveillance. “Mobile phones were facilitators and enablers of their journeys, but they also posed a threat,” she says. In response, she saw migrants who kept up to 13 different SIM cards, hiding them in different parts of their bodies as they travelled.

[…]

Denmark is taking this a step further, by asking migrants for their Facebook passwords. Refugee groups note how the platform is being used more and more to verify an asylum seeker’s identity.

[…]

The Danish immigration agency confirmed that it does ask asylum applicants for access to their Facebook profiles. While it is not standard procedure, it can be used if a caseworker feels more information is needed. If an applicant refuses consent, caseworkers tell them they are obliged to comply under Danish law. Right now, the agency only uses Facebook – not Instagram or other social platforms.

[…]

“In my view, it’s a violation of ethics on privacy to ask for a password to Facebook or open somebody’s mobile phone,” says Michala Clante Bendixen of Denmark’s Refugees Welcome movement. “For an asylum seeker, this is often the only piece of personal and private space he or she has left.”

Information sourced from phones and social media offers an alternative reality that can compete with an asylum seeker’s own testimony. “They’re holding the phone to be a stronger testament to their history than what the person is ready to disclose,” says Gus Hosein, executive director of Privacy International. “That’s unprecedented.”

Privacy campaigners note how digital information might not reflect a person’s character accurately. “Because there is so much data on a person’s phone, you can make quite sweeping judgements that might not necessarily be true,” says Christopher Weatherhead, technologist at Privacy International.

[…]

Privacy International has investigated the UK police’s ability to search phones, indicating that immigration officials could possess similar powers. “What surprised us was the level of detail of these phone searches. Police could access information that even you don’t have access to, such as deleted messages,” Weatherhead says.

His team found that British police are aided by Israeli mobile forensic company Cellebrite. Using their software, officials can access search history, including deleted browsing history. It can also extract WhatsApp messages from some Android phones.

Source: Europe is using smartphone data as a weapon to deport refugees | WIRED UK

Google allows outside app developers to read people’s Gmails

  • Google promised a year ago to provide more privacy to Gmail users, but The Wall Street Journal reports that hundreds of app makers have access to millions of inboxes belonging to Gmail users.
  • The outside app companies receive access to messages from Gmail users who signed up for things like price-comparison services or automated travel-itinerary planners, according to The Journal.
  • Some of these companies train software to scan the email, while others enable their workers to pore over private messages, the report says.
  • What isn’t clear from The Journal’s story is whether Google is doing anything differently than Microsoft or other rival email services.

Employees working for hundreds of software developers are reading the private messages of Gmail users, The Wall Street Journal reported on Monday.

A year ago, Google promised to stop scanning the inboxes of Gmail users, but the company has not done much to protect Gmail inboxes obtained by outside software developers, according to the newspaper. Gmail users who signed up for “email-based services” like “shopping price comparisons,” and “automated travel-itinerary planners” are most at risk of having their private messages read, The Journal reported.

Hundreds of app developers electronically “scan” inboxes of the people who signed up for some of these programs, and in some cases, employees do the reading, the paper reported. Google declined to comment.

The revelation comes at a bad time for Google and Gmail, the world’s largest email service, with 1.4 billion users. Top tech companies are under pressure in the United States and Europe to do more to protect user privacy and be more transparent about any parties with access to people’s data. The increased scrutiny follows the Cambridge Analytica scandal, in which a data firm was accused of misusing the personal information of more than 80 million Facebook users in an attempt to sway elections.

It’s not news that Google and many top email providers enable outside developers to access users’ inboxes. In most cases, the people who signed up for the price-comparison deals or other programs agreed to provide access to their inboxes as part of the opt-in process.
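Concretely, that opt-in is an OAuth consent screen: the app asks Google for a scope such as read-only Gmail access, and once the user clicks Allow, the developer can pull messages programmatically. A minimal sketch using Google’s standard Python client libraries (google-auth-oauthlib and google-api-python-client); client_secret.json stands in for a credential file from Google’s developer console:

```python
# Minimal sketch: what a third-party app can do once a user grants the
# gmail.readonly scope on Google's OAuth consent screen.
from google_auth_oauthlib.flow import InstalledAppFlow
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/gmail.readonly"]

flow = InstalledAppFlow.from_client_secrets_file("client_secret.json", SCOPES)
creds = flow.run_local_server(port=0)  # the user sees the consent screen here

gmail = build("gmail", "v1", credentials=creds)
listing = gmail.users().messages().list(userId="me", maxResults=10).execute()
for ref in listing.get("messages", []):
    message = gmail.users().messages().get(userId="me", id=ref["id"]).execute()
    print(message["snippet"])  # the developer is now reading the user's mail
```

The consent screen grants the access; what happens to the data afterwards is governed only by the developer’s own privacy policy, which is exactly the gap The Journal describes.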

Gmail’s opt-in alert spells out generally what a user is agreeing to.
Image: Google

In Google’s case, outside developers must pass a vetting process, and as part of that, Google ensures they have an acceptable privacy agreement, The Journal reported, citing a Google representative.

What is unclear is how closely these outside developers adhere to their agreements and whether Google does anything to ensure they do, as well as whether Gmail users are fully aware that individual employees may be reading their emails, as opposed to an automated system, the report says.

Mikael Berner, the CEO of Edison Software, a Gmail developer that offers a mobile app for organizing email, told The Journal that its employees had read emails from hundreds of Gmail users as part of an effort to build a new feature. An executive at another company said employees’ reading of emails had become “common practice.”

Companies that spoke to The Journal confirmed that the practice was specified in their user agreements and said they had implemented strict rules for employees regarding the handling of email.

It’s interesting to note that, judging from The Journal’s story, very little indicates that Google is doing anything different from Microsoft or other top email providers. According to the newspaper, nothing in Microsoft or Yahoo’s policy agreements explicitly allows people to read others’ emails.

Source: Google reportedly allows outside app developers to read people’s Gmails – INSIDER

Which also shows: no one ever reads the end-user agreements. I’m pretty sure nobody caught the bit where it said “you are also allowing us to read all your emails” when they signed up.

Dear Samsung mobe owners: It may leak your private pics to randoms

Samsung’s Messages app bundled with the South Korean giant’s latest smartphones and tablets may silently send people’s private photos to random contacts, it is claimed.

An unlucky bunch of Sammy phone fans – including owners of Galaxy S9, S9+ and Note 8 gadgets – have complained on Reddit and the official support forums that the application texted their snaps without permission.

One person said the app sent their photo albums to their girlfriend at 2.30am without them knowing – there was no trace of the transfer on the phone, although it showed up in their T-Mobile US account. The pictures, like the recipients, are seemingly picked at random from the handheld’s contacts, and the messages do not appear in the application’s sent box. The misbehaving app is the default messaging tool on Samsung’s Android devices.

“Last night around 2:30am, my phone sent [my girlfriend] my entire photo gallery over text but there was no record of it on my messages app,” complained one confused Galaxy S9+ owner. “However, there was record of it [in my] T-Mobile logs.”

Another S9+ punter chipped in: “Oddly enough, my wife’s phone did that last night, and mine did it the night before. I think it has something to do with the Samsung SMS app being updated from the Galaxy Store. When her phone texted me her gallery, it didn’t show up on her end – and vice versa.”

Source: Dear Samsung mobe owners: It may leak your private pics to randoms • The Register

This popular Facebook app publicly exposed your data for years

Nametests.com, the website behind the quizzes, recently fixed a flaw that publicly exposed information of their more than 120 million monthly users — even after they deleted the app. At my request, Facebook donated $8,000 to the Freedom of the Press Foundation as part of their Data Abuse Bounty Program.

[…]

While loading a test, the website would fetch my personal information and display it on the webpage. Here’s where it got my personal information from:

http://nametests.com/appconfig_user

In theory, every website could have requested this data. Note that the data also includes a ‘token’ which gives access to all data the user authorised the application to access, such as photos, posts and friends.

I was shocked to see that this data was publicly available to any third-party that requested it.

In a normal situation, other websites would not be able to access this information. Web browsers have mechanisms in place to prevent that from happening. In this case, however, the data was wrapped in JavaScript, which is an exception to this rule.

One of the basic principles of JavaScript is that it can be loaded by other websites. Since NameTests served its users’ personal data as a JavaScript file, virtually any website could access it on request.

To verify that it would actually be that easy to steal someone’s information, I set up a website that would connect to NameTests and get some information about my visitor. NameTests would also provide a secret key called an access token, which, depending on the permissions granted, could be used to gain access to a visitor’s posts, photos and friends. It would only take one visit to our website to gain access to someone’s personal information for up to two months.
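To make the mechanism concrete: because the endpoint answered with executable JavaScript rather than access-controlled JSON, any page could load it in a script tag and the visitor’s browser would attach their NameTests cookies. A sketch of how one might check for that symptom in Python; the cookie name and value are placeholders, and the callback shown in the comment is illustrative:

```python
# Check whether an endpoint serves personal data as JavaScript (JSONP-style),
# which any third-party page can pull in via a <script> tag, with the
# victim's cookies attached by the browser. Cookie name/value are placeholders.
import requests

resp = requests.get(
    "http://nametests.com/appconfig_user",
    cookies={"session": "VICTIM_SESSION"},
)
print(resp.headers.get("Content-Type", ""))  # a JavaScript type, not application/json
print(resp.text[:200])  # e.g. a wrapper like user_data({...}) that an embedding
                        # page can read simply by defining the callback function
```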

Video proof:

An unauthorised website getting access to my Facebook information

As you can see in the video, NameTests would still reveal your identity even after deleting the app. In order to prevent this from happening, the user would have had to manually delete the cookies on their device, since NameTests.com does not offer a log out functionality.

Source: This popular Facebook app publicly exposed your data for years

Facebook Patent Imagines Triggering Your Phone’s Mic When a Hidden Signal Plays on TV

You may have seen the ads that Facebook has been running on TV in a full-court press to apologize for abusing users’ privacy. They’re embarrassing. And, it turns out, they may be a sign of things to come. Based on a recently published patent application, Facebook could one day use ads on television to further violate your privacy once you’ve forgotten about all those other times.

First spotted by Metro, the patent is titled “broadcast content view analysis based on ambient audio recording.” (PDF) It describes a system in which an “ambient audio fingerprint or signature” that’s inaudible to the human ear could be embedded in broadcast content like a TV ad. When a hypothetical user is watching this ad, the audio fingerprint could trigger their smartphone or another device to turn on its microphone, begin recording audio and transmit data about it to Facebook.

Diagram of soundwave containing signal, triggering device, and recording ambient audio.
Image: USPTO

Everything in the patent is written in legalese and is a bit vague about what happens to the audio data. One example scenario imagines that various ambient audio would be eliminated and the content playing on the broadcast would be identified. Data would be collected about the user’s proximity to the audio. Then, the identifying information, time, and identity of the Facebook user would be sent to the social media company for further processing.

In addition to all the data users voluntarily give up, and the incidental data it collects through techniques like browser fingerprinting, Facebook would use this audio information to figure out which ads are most effective. For example, if a user walked away from the TV or changed the channel as soon as the ad began to play, it might consider the ad ineffective or on a subject the user doesn’t find interesting. If the user stays where they are and the audio is loud and clear, Facebook could compare that seemingly effective ad with your other data to make better suggestions for its advertising clients.
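The patent stops at legalese, but the core trick – burying a marker tone near the top of the audible range and listening for it – is easy to illustrate. A toy sketch with numpy, in which the marker frequency, amplitude and detection threshold are all made-up values:

```python
# Toy illustration of an inaudible audio watermark trigger (not Facebook's code).
import numpy as np

RATE = 44100        # samples per second
BEACON_HZ = 19000   # hypothetical near-ultrasonic marker frequency
THRESHOLD = 10.0    # made-up energy threshold

def has_beacon(buffer: np.ndarray) -> bool:
    """Report whether the buffer carries energy at the marker frequency."""
    spectrum = np.abs(np.fft.rfft(buffer))
    freqs = np.fft.rfftfreq(len(buffer), d=1.0 / RATE)
    band = spectrum[(freqs > BEACON_HZ - 50) & (freqs < BEACON_HZ + 50)]
    return band.max() > THRESHOLD

# Simulate one second of TV audio with the marker mixed in at low volume.
t = np.arange(RATE) / RATE
audio = np.sin(2 * np.pi * 440 * t) + 0.01 * np.sin(2 * np.pi * BEACON_HZ * t)
print(has_beacon(audio))  # True: per the patent, time to switch on the mic
```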

An example of a broadcasting device communicating with the network and identifying various users in a household.
Image: USPTO

Yes, this is creepy as hell, and it feels like someone patenting a peephole hidden in a nondescript painting.

Source: Facebook Patent Imagines Triggering Your Phone’s Mic When a Hidden Signal Plays on TV

Facebook, Google, Microsoft scolded for tricking people into spilling their private info

Five consumer privacy groups have asked the European Data Protection Board to investigate how Facebook, Google, and Microsoft design their software to see whether it complies with the General Data Protection Regulation (GDPR).

Essentially, the tech giants are accused of crafting their user interfaces so that netizens are fooled into clicking away their privacy, and handing over their personal information.

In a letter sent today to chairwoman Andrea Jelinek, the BEUC (Bureau Européen des Unions de Consommateurs), the Norwegian Consumer Council (Forbrukerrådet), Consumers International, Privacy International and ANEC (just too damn long to spell out) contend that the three tech giants “employed numerous tricks and tactics to nudge or push consumers toward giving consent to sharing as much data for as many purposes as possible.”

The letter coincides with the publication of a Forbrukerrådet report, “Deceived By Design,” that claims “tech companies use dark patterns to discourage us from exercising our rights to privacy.”

Dark patterns here refers to app interface design choices that attempt to influence users to do things they may not want to do because they benefit the software maker.

The report faults Google, Facebook and, to a lesser degree, Microsoft for employing default settings that dispense with privacy. It also says they use misleading language, give users an illusion of control, conceal pro-privacy choices, offer take-it-or-leave it choices and use design patterns that make it more laborious to choose privacy.

It argues that dark patterns deprive users of control, a central requirement under GDPR.

As an example of linguistic deception, the report cites Facebook text that seeks permission to use facial recognition on images:

If you keep face recognition turned off, we won’t be able to use this technology if a stranger uses your photo to impersonate you. If someone uses a screen reader, they won’t be told when you’re in a photo unless you’re tagged.

The way this is worded, the report says, pushes Facebook users to accept facial recognition by suggesting there’s a risk of impersonation if they refuse. And it implies there’s something unethical about depriving those forced to use screen readers of image descriptions, a practice known as “confirmshaming.”

Source: Facebook, Google, Microsoft scolded for tricking people into spilling their private info • The Register

EU breaks internet, starts wholesale censorship for rich man copyright holders

The problems are huge, not least because the EU will mandate an automated content filter, which means that memes will die. Worse, if you have the money to spam the system with requests, you can basically kill any content you want, with the actual rights holder having only a marginal chance of navigating EU bureaucracy in order to regain ownership of their rights.

There goes free speech and innovation.


Source: COM_2016_0593_FIN.ENG.xhtml.1_EN_ACT_part1_v5.docx

Red Shell packaged games (Civ VI, Total War, ESO, KSP and more) contain a spyware which tracks your Internet activity outside of the game

Red Shell is spyware that tracks data on your PC and shares it with third parties. On their website they phrase it all in very harmless language, but the fact is that this is software from someone I don’t trust and whom I never invited, which is looking at my data and running on my PC against my will. This should have no place in a full-price PC game, and in no games at all if it were up to me.

I make this thread to raise awareness of these user-unfriendly marketing practices and the data-mining software that is common on the mobile market and is now flooding over to our PC games market. As a person and a gamer I refuse to be data mined. My data is my own, and you have no business making money off it.

Yesterday’s announcement concerned only “Holy Potatoes! We’re in Space?!”, but I would consider all their games at risk of containing that spyware if they choose to include it again, with or without an announcement. Also, the publisher of this one title is Daedalic Entertainment, while the others are self-published. It would be worth checking whether other Daedalic Entertainment games have that spyware in them as well; I had no time to do that.

Reddit [PSA] RED SHELL Spyware – “Holy Potatoes! We’re in Space?!” integrated and removed it after complaints

and
[PSA] Civ VI, Total War, ESO, KSP and more contain a spyware which tracks your Internet activity outside of the game (x-post r/Steam)

Addresses to block:
redshell.io
api.redshell.io
treasuredata.com
api.treasuredata.com
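One blunt way to act on that list is to null-route the hostnames in your hosts file. A sketch for Linux/macOS (the same entries work in Windows’ hosts file); note that a game may misbehave if it hard-fails on the tracker, and titles that resolve names through their own channels will bypass this:

```
# /etc/hosts - null-route known Red Shell endpoints
0.0.0.0 redshell.io
0.0.0.0 api.redshell.io
0.0.0.0 treasuredata.com
0.0.0.0 api.treasuredata.com
```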

EU Copyright law could put end to net memes

Memes, remixes and other user-generated content could disappear online if the EU’s proposed rules on copyright become law, warn experts.

Digital rights groups are campaigning against the Copyright Directive, which the European Parliament will vote on later this month.

The legislation aims to protect rights-holders in the internet age.

But critics say it misunderstands the way people engage with web content and risks excessive censorship.

The Copyright Directive is an attempt to reshape copyright for the internet, in particular rebalancing the relationship between copyright holders and online platforms.

Article 13 states that platform providers should “take measures to ensure the functioning of agreements concluded with rights-holders for the use of their works”.

Critics say this will, in effect, require all internet platforms to filter all content put online by users, which many believe would be an excessive restriction on free speech.

There is also concern that the proposals will rely on algorithms that will be programmed to “play safe” and delete anything that creates a risk for the platform.

A campaign against Article 13 – Copyright 4 Creativity – said that the proposals could “destroy the internet as we know it”.

“Should Article 13 of the Copyright Directive be adopted, it will impose widespread censorship of all the content you share online,” it said.

It is urging users to write to their MEP ahead of the vote on 20 June.

Jim Killock, executive director of the UK’s Open Rights Group, told the BBC: “Article 13 will create a ‘Robo-copyright’ regime, where machines zap anything they identify as breaking copyright rules, despite legal bans on laws that require ‘general monitoring’ of users to protect their privacy.

“Unfortunately, while machines can spot duplicate uploads of Beyonce songs, they can’t spot parodies, understand memes that use copyright images, or make any kind of cultural judgement about what creative people are doing. We see this all too often on YouTube already.”

Source: Copyright law could put end to net memes – BBC News

Facebook gave some companies special access to data on users’ friends

Facebook granted a select group of companies special access to its users’ records even after the point in 2015 at which the company claims it stopped sharing such data with app developers.

According to the Wall Street Journal, which cited court documents, unnamed Facebook officials and other unnamed sources, Facebook made special agreements with certain companies called “whitelists,” which gave them access to extra information about a user’s friends. This includes data such as phone numbers and “friend links,” which measure the degree of closeness between users and their friends.

These deals were made separately from the company’s data-sharing agreements with device manufacturers such as Huawei, which Facebook disclosed earlier this week after a New York Times report on the arrangement.

Source: Facebook gave some companies special access to data on users’ friends

The hits keep coming for Facebook: Web giant made 14m people’s private posts public

About 14 million people were affected by a bug that, for a nine-day span between May 18 and 27, caused profile posts to be set as public by default, allowing any Tom, Dick or Harriet to view the material.

“We recently found a bug that automatically suggested posting publicly when some people were creating their Facebook posts. We have fixed this issue and starting today we are letting everyone affected know and asking them to review any posts they made during that time,” Facebook chief privacy officer Erin Egan said in a statement to The Register.

Source: The hits keep coming for Facebook: Web giant made 14m people’s private posts public • The Register

How programmers addict you to social media, games and your mobile phone

If you look at the current climate, the largest companies are the ones that hook you into their channel, whether it is a game, a website, shopping or social media. Quite a lot of research has been done into how much time we spend watching TV and looking at our mobiles, showing differing numbers, all of which are surprisingly high. The New York Post says Americans check their phones 80 times per day, The Daily Mail says 110 times, Inc has a study from Qualtrics and Accel with 150 times, and Business Insider has people touching their phones 2,617 times per day.

This is nurtured behaviour, and there is quite a bit of research into exactly how it is done:

Social Networking Sites and Addiction: Ten Lessons Learned (academic paper)
Online social networking sites (SNSs) have gained increasing popularity in the last decade, with individuals engaging in SNSs to connect with others who share similar interests. The perceived need to be online may result in compulsive use of SNSs, which in extreme cases may result in symptoms and consequences traditionally associated with substance-related addictions. In order to present new insights into online social networking and addiction, in this paper, 10 lessons learned concerning online social networking sites and addiction based on the insights derived from recent empirical research will be presented. These are: (i) social networking and social media use are not the same; (ii) social networking is eclectic; (iii) social networking is a way of being; (iv) individuals can become addicted to using social networking sites; (v) Facebook addiction is only one example of SNS addiction; (vi) fear of missing out (FOMO) may be part of SNS addiction; (vii) smartphone addiction may be part of SNS addiction; (viii) nomophobia may be part of SNS addiction; (ix) there are sociodemographic differences in SNS addiction; and (x) there are methodological problems with research to date. These are discussed in turn. Recommendations for research and clinical applications are provided.

Hooked: How to Build Habit-Forming Products (Book)
Why do some products capture widespread attention while others flop? What makes us engage with certain products out of sheer habit? Is there a pattern underlying how technologies hook us?

Nir Eyal answers these questions (and many more) by explaining the Hook Model—a four-step process embedded into the products of many successful companies to subtly encourage customer behavior. Through consecutive “hook cycles,” these products reach their ultimate goal of bringing users back again and again without depending on costly advertising or aggressive messaging.

7 Ways Facebook Keeps You Addicted (and how to apply the lessons to your products) (article)

One of the key reasons why it is so addictive is “operant conditioning”. It is based upon the scientific principle of variable rewards, discovered by B. F. Skinner (an early exponent of the school of behaviourism) in the 1930s when performing experiments with rats.

The secret?

Not rewarding all actions but only randomly.

Most of our emails are boring business emails and occasionally we find an enticing email that keeps us coming back for more. That’s variable reward.

That’s one way Facebook creates addiction.
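The mechanic fits in a few lines of code. A toy simulation of a variable-ratio schedule, with an illustrative reward probability:

```python
# Toy variable-ratio reward schedule: reward actions randomly, not every time.
import random

REWARD_PROBABILITY = 0.25  # illustrative: roughly 1 in 4 checks pays off

def check_feed() -> bool:
    """One 'pull of the lever': did refreshing the feed produce a reward?"""
    return random.random() < REWARD_PROBABILITY

hits = sum(check_feed() for _ in range(20))
print(f"20 refreshes, {hits} hits of novelty")
# A fixed schedule (a reward every 4th check) is predictable and easy to walk
# away from; the random version keeps you pulling, which is Skinner's point.
```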

The Secret Ways Social Media Is Built for Addiction

On February 9, 2009, Facebook introduced the Like button. Initially, the button was an innocent thing. It had nothing to do with hijacking the social reward systems of a user’s brain.

“The main intention I had was to make positivity the path of least resistance,” explains Justin Rosenstein, one of the four Facebook designers behind the button. “And I think it succeeded in its goals, but it also created large unintended negative side effects. In a way, it was too successful.”

Today, most of us reach for Snapchat, Instagram, Facebook, or Twitter with one vague hope in mind: maybe someone liked my stuff. And it’s this craving for validation, experienced by billions around the globe, that’s currently pushing platform engagement in ways that in 2009 were unimaginable. But more than that, it’s driving profits to levels that were previously impossible.

“The attention economy” is a relatively new term. It describes the supply and demand of a person’s attention, which is the commodity traded on the internet. The business model is simple: the more attention a platform can pull, the more effective its advertising space becomes, allowing it to charge advertisers more.

Behavioral Game Design (article)

Every computer game is designed around the same central element: the player. While the hardware and software for games may change, the psychology underlying how players learn and react to the game is a constant. The study of the mind has actually come up with quite a few findings that can inform game design, but most of these have been published in scientific journals and other esoteric formats inaccessible to designers. Ironically, many of these discoveries used simple computer games as tools to explore how people learn and act under different conditions.

The techniques that I’ll discuss in this article generally fall under the heading of behavioral psychology. Best known for the work done on animals in the field, behavioral psychology focuses on experiments and observable actions. One hallmark of behavioral research is that most of the major experimental discoveries are species-independent and can be found in anything from birds to fish to humans. What behavioral psychologists look for (and what will be our focus here) are general “rules” for learning and for how minds respond to their environment. Because of the species- and context-free nature of these rules, they can easily be applied to novel domains such as computer game design. Unlike game theory, which stresses how a player should react to a situation, this article will focus on how they really do react to certain stereotypical conditions.

What is being offered here is not a blueprint for perfect games; it is a primer on some of the basic ways people react to different patterns of rewards. Every computer game is implicitly asking its players to react in certain ways. Psychology can offer a framework and a vocabulary for understanding what we are already telling our players.

5 Creepy Ways Video Games Are Trying to Get You Addicted (article)

The Slot Machine in Your Pocket (brilliant article!)

When we get sucked into our smartphones or distracted, we think it’s just an accident and our responsibility. But it’s not. It’s also because smartphones and apps hijack our innate psychological biases and vulnerabilities.

I learned about our minds’ vulnerabilities when I was a magician. Magicians start by looking for blind spots, vulnerabilities and biases of people’s minds, so they can influence what people do without them even realizing it. Once you know how to push people’s buttons, you can play them like a piano. And this is exactly what technology does to your mind. App designers play your psychological vulnerabilities in the race to grab your attention.

I want to show you how they do it, and offer hope that we have an opportunity to demand a different future from technology companies.

If you’re an app, how do you keep people hooked? Turn yourself into a slot machine.

There is also a backlash against these practices.

How Technology is Hijacking Your Mind — from a Magician and Google Design Ethicist

I’m an expert on how technology hijacks our psychological vulnerabilities. That’s why I spent the last three years as a Design Ethicist at Google caring about how to design things in a way that defends a billion people’s minds from getting hijacked.

Humanetech.com

Technology is hijacking our minds and society.

Our world-class team of deeply concerned former tech insiders and CEOs intimately understands the culture, business incentives, design techniques, and organizational structures driving how technology hijacks our minds.

Since 2013, we’ve raised awareness of the problem within tech companies and for millions of people through broad media attention, convened top industry executives, and advised political leaders. Building on this start, we are advancing thoughtful solutions to change the system.

Why is this problem so urgent?

Technology that tears apart our common reality and truth, constantly shreds our attention, or causes us to feel isolated makes it impossible to solve the world’s other pressing problems like climate change, poverty, and polarization.

No one wants technology like that. Which means we’re all actually on the same team: Team Humanity, to realign technology with humanity’s best interests.

What is Time Well Spent (Part I): Design Distinctions

With Time Well Spent, we want technology that cares about helping us spend our time, and our lives, well – not seducing us into the most screen time, always-on interruptions or distractions.

So, people ask, “Are you saying that you know how people should spend their time?” Of course not. Let’s first establish what Time Well Spent isn’t:

  • It is not a universal, normative view of how people should spend their time.
  • It is not saying that screen time is bad, or that we should turn it all off.
  • It is not saying that specific categories of apps (like social media or games) are bad.

You know that silly fear about Alexa recording everything and leaking it online? It just happened

It’s time to break out your “Alexa, I Told You So” banners – because a Portland, Oregon, couple received a phone call from one of the husband’s employees earlier this month, telling them she had just received a recording of them talking privately in their home.

“Unplug your Alexa devices right now,” the staffer told the couple, who did not wish to be fully identified, “you’re being hacked.”

At first the couple thought it might be a hoax call. However, the employee – over a hundred miles away in Seattle – confirmed the leak by revealing the pair had just been talking about their hardwood floors.

The recording had been sent from the couple’s Alexa-powered Amazon Echo to the phone of the employee, who is in the husband’s contacts list; she forwarded the audio to the wife, Danielle, who was amazed to hear herself talking about their floors. Suffice to say, this episode was unexpected. The couple had not instructed Alexa to spill a copy of their conversation to someone else.

[…]

According to Danielle, Amazon confirmed that it was the voice-activated digital assistant that had recorded and sent the file to a virtual stranger, and apologized profusely, but gave no explanation for how it may have happened.

“They said ‘our engineers went through your logs, and they saw exactly what you told us, they saw exactly what you said happened, and we’re sorry.’ He apologized like 15 times in a matter of 30 minutes and he said we really appreciate you bringing this to our attention, this is something we need to fix!”

She said she’d asked for a refund for all their Alexa devices – something the company has so far demurred from agreeing to.

Alexa, what happened? Sorry, I can’t respond to that right now

We asked Amazon for an explanation, and today the US giant responded confirming its software screwed up:

Amazon takes privacy very seriously. We investigated what happened and determined this was an extremely rare occurrence. We are taking steps to avoid this from happening in the future.

For this to happen, something has gone very seriously wrong with the Alexa device’s programming.

The machines are designed to constantly listen out for the “Alexa” wake word, filling a one-second audio buffer from the microphone at all times in anticipation of a command. When the wake word is detected in the buffer, the device records what is said until there is a gap in the conversation, and sends the audio to Amazon’s cloud system to transcribe it, figure out what needs to be done, and respond.
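In outline that is a rolling buffer feeding a detector, something like the sketch below. None of this is Amazon’s code; the microphone reader and wake-word detector are stand-ins:

```python
# Sketch of an always-on wake-word loop: hold one second of audio in a rolling
# buffer and start recording when the wake word shows up in it.
import random
from collections import deque

RATE = 16000                 # assumed sample rate (samples per second)
CHUNK = 1600                 # 100 ms of audio per read
window = deque(maxlen=RATE)  # rolling one-second buffer

def read_mic_chunk():
    """Stand-in for the microphone: random samples instead of real audio."""
    return [random.randint(-100, 100) for _ in range(CHUNK)]

def detect_wake_word(buf) -> bool:
    """Stand-in detector: fires rarely, like a genuinely (mis)heard 'Alexa'."""
    return random.random() < 0.01

while True:
    window.extend(read_mic_chunk())
    if detect_wake_word(window):
        recording = list(window)  # keep the second that triggered the wake-up,
        break                     # then record until a pause and upload it.

print(f"woke up with {len(recording)} buffered samples")
```

The Portland incident is what happens when the detector and the command interpreter both return false positives in a row.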

[…]

A spokesperson for Amazon has been in touch with more details on what happened during the Alexa Echo blunder, at least from their point of view. We’re told the device misheard its wake-up word while overhearing the couple’s private chat, started processing talk of wood floorings as commands, and it all went downhill from there. Here is Amazon’s explanation:

The Echo woke up due to a word in background conversation sounding like “Alexa.” Then, the subsequent conversation was heard as a “send message” request. At which point, Alexa said out loud “To whom?” At which point, the background conversation was interpreted as a name in the customer’s contact list. Alexa then asked out loud, “[contact name], right?” Alexa then interpreted background conversation as “right.” As unlikely as this string of events is, we are evaluating options to make this case even less likely.

Source: You know that silly fear about Alexa recording everything and leaking it online? It just happened • The Register

Google sued for ‘clandestine tracking’ of 4.4m UK iPhone users’ browsing data

Google is being sued in the high court for as much as £3.2bn for the alleged “clandestine tracking and collation” of personal information from 4.4 million iPhone users in the UK.

The collective action is being led by former Which? director Richard Lloyd over claims Google bypassed the privacy settings of Apple’s Safari browser on iPhones between August 2011 and February 2012 in order to divide people into categories for advertisers.

At the opening of an expected two-day hearing in London on Monday, lawyers for Lloyd’s campaign group Google You Owe Us told the court the information collected by Google included race, physical and mental health, political leanings, sexuality, social class, financial data, shopping habits and location data.

Hugh Tomlinson QC, representing Lloyd, said information was then “aggregated” and users were put into groups such as “football lovers” or “current affairs enthusiasts” for the targeting of advertising.

Tomlinson said the data was gathered through “clandestine tracking and collation” of browsing on the iPhone, known as the “Safari Workaround” – an activity he said was exposed by a PhD researcher in 2012. Tomlinson said Google has already paid $39.5m to settle claims in the US relating to the practice. Google was fined $22.5m for the practice by the US Federal Trade Commission in 2012 and forced to pay $17m to 37 US states.

Speaking ahead of the hearing, Lloyd said: “I believe that what Google did was quite simply against the law.

“Their actions have affected millions in England and Wales and we’ll be asking the judge to ensure they are held to account in our courts.”

The campaign group hopes to win at least £1bn in compensation for an estimated 4.4 million iPhone users. Court filings show Google You Owe Us could be seeking as much as £3.2bn, meaning claimants could receive £750 per individual if successful.

Google contends the type of “representative action” being brought against it by Lloyd is unsuitable and should not go ahead. The company’s lawyers said there is no suggestion the Safari Workaround resulted in any information being disclosed to third parties.

Source: Google sued for ‘clandestine tracking’ of 4.4m UK iPhone users’ browsing data | Technology | The Guardian

Note: Google does not contest the Safari Workaround itself, though.

Teensafe spying app leaked thousands of user passwords

At least one server used by an app for parents to monitor their teenagers’ phone activity has leaked tens of thousands of accounts of both parents and children.

The mobile app, TeenSafe, bills itself as a “secure” monitoring app for iOS and Android, which lets parents view their child’s text messages and location, monitor who they’re calling and when, access their web browsing history, and find out which apps they have installed.

Although teen monitoring apps are controversial and privacy-invasive, the company says it doesn’t require parents to obtain the consent of their children.

But the Los Angeles, Calif.-based company left its servers, hosted on Amazon’s cloud, unprotected and accessible by anyone without a password.

Source: Teen phone monitoring app leaked thousands of user passwords | ZDNet

Which basically means that, besides nosy parents spying on their children, anyone else could have been doing so too.

Google Removes ‘Don’t Be Evil’ Clause From Its Code of Conduct

Google’s unofficial motto has long been the simple phrase “don’t be evil.” But that’s over, according to the code of conduct that Google distributes to its employees. The phrase was removed sometime in late April or early May, archives hosted by the Wayback Machine show.

“Don’t be evil” has been part of the company’s corporate code of conduct since 2000. When Google was reorganized under a new parent company, Alphabet, in 2015, Alphabet assumed a slightly adjusted version of the motto, “do the right thing.” However, Google retained its original “don’t be evil” language until the past several weeks. The phrase has been deeply incorporated into Google’s company culture—so much so that a version of the phrase has served as the wifi password on the shuttles that Google uses to ferry its employees to its Mountain View headquarters, sources told Gizmodo.

[…]

Despite this significant change, Google’s code of conduct says it has not been updated since April 5, 2018.

The updated version of Google’s code of conduct still retains one reference to the company’s unofficial motto—the final line of the document is still: “And remember… don’t be evil, and if you see something that you think isn’t right – speak up!”

Source: Google Removes ‘Don’t Be Evil’ Clause From Its Code of Conduct

Tracking Firm LocationSmart Leaked Location Data for Customers of All Major U.S. Mobile Carriers Without Consent in Real Time Via Its Web Site

LocationSmart, a U.S. based company that acts as an aggregator of real-time data about the precise location of mobile phone devices, has been leaking this information to anyone via a buggy component of its Web site — without the need for any password or other form of authentication or authorization — KrebsOnSecurity has learned. The company took the vulnerable service offline early this afternoon after being contacted by KrebsOnSecurity, which verified that it could be used to reveal the location of any AT&T, Sprint, T-Mobile or Verizon phone in the United States to an accuracy of within a few hundred yards.

Source: Tracking Firm LocationSmart Leaked Location Data for Customers of All Major U.S. Mobile Carriers Without Consent in Real Time Via Its Web Site — Krebs on Security

Scarily, this means the service can still be used to track anyone, if you’re willing to pay for it.

UK Watchdog Calls for Face Recognition Ban Over 90 Percent False-Positive Rate

As face recognition in public places becomes more commonplace, Big Brother Watch is especially concerned with false identification. In May, South Wales Police revealed that its face-recognition software had erroneously flagged thousands of attendees of a soccer game as a match for criminals; 92 percent of the matches were wrong. In a statement to the BBC, Matt Jukes, the chief constable in South Wales, said “we need to use technology when we’ve got tens of thousands of people in those crowds to protect everybody, and we are getting some great results from that.”

If someone is misidentified as a criminal or flagged, police may engage and ask for further identification. Big Brother Watch argues that this amounts to “hidden identity checks” that require people to “prove their identity and thus their innocence.” 110 people were stopped at the event after being flagged, leading to 15 arrests.

Simply walking through a crowd could lead to an identity check, but it doesn’t end there. South Wales reported more than 2,400 “matches” between May 2017 and March 2018, but ultimately made only 15 connecting arrests. The thousands of photos taken, however, are still stored in the system, with the overwhelming majority of people having no idea they even had their photo taken.
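The lopsided numbers are exactly what base rates predict: scan a huge crowd for a tiny watchlist and even a fairly accurate matcher produces mostly false alarms. A back-of-the-envelope sketch with illustrative parameters, not South Wales Police’s actual figures:

```python
# Why crowd face scanning yields mostly false matches: base rates.
# All parameters are illustrative, not South Wales Police's real numbers.
crowd = 50_000           # attendees scanned
wanted_present = 50      # people in the crowd actually on the watchlist
sensitivity = 0.90       # chance a wanted person is correctly flagged
false_positive = 0.01    # chance an innocent attendee is wrongly flagged

true_alerts = wanted_present * sensitivity
false_alerts = (crowd - wanted_present) * false_positive
precision = true_alerts / (true_alerts + false_alerts)

print(f"{true_alerts:.0f} real hits, {false_alerts:.0f} false alarms")
print(f"precision: {precision:.0%}")  # ~8%, i.e. roughly 92% of alerts wrong
```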

Source: UK Watchdog Calls for Face Recognition Ban Over 90 Percent False-Positive Rate

Facebook admits it does track non-users, for their own good

Facebook’s apology-and-explanation machine grinds on, with The Social Network™ posting detail on one of its most controversial activities – how it tracks people who don’t use Facebook.

The company explained that the post is a partial response to questions CEO Mark Zuckerberg was unable to answer during his Senate and congressional hearings.

It’s no real surprise that someone using their Facebook Login to sign in to other sites is tracked, but the post by product management director David Baser goes into (a little) detail on other tracking activities – some of which have been known to the outside world for some time, occasionally denied by Facebook, and apparently mysteries only to Zuck.

When non-Facebook sites add a “Like” button (a social plugin, in Baser’s terminology), visitors to those sites are tracked: Facebook gets their IP address, browser and OS fingerprint, and visited site.

If that sounds a bit like the datr cookie dating from 2011, you wouldn’t be far wrong.

Facebook denied non-user tracking until 2015, at which time it emphasised that it was only gathering non-users’ interactions with Facebook users. That explanation didn’t satisfy everyone, which was why The Social Network™ was ordered earlier this year to quit tracking Belgians who haven’t signed up.

Source: Facebook admits it does track non-users, for their own good • The Register

The Golden State Killer Suspect’s DNA Was in a Publicly Available Database, and Yours Might Be Too

Plenty of people have voluntarily uploaded their DNA to GEDmatch and other databases, often with real names and contact information. It’s what you do if you’re an adopted kid looking for a long-lost parent, or a genealogy buff curious about whether you have any cousins still living in the old country. GEDmatch requires that you make your DNA data public if you want to use their comparison tools, although you don’t have to attach your real name. And they’re not the only database that has helped law enforcement track people down without their knowledge.

How DNA Databases Help Track People Down

We don’t know exactly what samples or databases were used in the Golden State Killer’s case; the Sacramento County District Attorney’s office gave very little information and hasn’t confirmed any further details. But here are some things that are possible.

Y chromosome data can lead to a good guess at an unknown person’s last name.

Cis men typically have an X and a Y chromosome, and cis women two X’s. That means the Y chromosome is passed down from genetic males to their offspring—for example, from father to son. Since last names are also often handed down the same way, in many families you’ll share a surname with anybody who shares your Y chromosome.

A 2013 Science paper described how a small amount of Y chromosome data should be enough to identify surnames for an estimated 12 percent of white males in the US. (That method would find the wrong surname for 5 percent, and the rest would come back as unknown.) As more people upload their information to public databases, the authors warned, the success rate will only increase.

This is exactly the technique that genealogical consultant Colleen Fitzpatrick used to narrow down a pool of suspects in an Arizona cold case. She seems to have used short tandem repeat (STR) data from the suspect’s Y chromosome to search the Family Tree DNA database, and she saw the name Miller in the results.

The police already had a long list of suspects in the Arizona case, but based on that tip they zeroed in on one with the last name Miller. As with the Golden State Killer case, police confirmed the DNA match by obtaining a fresh DNA sample directly from their subject—the Sacramento office said they got it from something he discarded. (Yes, this is legal, and it can be an item as ordinary as a used drinking straw.)

The authors of the Science paper point out that surname, location, and year of birth are often enough to find an individual in census data.
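Mechanically, the surname inference is a nearest-profile lookup: compare the suspect’s Y-STR repeat counts against database profiles that carry surnames. A toy sketch in which the markers, repeat counts and names are all invented:

```python
# Toy Y-STR surname inference: find the database profile closest to a query.
# Markers, repeat counts and surnames are invented for illustration.
DATABASE = {
    "Miller": {"DYS19": 14, "DYS390": 24, "DYS391": 11, "DYS393": 13},
    "Garcia": {"DYS19": 15, "DYS390": 22, "DYS391": 10, "DYS393": 12},
    "Novak":  {"DYS19": 13, "DYS390": 25, "DYS391": 11, "DYS393": 14},
}

query = {"DYS19": 14, "DYS390": 24, "DYS391": 11, "DYS393": 13}

def distance(a, b):
    """Number of markers whose repeat counts differ between two profiles."""
    return sum(a[marker] != b[marker] for marker in a)

best = min(DATABASE, key=lambda name: distance(query, DATABASE[name]))
print(best)  # "Miller" - a lead for investigators, not an identification
```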

SNP files can find family trees.

When you download your “raw data” after mailing in a 23andme or Ancestry test, what you get is a list of locations on your genome (called SNPs, for single nucleotide polymorphisms) and two letters indicating your status for each. For example, at a certain SNP you may have inherited an A from one parent and a G from the other.

Genetic testing sites will have tools to compare your DNA with others in their database, but you can also download your raw data and submit it to other sites, including GEDmatch or Family Tree DNA. (23andme and Ancestry allow you to download your data, but they don’t accept uploads.)

But you don’t have to send a spit sample to one of those companies to get a raw data file. The DNA Doe project describes how they sequenced the whole genome of an unidentified girl from a cold case and used that data to construct a SNP file to upload to GEDmatch. They found someone with enough of the same SNPs that they were probably a close cousin. That cousin also had an account at Ancestry, where they had filled out a family tree with details of their family members. The tree included an entry for a cousin of the same age as the unidentified girl, and whose death date was listed as “missing—presumed dead.” It was her.
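Those raw files are plain text, one SNP per line, which is what makes cross-database comparison so easy. A sketch that loads two 23andMe-style files and estimates how alike two people are; the file names are placeholders, and real matching tools use far more careful statistics:

```python
# Estimate relatedness from two 23andMe-style raw data files.
# Each data line: rsid <tab> chromosome <tab> position <tab> genotype
def load_snps(path):
    """Map rsid -> genotype, skipping the commented header lines."""
    snps = {}
    with open(path) as f:
        for line in f:
            if line.startswith("#") or not line.strip():
                continue
            rsid, _chrom, _pos, genotype = line.split()
            snps[rsid] = genotype
    return snps

a = load_snps("person_a.txt")  # placeholder file names
b = load_snps("person_b.txt")
shared = set(a) & set(b)
matching = sum(a[rsid] == b[rsid] for rsid in shared)
print(f"{matching / len(shared):.1%} identical genotypes at {len(shared)} shared SNPs")
# Close relatives sit well above the baseline you would see between strangers.
```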

Your DNA Is Not Just Yours

When you send in a spit sample, or upload a raw data file, you may only be thinking about your own privacy. I have nothing to hide, you might tell yourself. Who cares if somebody finds out that I have blue eyes or a predisposition to heart disease?

But half of your DNA belongs to your biological mother, and half to your biological father. Another half—cut a different way—belongs to each of your children. On average, you share half your DNA with a sibling, and a quarter with a half-sibling, grandparent, aunt, uncle, niece or nephew. You share about an eighth with a first cousin, and so on. The more of your extended family who are into genealogy, the more likely you are to have your DNA in a public database, already contributed by a relative.

In the cases we mention here, the breakthrough came when DNA was matched, through a public database, to a person’s real name. But your DNA is, in a sense, your most identifying information.

For some cases, it may not matter whether your name is attached. Facebook reportedly spoke with a hospital about exchanging anonymized data. They didn’t need names because they had enough information, and good enough algorithms, that they thought they could identify individuals based on everything else. (Facebook doesn’t currently collect DNA information, thank god. There is a public DNA project that signs people up using a Facebook app, but they say they don’t pass the data to Facebook itself.)

And remember that 2013 study about tracking down people’s surnames? They grabbed whole-genome data from a few high-profile people who had made theirs public, and showed that the DNA files were sometimes enough information to track down an individual’s full name. It may be impossible for DNA to be totally anonymous.

Can You Protect Your Privacy While Using DNA Databases?

If you’re very concerned about privacy, you’re best off not using any of these databases. But you can’t control whether your relatives use them, and you may be looking for a long-lost family member and thus want to be in a database while minimizing the risks.

Source: The Golden State Killer Suspect’s DNA Was in a Publicly Available Database, and Yours Might Be Too