Nostalgic social network ‘Timehop’ loses data from 21 million users

A service named “Timehop” that claims it is “reinventing reminiscing” – in part by linking posts from other social networks – probably wishes it could go back in time and reinvent its own security. The company has just confessed to losing data describing 21 million members, and can’t guarantee that the perps didn’t slurp private info from users’ social media accounts.

“On July 4, 2018, Timehop experienced a network intrusion that led to a breach of some of your data,” the company wrote. “We learned of the breach while it was still in progress, and were able to interrupt it, but data was taken.”

Names and email addresses were lifted, as were “Keys that let Timehop read and show you your social media posts (but not private messages)”. Timehop has “deactivated these keys so they can no longer be used by anyone – so you’ll have to re-authenticate to our App.”

The breach also led to the loss of access tokens Timehop uses to access other social networks such as Twitter, Facebook and Instagram and the posts you’ve made there. Timehop swears blind that the tokens have been revoked and just won’t work any more.

But the company has also warned that “there was a short time window during which it was theoretically possible for unauthorized users to access those posts” but has “no evidence that this actually happened.”

It can’t be as almost-comforting on the matter of purloined phone numbers, advising those who shared such data that “It is recommended that you take additional security precautions with your cellular provider to ensure that your number cannot be ported.” Oh, thanks for that, Timehop.

And thanks, also, for not using two-factor authentication, because that’s what made the crack possible. “The breach occurred because an access credential to our cloud computing environment was compromised,” the company admitted. “That cloud computing account had not been protected by multifactor authentication. We have now taken steps that include multifactor authentication to secure our authorization and access controls on all accounts.”
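Multifactor authentication, the control Timehop says its cloud account lacked, typically pairs a credential with a one-time code. As a minimal illustrative sketch (not Timehop’s actual setup), a time-based one-time password (TOTP, RFC 6238) can be computed with nothing beyond Python’s standard library:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, timestamp: int, step: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password (SHA-1 variant).

    In production, pass int(time.time()) as the timestamp.
    """
    counter = timestamp // step                      # which 30-second window we're in
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", T = 59 s, 8 digits
assert totp(b"12345678901234567890", 59, digits=8) == "94287082"
```

A stolen password or access key alone then fails the login: the attacker would also need the current code, which changes every 30 seconds.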

All of which leaves users in the same place as usual: with work to do, knowing that if their service providers had done their jobs properly they’d feel a lot safer.

Source: Nostalgic social network ‘Timehop’ loses data from 21 million users

‘Plane Hacker’ Roberts: I put a network sniffer on my truck to see what it was sharing. Holy crap!

Cars are turning into computers on wheels and airplanes have become flying data centres, but this increase in power and connectivity has largely happened without designing in adequate security controls.

Improving transportation security was a major strand of the recent Cyber Week security conference in Israel, which included Speed of Light, a one-day event focused on transportation cybersecurity at which Roberts served as master of ceremonies.


“Israel was here, not just a couple of companies. Israel is going, ‘We as a state, we as a country, need to understand [about transportation security]’,” Roberts said. “We need to learn.”

“In other places it’s the companies. GM is great. Ford is good. Some of the German companies are good. Fiat-Chrysler Group has got a lot of work to do.”

Some industries are more advanced than others at understanding cybersecurity risks, Roberts claimed. For example, awareness in the automobile industry is ahead of that found in aviation.

“Boeing is in denial. Airbus is kind of on the fence. Some of the other industries are better.”


There’s almost nothing you can do [as a user] to improve car security, Roberts argued. The only thing you can do is go back to the garage every month for a Patch Tuesday-style round of updates from Ford or GM.

“You better come in once a month for your patches because if you don’t, the damn thing is not going to work.”

What about over-the-air updates? These may not always be reliable, Roberts warned.

“What happens if you’re in the middle of a dead spot? Or you’re in the middle of a developing country that doesn’t have that? What about the Toyotas that get sold to the Middle East or Far East, to countries that don’t have 4G or 5G coverage? And what happens when you move around countries?”


“I put a network sniffer on the big truck to see what it was sharing. Holy crap! The GPS, the telemetry, the tracking. There’s a lot of data this thing is sharing.

“If you turn it off you might be voiding warranties or [bypassing] security controls,” Roberts said, adding that there was also an issue about who owns the data a car generates. “Is it there to protect me or monitor me?” he mused.

Some insurance firms offer cheaper insurance to careful drivers, based on readings from telemetry devices and sensors. Roberts is dead set against this for privacy reasons. “Insurance can go to hell. For me, getting a 5 per cent discount on my insurance is not worth accepting a tracking device from an insurance company.”

Source: ‘Plane Hacker’ Roberts: I put a network sniffer on my truck to see what it was sharing. Holy crap! • The Register

Is Facebook a publisher? In public it says no, but in court it says yes

Facebook has long had the same public response when questioned about its disruption of the news industry: it is a tech platform, not a publisher or a media company.

But in a small courtroom in California’s Redwood City on Monday, attorneys for the social media company presented a different message from the one executives have made to Congress, in interviews and in speeches: Facebook, they repeatedly argued, is a publisher, and a company that makes editorial decisions, which are protected by the first amendment.

The contradictory claim is Facebook’s latest tactic against a high-profile lawsuit, exposing a growing tension for the Silicon Valley corporation, which has long presented itself as a neutral platform that does not have traditional journalistic responsibilities.

The suit, filed by an app startup, alleges that Mark Zuckerberg developed a “malicious and fraudulent scheme” to exploit users’ personal data and force rival companies out of business. Facebook, meanwhile, is arguing that its decisions about “what not to publish” should be protected because it is a “publisher”.

In court, Sonal Mehta, a lawyer for Facebook, even drew a comparison with traditional media: “The publisher discretion is a free speech right irrespective of what technological means is used. A newspaper has a publisher function whether they are doing it on their website, in a printed copy or through the news alerts.”

The plaintiff, a former startup called Six4Three, first filed the suit in 2015 after Facebook removed app developers’ access to friends’ data. The company had built a controversial and ultimately failed app called Pikinis, which allowed people to filter photos to find ones with people in bikinis and other swimwear.

Six4Three attorneys have alleged that Facebook enticed developers to create apps for its platform by implying creators would have long-term access to the site’s huge amounts of valuable personal data and then later cut off access, effectively defrauding them. The case delves into some of the privacy concerns sparked by the Cambridge Analytica scandal.

Source: Is Facebook a publisher? In public it says no, but in court it says yes | Technology | The Guardian

More on how social media hacks brains to addict users

In a follow-up to “How programmers addict you to social media, games and your mobile phone”:

Ex-Facebook president Sean Parker: site made to exploit human ‘vulnerability’

He explained that when Facebook was being developed the objective was: “How do we consume as much of your time and conscious attention as possible?” It was this mindset that led to the creation of features such as the “like” button that would give users “a little dopamine hit” to encourage them to upload more content.

“It’s a social-validation feedback loop … exactly the kind of thing that a hacker like myself would come up with, because you’re exploiting a vulnerability in human psychology.”


Parker is not the only Silicon Valley entrepreneur to express regret over the technologies he helped to develop. The former Googler Tristan Harris was one of several techies interviewed by the Guardian in October who criticized the industry.

“All of us are jacked into this system,” he said. “All of our minds can be hijacked. Our choices are not as free as we think they are.”

Aza Raskin on Google Search Results and How He Invented the Infinite Scroll

Social media apps are ‘deliberately’ addictive to users

Social media companies are deliberately addicting users to their products for financial gain, Silicon Valley insiders have told the BBC’s Panorama programme.

“It’s as if they’re taking behavioural cocaine and just sprinkling it all over your interface and that’s the thing that keeps you like coming back and back and back”, said former Mozilla and Jawbone employee Aza Raskin.

“Behind every screen on your phone, there are generally like literally a thousand engineers that have worked on this thing to try to make it maximally addicting” he added.

In 2006 Mr Raskin, a leading technology engineer himself, designed infinite scroll, one of the features of many apps that is now seen as highly habit forming. At the time, he was working for Humanized – a computer user-interface consultancy.

Image caption: Aza Raskin says he did not recognise how addictive infinite scroll could be

Infinite scroll allows users to endlessly swipe down through content without clicking.

“If you don’t give your brain time to catch up with your impulses,” Mr Raskin said, “you just keep scrolling.”

He said the innovation kept users looking at their phones far longer than necessary.
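The mechanic is easy to sketch. This toy Python generator (purely illustrative; `fetch_page` is a made-up callback) shows why the pattern removes the natural stopping point: the next page is fetched the instant the current one runs out, so there is never a “click for more” pause in which the brain can catch up:

```python
from typing import Callable, Iterator, List

def infinite_feed(fetch_page: Callable[[int], List[str]]) -> Iterator[str]:
    """Yield feed items continuously, pulling the next page as soon as one runs out.

    A click-to-continue design would stop after each page and wait for input;
    here the only exit is an empty page (or the user putting the phone down).
    """
    page = 0
    while True:
        items = fetch_page(page)
        if not items:          # the feed has genuinely run dry
            return
        yield from items       # no pause, no click: straight into the next batch
        page += 1

# Hypothetical three-page feed for demonstration.
pages = [["a", "b"], ["c", "d"], ["e"]]
feed = infinite_feed(lambda p: pages[p] if p < len(pages) else [])
assert list(feed) == ["a", "b", "c", "d", "e"]
```

In a real app the pages never run dry, which is the point Raskin is making.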

Mr Raskin said he had not set out to addict people and now felt guilty about it.

But, he said, many designers were driven to create addictive app features by the business models of the big companies that employed them.

“In order to get the next round of funding, in order to get your stock price up, the amount of time that people spend on your app has to go up,” he said.

“So, when you put that much pressure on that one number, you’re going to start trying to invent new ways of getting people to stay hooked.”

Is My Phone Recording Everything I Say? It turns out it sends screenshots and videos of what you do

Some computer science academics at Northeastern University had heard enough people talking about this technological myth that they decided to do a rigorous study to tackle it. For the last year, Elleen Pan, Jingjing Ren, Martina Lindorfer, Christo Wilson, and David Choffnes ran an experiment involving more than 17,000 of the most popular apps on Android to find out whether any of them were secretly using the phone’s mic to capture audio. The apps included those belonging to Facebook, as well as over 8,000 apps that send information to Facebook.

Sorry, conspiracy theorists: They found no evidence of an app unexpectedly activating the microphone or sending audio out when not prompted to do so. Like good scientists, they refuse to say that their study definitively proves that your phone isn’t secretly listening to you, but they didn’t find a single instance of it happening. Instead, they discovered a different disturbing practice: apps recording a phone’s screen and sending that information out to third parties.

Of the 17,260 apps the researchers looked at, over 9,000 had permission to access the camera and microphone and thus the potential to overhear the phone’s owner talking about their need for cat litter or about how much they love a certain brand of gelato. Using 10 Android phones, the researchers used an automated program to interact with each of those apps and then analyzed the traffic generated. (A limitation of the study is that the automated phone users couldn’t do things humans could, like creating usernames and passwords to sign into an account on an app.) They were looking specifically for any media files that were sent, particularly when they were sent to an unexpected party.
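The core of such an analysis can be sketched simply, though the researchers’ real pipeline was far more elaborate. Assuming captured traffic has already been decoded into records carrying a destination host and a MIME type (the field names below are invented for illustration), flagging unexpected media leaving the device reduces to a filter:

```python
# MIME types that indicate a media file rather than ordinary app chatter.
MEDIA_TYPES = {
    "image/png", "image/jpeg", "image/gif",
    "video/mp4", "video/webm",
    "audio/aac", "audio/mpeg", "audio/wav",
}

def flag_media_flows(flows, expected_hosts):
    """Return outbound flows that carry media to a host we did not expect.

    Each flow is a dict like {"host": ..., "content_type": ...}; a real
    analysis would decode these records from a packet capture.
    """
    suspicious = []
    for flow in flows:
        # Strip any "; charset=..." suffix before comparing MIME types.
        ctype = flow.get("content_type", "").split(";")[0].strip().lower()
        if ctype in MEDIA_TYPES and flow.get("host") not in expected_hosts:
            suspicious.append(flow)
    return suspicious

# Toy example: a screen recording leaving for an analytics domain.
flows = [
    {"host": "api.example-app.com", "content_type": "application/json"},
    {"host": "collector.analytics.example", "content_type": "video/mp4"},
]
assert flag_media_flows(flows, {"api.example-app.com"}) == [flows[1]]
```

It was exactly this kind of “media file sent to an unexpected party” signal that surfaced the screen-recording behaviour described below.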

Photo caption: These phones were used to exercise thousands of apps, to see whether any would secretly activate the microphone. Photo: David Choffnes (Northeastern University)

The strange practice they started to see was that screenshots and video recordings of what people were doing in apps were being sent to third-party domains. For example, when one of the phones used an app from GoPuff, a delivery start-up for people who have sudden cravings for junk food, the interaction with the app was recorded and sent to a domain affiliated with Appsee, a mobile analytics company. The video included a screen where personal information could be entered – in this case, a zip code.


In other words, until smartphone makers notify you when your screen is being recorded or give you the power to turn that ability off, you have a new thing to be paranoid about. The researchers will be presenting their work at the Privacy Enhancing Technologies Symposium in Barcelona next month. (While in Spain, they might want to check out the country’s most popular soccer app, which has given itself permission to access users’ smartphone mics to listen for illegal broadcasts of games in bars.)

The researchers weren’t comfortable saying for sure that your phone isn’t secretly listening to you in part because there are some scenarios not covered by their study. Their phones were being operated by an automated program, not by actual humans, so they might not have triggered apps the same way a flesh-and-blood user would. And the phones were in a controlled environment, not wandering the world in a way that might trigger them: For the first few months of the study the phones were near students in a lab at Northeastern University and thus surrounded by ambient conversation, but the phones made so much noise, as apps were constantly being played with on them, that they were eventually moved into a closet. (If the researchers did the experiment again, they would play a podcast on a loop in the closet next to the phones.) It’s also possible that the researchers could have missed audio recordings of conversations if the app transcribed the conversation to text on the phone before sending it out. So the myth can’t be entirely killed yet.

Source: Is My Phone Recording Everything I Say?

Could electrically stimulating criminals’ brains prevent crime?

A new study by a team of international researchers from the University of Pennsylvania and Nanyang Technological University suggests that electrically stimulating the prefrontal cortex can reduce the desire to carry out violent antisocial acts by over 50 percent. The research, while undeniably compelling, raises a whole host of confronting ethical questions, not just over the feasibility of actually bringing this technology into our legal system but over whether we should.

The intriguing experiment took 81 healthy adults and split them into two groups. One group received transcranial direct-current stimulation (tDCS) on the dorsolateral prefrontal cortex for 20 minutes, while the other placebo group received just 30 seconds of current and then nothing for the remaining 19 minutes.

Following the electrical stimulation all the participants were presented with two vignettes and asked to rate, from 0 to 10, how likely they would be to behave as the protagonist in the stories. One hypothetical scenario outlined a physical assault, while the other was about sexual assault. The results were fascinating, with participants receiving the tDCS reporting they would be between 47 and 70 percent less likely to carry out the violent acts compared to the blind placebo control.

“We chose our approach and behavioral tasks specifically based on our hypotheses about which brain areas might be relevant to generating aggressive intentions,” says Roy Hamilton, senior author on the study. “We were pleased to see at least some of our major predictions borne out.”


Transcranial direct-current stimulation is a little different to electroshock therapy or, more accurately, electroconvulsive therapy (ECT). Classical ECT involves significant electrical stimulation to the brain at thresholds intended to induce seizures. It is also not especially targeted, shooting electrical currents across the whole brain.

On the other hand, tDCS is much more subtle, delivering a continual low direct current to specific areas of the brain via electrodes on the head. The level of electrical current administered in tDCS sessions is often imperceptible to the subject, and at worst it tends to cause no more than a mild skin irritation.


Despite TMS (transcranial magnetic stimulation) being the more commonly used approach for neuromodulation in current clinical practice, tDCS is perhaps a more pragmatic and implementable form of the technology. Unlike TMS, tDCS is cheaper and easier to administer, can often be used at home, and would be much more straightforward to integrate into widespread use.

Of course, the reality of what is being implied here is a lot more complicated than simply finding the most appropriate technology. Roy Hamilton quite rightly notes in relation to his new study that, “The ability to manipulate such complex and fundamental aspects of cognition and behavior from outside the body has tremendous social, ethical, and possibly someday legal implications.”


Of course, while the burgeoning field of neurolaw is grappling with what this research means for legal ideas of individual responsibility, this new study raises a whole host of complicated ethical and social questions. If a short, and non-invasive, series of targeted tDCS sessions could reduce recidivism, then should we consider using it in prisons?

“Much of the focus in understanding causes of crime has been on social causation,” says psychologist Adrian Raine, co-author on the new study. “That’s important, but research from brain imaging and genetics has also shown that half of the variance in violence can be chalked up to biological factors. We’re trying to find benign biological interventions that society will accept, and transcranial direct-current stimulation is minimal risk. This isn’t a frontal lobotomy. In fact, we’re saying the opposite, that the front part of the brain needs to be better connected to the rest of the brain.”

Italian neurosurgeon Sergio Canavero penned a controversial essay in 2014 for the journal Frontiers in Human Neuroscience arguing that non-invasive neurostimulation should be experimentally applied to criminal psychopaths and repeat offenders despite any legal or ethical dilemmas. Canavero argues it is “imperative to ‘switch’ [a criminal’s] right/wrong circuitry to a socially non-disruptive mode.”

The quite dramatic proposal is to “remodel” a criminal’s “aberrant circuits” via either a series of intermittent brain-stimulation treatments or, more startlingly, some kind of implanted intracranial electrode system that can both electrically modulate key areas of the brain and remotely monitor behaviorally inappropriate neurological activity.

This isn’t the first time Canavero has suggested extraordinary medical experiments. You might remember his name from his ongoing work to be the first surgeon to perform a human head transplant.


“This is not the magic bullet that’s going to wipe away aggression and crime,” says Raine. “But could transcranial direct-current stimulation be offered as an intervention technique for first-time offenders to reduce their likelihood of recommitting a violent act?”

The key question of consent is one that many researchers aren’t really grappling with. Of course, there’s no chance convicted criminals would ever be forced to undergo this kind of procedure in a future where neuromodulation is integrated into our legal system. And behavioral alteration through electrical brain stimulation would never be forced upon people who don’t comply with social norms – right?

This is the infinitely compelling brave new world of neuroscience.

Source: Could electrically stimulating criminals’ brains prevent crime?

This Sand Printer Seems Perfect for Beach Wedding Proposals

Wedding proposals are just one of the many minefields you have to navigate on social media platforms, and Ivan Miranda isn’t making things any easier. He’s designed and built an autonomous printer that can draw messages in sand, so now’s probably a good time to brace yourself for an endless barrage of “will you marry me?” beach proposals clogging up your feeds.

Miranda’s sand printer uses techniques borrowed from the classic dot-matrix printers that were a hallmark of home publishing in the ‘80s and ‘90s. An over-sized print head travels back and forth between sets of large wheels that slowly roll the entire printer across the beach. As the print head moves, an etching tool lowers and raises to carve lines in the sand that eventually form longer messages.

It’s a slow process, especially for those of us who’ve become accustomed to speedy laser printers churning out multiple pages per minute. But the results are far more Instagram-friendly than trying to write an endearing message in the sand with a stick.

Source: This Sand Printer Seems Perfect for Beach Wedding Proposals

Europe is reading smartphones and using the data as a weapon to deport refugees

Across the continent, migrants are being confronted by a booming mobile forensics industry that specialises in extracting a smartphone’s messages, location history, and even WhatsApp data. That information can potentially be turned against the phone owners themselves.

In 2017 both Germany and Denmark expanded laws that enabled immigration officials to extract data from asylum seekers’ phones. Similar legislation has been proposed in Belgium and Austria, while the UK and Norway have been searching asylum seekers’ devices for years.

Following right-wing gains across the EU, beleaguered governments are scrambling to bring immigration numbers down. Tackling fraudulent asylum applications seems like an easy way to do that. As European leaders met in Brussels last week to thrash out a new, tougher framework to manage migration —which nevertheless seems insufficient to placate Angela Merkel’s critics in Germany— immigration agencies across Europe are showing new enthusiasm for laws and software that enable phone data to be used in deportation cases.

Admittedly, some refugees do lie on their asylum applications. Omar – not his real name – certainly did. He travelled to Germany via Greece. Even for Syrians like him there were few legal alternatives into the EU. But his route meant he could face deportation under the EU’s Dublin regulation, which dictates that asylum seekers must claim refugee status in the first EU country they arrive in. For Omar, that would mean settling in Greece – hardly an attractive destination considering its high unemployment and stretched social services.

Last year, more than 7,000 people were deported from Germany under the Dublin regulation. If Omar’s phone had been searched, he could have become one of them, as his location history would have revealed his route through Europe, including his arrival in Greece.

But before his asylum interview, he met Lena – also not her real name. A refugee advocate and businesswoman, Lena had read about Germany’s new surveillance laws. She encouraged Omar to throw his phone away and tell immigration officials it had been stolen in the refugee camp where he was staying. “This camp was well-known for crime,” says Lena, “so the story seemed believable.” His application is still pending.

Omar is not the only asylum seeker to hide phone data from state officials. When sociology professor Marie Gillespie researched phone use among migrants travelling to Europe in 2016, she encountered widespread fear of mobile phone surveillance. “Mobile phones were facilitators and enablers of their journeys, but they also posed a threat,” she says. In response, she saw migrants who kept up to 13 different SIM cards, hiding them in different parts of their bodies as they travelled.


Denmark is taking this a step further, by asking migrants for their Facebook passwords. Refugee groups note how the platform is being used more and more to verify an asylum seeker’s identity.


The Danish immigration agency confirmed that it does ask asylum applicants for access to their Facebook profiles. While this is not standard procedure, it can be used if a caseworker feels more information is needed. If an applicant refuses consent, caseworkers tell them they are obliged to comply under Danish law. At present the agency only uses Facebook – not Instagram or other social platforms.


“In my view, it’s a violation of ethics on privacy to ask for a password to Facebook or open somebody’s mobile phone,” says Michala Clante Bendixen of Denmark’s Refugees Welcome movement. “For an asylum seeker, this is often the only piece of personal and private space he or she has left.”

Information sourced from phones and social media offers an alternative reality that can compete with an asylum seeker’s own testimony. “They’re holding the phone to be a stronger testament to their history than what the person is ready to disclose,” says Gus Hosein, executive director of Privacy International. “That’s unprecedented.”

Privacy campaigners note how digital information might not reflect a person’s character accurately. “Because there is so much data on a person’s phone, you can make quite sweeping judgements that might not necessarily be true,” says Christopher Weatherhead, technologist at Privacy International.


Privacy International has investigated the UK police’s ability to search phones, indicating that immigration officials could possess similar powers. “What surprised us was the level of detail of these phone searches. Police could access information even you don’t have access to, such as deleted messages,” Weatherhead says.

His team found that British police are aided by Israeli mobile forensic company Cellebrite. Using their software, officials can access search history, including deleted browsing history. It can also extract WhatsApp messages from some Android phones.

Source: Europe is using smartphone data as a weapon to deport refugees | WIRED UK

Google allows outside app developers to read people’s Gmails

  • Google promised a year ago to provide more privacy to Gmail users, but The Wall Street Journal reports that hundreds of app makers have access to millions of inboxes belonging to Gmail users.
  • The outside app companies receive access to messages from Gmail users who signed up for things like price-comparison services or automated travel-itinerary planners, according to The Journal.
  • Some of these companies train software to scan the email, while others enable their workers to pore over private messages, the report says.
  • What isn’t clear from The Journal’s story is whether Google is doing anything differently than Microsoft or other rival email services.

Employees working for hundreds of software developers are reading the private messages of Gmail users, The Wall Street Journal reported on Monday.

A year ago, Google promised to stop scanning the inboxes of Gmail users, but the company has not done much to protect Gmail inboxes obtained by outside software developers, according to the newspaper. Gmail users who signed up for “email-based services” like “shopping price comparisons,” and “automated travel-itinerary planners” are most at risk of having their private messages read, The Journal reported.

Hundreds of app developers electronically “scan” inboxes of the people who signed up for some of these programs, and in some cases, employees do the reading, the paper reported. Google declined to comment.

The revelation comes at a bad time for Google and Gmail, the world’s largest email service, with 1.4 billion users. Top tech companies are under pressure in the United States and Europe to do more to protect user privacy and be more transparent about any parties with access to people’s data. The increased scrutiny follows the Cambridge Analytica scandal, in which a data firm was accused of misusing the personal information of more than 80 million Facebook users in an attempt to sway elections.

It’s not news that Google and many top email providers enable outside developers to access users’ inboxes. In most cases, the people who signed up for the price-comparison deals or other programs agreed to provide access to their inboxes as part of the opt-in process.
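That opt-in is an OAuth consent screen: the app requests a Gmail scope such as `https://www.googleapis.com/auth/gmail.readonly`, and a user who clicks “Allow” grants exactly that. A minimal sketch of how such a consent URL is assembled (the client ID and redirect URI below are placeholders):

```python
from urllib.parse import urlencode

# Google's published scope for read-only access to a user's Gmail messages.
GMAIL_READONLY = "https://www.googleapis.com/auth/gmail.readonly"

def gmail_consent_url(client_id: str, redirect_uri: str) -> str:
    """Build the Google OAuth 2.0 authorization URL requesting read access to Gmail."""
    params = {
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "response_type": "code",
        "scope": GMAIL_READONLY,   # this is the line that means "read my email"
        "access_type": "offline",  # also request a long-lived refresh token
    }
    return "https://accounts.google.com/o/oauth2/v2/auth?" + urlencode(params)

url = gmail_consent_url("example-client-id", "https://example.com/callback")
assert "gmail.readonly" in url
```

Once granted, the resulting token lets the developer’s servers – and, evidently, its employees – fetch message content until the user revokes access in their Google account settings.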

Image caption: Gmail’s opt-in alert spells out generally what a user is agreeing to.

In Google’s case, outside developers must pass a vetting process, and as part of that, Google ensures they have an acceptable privacy agreement, The Journal reported, citing a Google representative.

What is unclear is how closely these outside developers adhere to their agreements and whether Google does anything to ensure they do, as well as whether Gmail users are fully aware that individual employees may be reading their emails, as opposed to an automated system, the report says.

Mikael Berner, the CEO of Edison Software, a Gmail developer that offers a mobile app for organizing email, told The Journal that its employees had read emails from hundreds of Gmail users as part of an effort to build a new feature. An executive at another company said employees’ reading of emails had become “common practice.”

Companies that spoke to The Journal confirmed that the practice was specified in their user agreements and said they had implemented strict rules for employees regarding the handling of email.

It’s interesting to note that, judging from The Journal’s story, very little indicates that Google is doing anything different from Microsoft or other top email providers. According to the newspaper, nothing in Microsoft or Yahoo’s policy agreements explicitly allows people to read others’ emails.

Source: Google reportedly allows outside app developers to read people’s Gmails – INSIDER

Which also shows: no one ever reads the end-user agreements. I’m pretty sure nobody registered the bit that said “you are also allowing us to read all your emails” when they signed up.

Dear Samsung mobe owners: It may leak your private pics to randoms

Samsung’s Messages app bundled with the South Korean giant’s latest smartphones and tablets may silently send people’s private photos to random contacts, it is claimed.

An unlucky bunch of Sammy phone fans – including owners of Galaxy S9, S9+ and Note 8 gadgets – have complained on Reddit and the official support forums that the application texted their snaps without permission.

One person said the app sent their photo albums to their girlfriend at 2.30am without them knowing – there was no trace of the transfer on the phone, although it showed up in their T-Mobile US account. The recipients are seemingly picked at random from the handset’s contacts, and the messages do not appear in the application’s sent box. The misbehaving app is the default messaging tool on Samsung’s Android devices.

“Last night around 2:30am, my phone sent [my girlfriend] my entire photo gallery over text but there was no record of it on my messages app,” complained one confused Galaxy S9+ owner. “However, there was record of it [in my] T-Mobile logs.”

Another S9+ punter chipped in: “Oddly enough, my wife’s phone did that last night, and mine did it the night before. I think it has something to do with the Samsung SMS app being updated from the Galaxy Store. When her phone texted me her gallery, it didn’t show up on her end – and vice versa.”

Source: Dear Samsung mobe owners: It may leak your private pics to randoms • The Register

Newer Diameter Telephony Protocol (4G / LTE) Just As Vulnerable As SS7

Security researchers say the Diameter protocol used with today’s 4G (LTE) telephony and data transfer standard suffers from the same types of vulnerabilities as the older SS7 standard used with previous telephony generations such as 3G and 2G.

Both Diameter and SS7 (Signaling System No. 7) have the same role in a telephony network. Their purpose is to serve as an authentication and authorization system inside a network and between different telephony networks (providers).

SS7 was developed in the 1970s and has been proven insecure for almost two decades [1, 2, 3, 4, 5]. Because of this, starting with the rollout of 4G (LTE) networks, SS7 was replaced with the Diameter protocol, an improved inter- and intra-network signaling protocol that is also slated to be used with the upcoming 5G standard.

The difference between the two is that while SS7 did not use any encryption for its authentication procedures, making authentication and authorization messages easy to forge, Diameter supports TLS/DTLS (for TCP and SCTP, respectively) or IPsec.
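As a rough illustration of the transport protection Diameter specifies (and that operators so often skip), here is a minimal Python sketch using the standard library’s ssl module to wrap a Diameter peer link in TLS. The peer hostname and certificate paths are placeholders; port 5868 is the IANA-registered “diameters” (Diameter-over-TLS) port.

```python
import socket
import ssl

# Client-side TLS context of the sort a Diameter node might use for
# peer links. Real deployments use mutual certificate authentication;
# load_cert_chain() is commented out because the paths are placeholders.
ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
# ctx.load_cert_chain("node-cert.pem", "node-key.pem")

def connect_peer(host: str, port: int = 5868) -> ssl.SSLSocket:
    """Open a TLS-protected TCP transport to a Diameter peer (sketch only)."""
    raw = socket.create_connection((host, port))
    return ctx.wrap_socket(raw, server_hostname=host)
```

SCTP transports would use DTLS instead, which Python’s ssl module does not provide; this sketch covers only the TCP/TLS case.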

4G operators often misconfigure Diameter

But according to research published last month by Positive Technologies detailing Diameter’s use in mobile networks across the globe, the protocol’s security features are rarely used.

In practice telecom operators almost never use encryption inside the network, and only occasionally on its boundaries. Moreover, encryption is based on the peer-to-peer principle, not end-to-end. In other words, network security is built on trust between operators and IPX providers.

The incorrect use of Diameter leads to the presence of several vulnerabilities in 4G networks that resemble the ones found in older networks that use SS7, and which Diameter was supposed to prevent.

Researchers say that the Diameter misconfigurations they’ve spotted inside 4G networks are in many cases unique to each network, but they recur often enough to be organized into five classes of attacks: (1) subscriber information disclosure, (2) network information disclosure, (3) subscriber traffic interception, (4) fraud, and (5) denial of service.

1+2) Subscriber and network information disclosure

The first two, subscriber and network information disclosure, allow an attacker to gather operational information about the user’s device, subscriber profile, and information about the mobile network in general.

Such vulnerabilities can reveal the user’s IMSI identifier, device addresses, network configuration, or even their geographical location, helping an attacker track users of interest as they move about.
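To see why IMSI disclosure matters, note that the identifier itself encodes where, and with which operator, you are a subscriber. A small illustrative Python sketch (the sample IMSI is made up for the example):

```python
def parse_imsi(imsi: str, mnc_len: int = 3) -> dict:
    """Split an IMSI into its standard fields.

    MCC = mobile country code, MNC = mobile network code (operator),
    MSIN = the subscriber's individual number. mnc_len is 3 digits in
    North America and usually 2 elsewhere.
    """
    return {
        "mcc": imsi[:3],
        "mnc": imsi[3:3 + mnc_len],
        "msin": imsi[3 + mnc_len:],
    }

# Illustrative IMSI: country and operator fall right out of the identifier.
print(parse_imsi("310150123456789"))
# {'mcc': '310', 'mnc': '150', 'msin': '123456789'}
```

This is why a leaked IMSI, combined with location-query messages, is enough to track one specific subscriber.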

3) Subscriber traffic interception

The third class, subscriber traffic interception, is only theoretically possible via Diameter itself, because both SMS and call transmission often establish channels over previous-generation protocols that do not use Diameter for authentication.

Nonetheless, Positive Technologies researchers warn that an attacker set on SMS and call interception can at any time downgrade a Diameter-capable 4G connection to a previous-generation connection and use flaws in SS7 and other protocols to carry out the attack.

For example, SMS interception is possible because most 4G networks send SMS messages via a 3G channel where SS7 is used instead of Diameter for user and network authentication, while phone call channels are handled via VoLTE, a protocol that has been proven insecure and susceptible to such attacks in 2015.

Even if networks handle SMS and phone calls over a pure 4G channel, an attacker only needs to pose as an inferior network to carry out a MitM attack via an older protocol.

4) Fraud

Attackers can also use Diameter flaws to allow free use of the mobile network for a specific subscriber profile, leading to financial losses for the operator.

There are two types of such attacks, each of which is based on modifying the subscriber profile. The first type involves modifying the billing parameters stored in the subscriber profile and is quite difficult to implement in practice, since it requires knowledge of the operator’s network configuration on the part of the attacker. The values of these parameters are not standardized and depend on the specific operator; they could not be retrieved from a subscriber profile in any of the tested networks. The second type of attack is the use of services beyond restrictions, causing direct financial damage to the operator.

5) Denial of service attacks

Last but not least, Diameter flaws allow denial-of-service attacks that prevent a 4G user from accessing certain 4G features or allow an attacker to limit the speed of certain features, causing problems for a connected device.

Positive Technologies experts warn that the denial-of-service Diameter vulnerabilities “could lead to sudden failure of ATMs, payment terminals, utility meters, car alarms, and video surveillance.”

This is because these types of devices often use 4G SIM card modules to connect to their servers when located in a remote area where classic Internet connections are not possible.

All mobile networks are vulnerable to either SS7 or Diameter flaws

The cyber-security firm says that every mobile network it has analyzed since it began looking into SS7 and Diameter vulnerabilities has proven vulnerable to one set of flaws or the other, or both; without exception, every network it inspected was open to some form of network-level attack.

Diameter flaws scan results

Positive Technologies warns that with the rise of Internet of Things devices, some of which rely on 4G connections when a WiFi network is not in range, such flaws are the equivalent of having an open door for hackers to target such equipment via the 4G network.

“Such frightening consequences are only the tip of the iceberg,” experts wrote in their latest Diameter report. The company, which is known for providing security testing and monitoring of mobile networks, urges 4G operators to get with the times and invest into the security of their networks.

The “Diameter Vulnerabilities Exposure Report 2018” is available for download here. Positive Technologies previously analyzed the SS7 protocol in 2016 and the Diameter protocol in 2017.

In March 2018, ENISA (European Union Agency for Network and Information Security) published an official advisory about SS7 and Diameter vulnerabilities in modern 4G networks.

Last week, a team of academics disclosed a set of vulnerabilities in 4G (LTE) networks at the data layer, the one responsible for data transfer, rather than at the signaling layer where Diameter operates.

Source: Newer Diameter Telephony Protocol Just As Vulnerable As SS7

China brings Star Wars to life with ‘laser AK-47’ that can set fire to targets a kilometre away

China has developed a new portable laser weapon that can zap a target from nearly a kilometre away, according to researchers involved in the project.

The ZKZM-500 laser assault rifle is classified as “non-lethal”, but it produces an invisible energy beam that can pass through windows and cause the “instant carbonisation” of human skin and tissues.

Ten years ago its capabilities would have been the preserve of sci-fi films, but one laser weapons scientist said the new device is able to “burn through clothes in a split second … If the fabric is flammable, the whole person will be set on fire”.

“The pain will be beyond endurance,” according to the researcher, who took part in the development and field testing of a prototype at the Xian Institute of Optics and Precision Mechanics at the Chinese Academy of Sciences in Shaanxi province.

The 15mm calibre weapon weighs three kilos (6.6lb), about the same as an AK-47, and has a range of 800 metres, or half a mile, and could be mounted on cars, boats and planes.

It is now ready for mass production and the first units are likely to be given to anti-terrorism squads in the Chinese Armed Police.

In the event of a hostage situation it could be used to fire through windows at targets and temporarily disable the kidnappers while other units move in to rescue their captives.

It could also be used in covert military operations. The beam is powerful enough to burn through a gas tank and ignite the fuel storage facility in a military airport.

Because the laser has been tuned to an invisible frequency, and it produces absolutely no sound, “nobody will know where the attack came from. It will look like an accident,” another researcher said. The scientists requested not to be named due to the sensitivity of the project.

The rifles will be powered by a rechargeable lithium battery pack similar to those found in smartphones. Each can fire more than 1,000 “shots”, each lasting no more than two seconds.

The prototype was built by ZKZM Laser, a technology company owned by the institute in Xian. A company representative confirmed that the firm is now seeking a partner that has a weapons production licence or a partner in the security or defence industry to start large-scale production at a cost of 100,000 yuan (US$15,000) a unit.

Source: China brings Star Wars to life with ‘laser AK-47’ that can set fire to targets a kilometre away

ProtonMail / ProtonVPN DDoS Attacks Are a Case Study of What Happens When You Mock Attackers

For the past two days, secure email provider ProtonMail has been fighting off DDoS attacks that have visibly affected the company’s services, causing short but frequent outages at regular intervals.

“The attacks went on for several hours, although the outages were far more brief, usually several minutes at a time, with the longest outage on the order of 10 minutes,” a ProtonMail spokesperson said, describing the attacks.

The email provider claims to “have traced the attack back to a group that claims to have ties to Russia,” a statement that some news outlets took at face value and ran stories misleading readers into thinking this was some kind of nation-state-planned cyber-attack.

But in reality, the DDoS attacks have no ties to Russia, weren’t even planned in the first place, and the group behind them denies being Russian.

Small hacker group behind ProtonMail DDoS attacks

Responsible for the attacks is a hacker group named Apophis Squad. In a private conversation with Bleeping Computer today, one of the group’s members detailed yesterday’s chain of events.

The Apophis member says they targeted ProtonMail at random while testing a beta version of a DDoS booter service the group is developing and preparing to launch.

The group didn’t cite any reason beyond “testing” for the initial, unprovoked attack on ProtonMail, which they later revealed to have been a 200 Gbps SSDP flood, according to one of their tweets.
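For context on how a small group can generate a flood of that size: SSDP is a reflection/amplification vector, in which small spoofed requests sent to misconfigured devices elicit much larger replies aimed at the victim. A back-of-envelope sketch in Python (the figures are illustrative; roughly 30x is the commonly cited SSDP amplification factor):

```python
# Reflection/amplification back-of-envelope (illustrative figures).
request_bytes = 120      # small spoofed SSDP M-SEARCH query
response_bytes = 3600    # much larger reply, reflected at the victim

amplification = response_bytes / request_bytes
attacker_uplink_gbps = 7                      # bandwidth the attacker controls
reflected_gbps = attacker_uplink_gbps * amplification

print(f"{amplification:.0f}x amplification -> {reflected_gbps:.0f} Gbps at the victim")
```

The reflectors also hide the true source of the traffic, since the victim sees only the reflecting devices’ addresses.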

“After we sent the first attack, we downed it for 60 seconds,” an Apophis Squad member told us. He said the group didn’t intend to harass ProtonMail all day yesterday or today but decided to do so after ProtonMail’s CTO, Bart Butler, responded to one of their tweets calling the group “clowns.”


This was a questionable response on the part of the ProtonMail CTO, as it set the hackers against his company even more.

“So we then downed them for a few hours,” the Apophis Squad member said. Subsequent attacks included a whopping TCP-SYN flood estimated at 500 Gbps, as claimed by the group…


…and NTP and CLDAP floods, as observed by a security researcher at NASK and confirmed by another Apophis Squad member.


The attacks also continued today when the group launched another DDoS attack consisting of a TCP-SYN flood estimated at between 50 and 70 Gbps…


… and another CHARGEN flood estimated at 2 Gbps.


Radware, the company involved in mitigating the attacks on ProtonMail’s infrastructure, could not confirm the 500 Gbps DDoS attack at the time of writing but confirmed the multi-vector assault.

“We can’t confirm attack size as it varied at different points in the attack,” a Radware spokesperson said. “However we can confirm that the attack was high volumetric, multi-vector attack. It included several UDP reflection attacks, multiple TCP bursts, and Syn floods.”

In addition to targeting ProtonMail, the group also targeted Tutanota, for unknown reasons, but these attacks stopped shortly after. Tutanota execs not goading the hackers might have played a role.

Hackers deny Russian connection

The Apophis Squad group is by no means a sophisticated threat. They are your typical 2018 hacker group that hangs out in Discord channels and organizes DDoS attacks for, sometimes, childish reasons.

The group is currently developing a DDoS booter service, which they were advertising prior to yesterday’s attacks on Twitter and on Discord, claiming to be able to launch DDoS attacks using protocols such as NTP, DNS, SSDP, Memcached, LDAP, HTTP, CloudFlare bypass, VSE, ARME, Torshammer, and XML-RPC.

Their Twitter timeline claims the group is based in Russia, and so does their domain, but in a private conversation the group said this wasn’t accurate.

“We aint russian [sic],” the group told us.

“We believe the attackers to be based in the UK,” a Radware spokesperson told Bleeping Computer via email today.

If the ProtonMail DDoS attack later proves to have reached 500 Gbps, it will be one of the biggest DDoS attacks on record, after similar attacks of 1.7 Tbps (against an as-yet-unnamed US service provider) and 1.3 Tbps (against GitHub).

Source: ProtonMail DDoS Attacks Are a Case Study of What Happens When You Mock Attackers

Every Android Device Since 2012 Impacted by RAMpage Vulnerability

Almost all Android devices released since 2012 are affected by a new vulnerability named RAMpage, an international team of academics revealed today.

The vulnerability, tracked as CVE-2018-9442, is a variation of the Rowhammer attack.

Rowhammer is a hardware bug in modern DRAM chips. A few years back, researchers discovered that sending repeated read/write requests to the same row of memory cells creates electrical interference that can alter data stored in nearby rows.

In the following years, researchers discovered that Rowhammer-like attacks affected personal computers, virtual machines, and Android devices. Through further research, they also found they could execute Rowhammer attacks via JavaScript code, GPU cards, and network packets.

RAMpage is the latest Rowhammer attack variation

The first Rowhammer attack on Android devices was named Drammer, and it could modify data on Android devices and root Android smartphones. Today, researchers expanded on that initial work.

In a research paper published today, a team of eight academics from three universities and two private companies revealed RAMpage, a new Rowhammer-like attack on Android devices.

“RAMpage breaks the most fundamental isolation between user applications and the operating system,” researchers said. “While apps are typically not permitted to read data from other apps, a malicious program can craft a RAMpage exploit to get administrative control and get hold of secrets stored in the device.”

“This might include your passwords stored in a password manager or browser, your personal photos, emails, instant messages and even business-critical documents,” the research team said.

RAMpage may also impact Apple devices, PCs, and VMs

Research into the RAMpage vulnerability is still in its early stages, but the team says the attack can take over Android-based smartphones and tablets.

The research team also believes RAMpage may affect Apple devices, home computers, and even cloud servers.

Source: Every Android Device Since 2012 Impacted by RAMpage Vulnerability

This popular Facebook app publicly exposed your data for years

NameTests, the website behind the quizzes, recently fixed a flaw that publicly exposed information on its more than 120 million monthly users, even after they had deleted the app. At my request, Facebook donated $8,000 to the Freedom of the Press Foundation as part of its Data Abuse Bounty Program.


While loading a test, the website would fetch my personal information and display it on the webpage. Here’s where it got my personal information from:

In theory, every website could have requested this data. Note that the data also includes a ‘token’ which gives access to all data the user authorised the application to access, such as photos, posts and friends.

I was shocked to see that this data was publicly available to any third-party that requested it.

In a normal situation, other websites would not be able to access this information; web browsers have mechanisms in place to prevent that from happening. In this case, however, the data was wrapped in JavaScript, which is an exception to this rule.

One of the basic principles of JavaScript is that script files can be loaded by any website. Since NameTests served its users’ personal data in a JavaScript file, virtually any website could access that data simply by requesting the file.

To verify that it would actually be that easy to steal someone’s information, I set up a website that would connect to NameTests and get some information about my visitor. NameTests would also provide a secret key called an access token, which, depending on the permissions granted, could be used to gain access to a visitor’s posts, photos and friends. It would only take one visit to our website to gain access to someone’s personal information for up to two months.
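The mechanism is easiest to see in miniature. The sketch below simulates, in Python, what a JSONP-style endpoint does: the “data” is really an executable script that calls a callback, so whatever page loads it via a script tag receives the data in its own context. The response body, names, and token value are all made up for illustration:

```python
# A simulated JSONP-style response body: not plain JSON, but a script
# that invokes a callback with the visitor's profile data (all values
# fabricated for this example).
jsonp_response = (
    'handle_user({"id": 12345, "name": "Jane Doe", '
    '"token": "EAAB...(illustrative)"})'
)

leaked = {}

def handle_user(data):
    # On a malicious page, the attacker defines this callback, so the
    # browser hands the visitor's data straight to attacker code.
    leaked.update(data)

# A <script src="..."> tag executes the response; eval() stands in for
# that execution step here.
eval(jsonp_response)

print(leaked["name"])  # Jane Doe
```

Plain JSON responses don’t have this problem: a cross-origin script tag can execute a script but cannot read a JSON document, which is why wrapping personal data in JavaScript defeated the browser’s same-origin protections.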

Video proof:

An unauthorised website getting access to my Facebook information

As you can see in the video, NameTests would still reveal your identity even after you deleted the app. To prevent this, users would have had to manually delete the cookies on their device, since NameTests does not offer log-out functionality.

Source: This popular Facebook app publicly exposed your data for years

All-Radio 4.27 Portable Can’t Be Removed? Then Your PC is Severely Infected

Starting yesterday, there have been numerous reports of people’s Windows computers being infected with something called “All-Radio 4.27 Portable”. After researching this, it has been determined that seeing this program is a symptom of a much bigger problem on your computer.

All-Radio 4.27 Portable

If your computer is suddenly displaying the above program, then it is infected with malware that installs rootkits, miners, information-stealing Trojans, and a program that uses your computer to send out spam.

Unfortunately, while some security programs are able to remove parts of the infection, the rootkit component needs manual removal help at this time. Due to this and the amount of malware installed, if you are infected I suggest that you reinstall Windows from scratch if possible.

If that is not an option, you can create a malware removal help topic in our Virus Removal forum in order to receive one-on-one help in cleaning your computer.

Furthermore, some of the VirusTotal scans associated with this infection have indicated that an information stealing Trojan could have been installed as well. Therefore, it is strongly suggested that you change your passwords using a clean machine if you had logged into any accounts while infected.

Source: All-Radio 4.27 Portable Can’t Be Removed? Then Your PC is Severely Infected

Adidas Reports Data Breach of a few million customers

Adidas AG said Thursday that a “few million” customers shopping on its U.S. website may have had their data exposed to an unauthorized party.

Neither the specific number of users affected nor the time frame of the potential breach was immediately disclosed, but the German sportswear maker said it became aware of the issue on Tuesday and has begun a forensic review.

Adidas said it is alerting “certain customers who purchased on” and that, according to the company’s preliminary examination, the affected data include contact information, usernames and encrypted passwords.

“Adidas has no reason to believe that any credit card or fitness information of those consumers was impacted,” the company said.

Source: Adidas Reports Data Breach – WSJ

The International Space Station Has a New AI-Powered Bot: CIMON

Once aboard, CIMON—short for Crew Interactive MObile companioN—will assist the crew with its many activities. The point of this pilot project is to see if an artificially intelligent bot can improve crew efficiency and morale during longer missions, including a possible mission to Mars. What’s more, activities and tasks performed by ISS crew members are starting to get more complicated, so an AI could help. CIMON doesn’t have any arms or legs, so it can’t assist with any physical tasks, but it features a language user interface, allowing crew members to verbally communicate with it. The bot can display repair instructions on its screen, and even search for objects in the ISS. With a reduced workload, astronauts will hopefully experience less stress and have more time to relax.

CIMON with its development team prior to launch.
Image: DLR

CIMON was built by Airbus under a contract awarded by the German Aerospace Center (DLR). It has 12 internal fans, which allow the bot to move in all directions as it floats in microgravity. CIMON can move freely and perform rotational movements, such as shaking its head back and forth in disapproval. CIMON’s AI language and comprehension system is derived from IBM’s Watson technology, and it responds to commands in English. CIMON cost less than $6 million and took less than two years to develop.

The pilot project will be led by DLR astronaut Alexander Gerst, who arrived on the ISS about a month ago. CIMON is already familiar with Gerst’s face and voice, so the bot will work best with him, at least initially. The German astronaut will use CIMON to see if the bot will increase his efficiency and effectiveness as he works on various experiments.

Indeed, with CIMON floating nearby, the ISS astronauts could easily call upon the bot for assistance, which they can do by calling out its name. They can request that CIMON display documents and media in their field of view, or record and playback experiments with its onboard camera. In general, the bot should speed up tasks on the ISS that require hands-on work.

The round robot features no sharp edges, so it poses no threat to equipment or crew. Should it go squirrely and use its best HAL 9000 imitation to say something like, “I’m sorry, Alexander, I’m afraid I can’t do that,” the bot is equipped with a kill switch. But hopefully it won’t come to that; unlike HAL, CIMON has been programmed with an ISTJ personality, meaning “introverted, sensing, thinking, and judging.” Its developers gave it a face to make it more personable and relatable, and it can even sense the tone of the crew’s conversation. CIMON smiles when the mood is upbeat, and frowns or cries when things are sad. It supposedly behaves like R2-D2, and can even quote famous sci-fi movies like E.T. the Extra-Terrestrial.

Source: The International Space Station’s New AI-Powered Bot Is Actually Pretty Cool

Why you should not use Google Cloud – it just turns your project off with no warning and no customer support!

We have a project running in production on Google Cloud (GCP) that is used to monitor hundreds of wind turbines and scores of solar plants scattered across 8 countries. We have control centers with wall-to-wall screens showing dashboards full of metrics that are monitored 24/7. Asset Managers use this system to monitor the health of individual wind turbines and solar strings in real time and take immediate corrective maintenance action. Development and Forecasting teams use the system to run algorithms on data in BigQuery. All these actions translate directly to revenue. We deal in wind/solar energy, a perishable commodity. If we overproduce, we cannot store and sell later. If we underproduce, there are penalties to be paid. For this reason, assets need to be monitored 24/7 to keep production in step with the needs of the power grid and the power purchase agreements made.

What happened.

Early in the morning of 28 June 2018, I received an alert from Uptime Robot telling me my entire site was down. Then came a barrage of emails from Google saying there was some ‘potential suspicious activity’ and all my systems had been turned off. EVERYTHING IS OFF. THE MACHINE HAS PULLED THE PLUG WITH NO WARNING.


Customer service chat is off. There’s no phone to call. I have an email asking me to fill in a form and upload a picture of the credit card and a government issued photo id of the card holder. Great, let’s wake up the CFO who happens to be the card holder.

We will delete your project within 3 business days.

“We will delete your project unless the billing owner corrects the violation by filling out the Account Verification Form within three business days. This form verifies your identity and ownership of the payment instrument. Failure to provide the requested documents may result in permanent account closure.”

What if the card holder is on leave and is unreachable for three days? We would have lost everything — years of work — millions of dollars in lost revenue.

I filled in the form with the details and thankfully, within 20 minutes, all the services started coming back to life. The first time this happened, we were down for a few hours; in all, we lost everything for about an hour. An automated email arrived apologizing for the ‘inconvenience’ caused. Unfortunately, The Machine has no understanding of the ‘quantum of inconvenience’ caused.


This is the first project we built entirely on the Google Cloud. All our previous works were built on AWS. In our experience AWS handles billing issues in a much more humane way. They warn you about suspicious activity and give you time to explain and sort things out. They don’t kick you down the stairs.

I hope the GCP team is listening and changes things for the better. Until then, I’m never building any project on GCP.

Source: Why you should not use Google Cloud. – Punch a Server – Medium

Over 10,000 troops from nine nations ready to meet global challenges in Joint Expeditionary Force led by UK

With the UK at the forefront as the framework nation, the JEF can now deploy over 10,000 personnel from across the nine nations.

Speaking at the event at Lancaster House today Defence Secretary Gavin Williamson said:

Our commitment today sends a clear message to our allies and adversaries alike – our nations will stand together to meet new and conventional challenges and keep our countries and our citizens safe and secure in an uncertain world.

We are judged by the company we keep, and while the Kremlin seeks to drive a wedge between allies old and new alike, we stand with the international community united in support of international rules.

Launched in 2015, the joint force has continued to develop so that it’s able to respond rapidly, anywhere in the world, to meet global challenges and threats ranging from humanitarian assistance to conducting high intensity combat operations.

The JEF, made up of the UK and eight northern European allies (Denmark, Estonia, Finland, Latvia, Lithuania, the Netherlands, Norway and Sweden), is more than a simple grouping of military capabilities. It represents the unbreakable partnership between the UK and our like-minded northern European allies, born from shared operational experiences and an understanding of the threats and challenges we face today.

In May this year, the JEF demonstrated its readiness with a live capability demonstration on Salisbury Plain. It featured troops from the nine JEF nations, including the UK Parachute Regiment, the Danish Jutland Dragoon Regiment, the Lithuanian “Iron Wolf” Brigade and the Latvian Mechanised Infantry Brigade, which conducted urban combat operations with air support provided by Apaches, Chinooks, Wildcats and Tornados.

Source: Over 10,000 troops from nine nations ready to meet global challenges – GOV.UK

This is not a standing force; each deployment is assembled ad hoc, with the participating countries deciding whether or not to contribute their earmarked forces to the structure.

Google opens its human-sounding Duplex AI to public testing

Google is moving ahead with Duplex, the stunningly human-sounding artificial intelligence software behind its new automated system that places phone calls on your behalf with a natural-sounding voice instead of a robotic one.

The search giant said Wednesday it’s beginning public testing of the software, which debuted in May and which is designed to make calls to businesses and book appointments. Duplex instantly raised questions over the ethics and privacy implications of using an AI assistant to hold lifelike conversations for you.

Google says its plan is to start its public trial with a small group of “trusted testers” and businesses that have opted into receiving calls from Duplex. Over the “coming weeks,” the software will only call businesses to confirm business and holiday hours, such as open and close times for the Fourth of July. People will be able to start booking reservations at restaurants and hair salons starting “later this summer.”

Source: Google opens its human-sounding Duplex AI to public testing – CNET

The Discovery of Complex Organic Molecules on Saturn’s Moon Enceladus Is a Huge Deal

Using data collected by NASA’s late, great Cassini space probe, scientists have detected traces of complex organic molecules seeping out from Enceladus’ ice-covered ocean. It’s yet another sign that this intriguing Saturnian moon has what it takes to sustain life.

If life exists elsewhere in our Solar System, chances are it’s on Enceladus. The moon features a vast, warm subterranean ocean, one sandwiched between an icy crust and a rocky core. Previous research shows this ocean contains simple organic molecules, minerals, and molecular hydrogen—an important source of chemical energy. On Earth, hydrothermal processes near volcanic vents are known to sustain complex ecosystems, raising hopes that something similar is happening on Enceladus.

New research published today in Nature suggests Enceladus’ ocean also contains complex organic molecules—yet another sign that this moon contains the basic conditions and chemical ingredients to support life. Now, this isn’t proof that life exists on this icy moon, but it does show that Enceladus’ warm, soupy ocean is capable of producing complex and dynamic molecules, and the kinds of chemical reactions required to produce and sustain microbial life.


Source: The Discovery of Complex Organic Molecules on Saturn’s Moon Enceladus Is a Huge Deal

Not OK Google: Massive outage turns smart home kit utterly dumb

Google’s entire Home infrastructure has suffered a serious outage, with millions of customers on Wednesday morning complaining that their smart devices have stopped working.

At the time of writing, the cloud-connected gadgets are still hosed, the service is still down, and the system appears to have been knackered for at least the past 10 hours. The clobbered gizmos can’t respond to voice commands, can’t control other stuff in your home, and so on.

Chromecasts can’t stream video, and Home speakers respond to commands with: “Sorry, something went wrong. Try again in a few seconds.”

Users in Google’s home state of California started complaining that their Google Home, Mini, and Chromecast devices were not working properly around midnight Pacific Time on Tuesday, and the issue cropped up in every country in which the Google Home devices are sold.

But it was only when the United States started waking up on Wednesday morning – the US has the vast majority of Google Home devices – that the reports started flooding in, pointing to an outage of the entire system.

Google has confirmed the devices are knackered, but has so far provided no other information, saying only that it is investigating the issue.


Updated to add

Google has issued the following statement:

We’re aware of an issue affecting some Google Home and Chromecast users. Some users are back online and we are working on a broader fix for all affected users. We will continue to keep our customers updated.

The web giant then followed up with more details – try rebooting to pick up a software fix, or wait up to six hours to get the update:

We’ve identified a fix for the issue impacting Google Home and Chromecast users and it will be automatically rolled out over the next 6 hours. If you would like an immediate fix please follow the directions to reboot your device. If you’re still experiencing an issue after rebooting, contact us at Google Home Support. We are really sorry for the inconvenience and are taking steps to prevent this issue from happening in the future.

Source: Not OK Google: Massive outage turns smart home kit utterly dumb • The Register

Facebook Patent Imagines Triggering Your Phone’s Mic When a Hidden Signal Plays on TV

You may have seen the ads that Facebook has been running on TV in a full-court press to apologize for abusing users’ privacy. They’re embarrassing. And, it turns out, they may be a sign of things to come. Based on a recently published patent application, Facebook could one day use ads on television to further violate your privacy once you’ve forgotten about all those other times.

First spotted by Metro, the patent is titled “broadcast content view analysis based on ambient audio recording.” (PDF) It describes a system in which an “ambient audio fingerprint or signature” that’s inaudible to the human ear could be embedded in broadcast content like a TV ad. When a hypothetical user is watching this ad, the audio fingerprint could trigger their smartphone or another device to turn on its microphone, begin recording audio and transmit data about it to Facebook.
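The patent doesn’t spell out how such a trigger would be detected, but the general technique is well known: embed a quiet tone above the range of adult hearing and have the listening device check for energy at that exact frequency. The sketch below is purely illustrative, not Facebook’s implementation; the 19 kHz marker frequency, amplitudes, and threshold are assumptions. It uses the standard Goertzel algorithm, which measures signal power at a single frequency far more cheaply than a full FFT:

```python
import math

def goertzel_power(samples, sample_rate, target_freq):
    """Estimate signal power at target_freq using the Goertzel algorithm."""
    n = len(samples)
    k = round(n * target_freq / sample_rate)  # nearest DFT bin
    coeff = 2 * math.cos(2 * math.pi * k / n)
    s_prev, s_prev2 = 0.0, 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

# Hypothetical parameters: a 19 kHz marker tone, above most adults' hearing range
RATE = 48_000        # sample rate in Hz
TRIGGER_HZ = 19_000  # assumed inaudible "fingerprint" frequency
N = 4_800            # 100 ms analysis window

def make_signal(with_trigger):
    """Synthesise 100 ms of 'broadcast audio', optionally tagged with the marker."""
    samples = []
    for i in range(N):
        t = i / RATE
        x = 0.5 * math.sin(2 * math.pi * 440 * t)  # audible programme audio
        if with_trigger:
            x += 0.05 * math.sin(2 * math.pi * TRIGGER_HZ * t)  # quiet marker
        samples.append(x)
    return samples

THRESHOLD = 1.0  # tuned for this synthetic example only
tagged = goertzel_power(make_signal(True), RATE, TRIGGER_HZ)
plain = goertzel_power(make_signal(False), RATE, TRIGGER_HZ)
print(tagged > THRESHOLD, plain > THRESHOLD)  # → True False
```

Even at 1/10th the amplitude of the audible audio, the marker’s power at its own frequency dwarfs the leakage from the programme material, which is what makes this kind of trigger practical on a phone’s always-on audio pipeline.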

Diagram of soundwave containing signal, triggering device, and recording ambient audio.
Image: USPTO

Everything in the patent is written in legalese and is a bit vague about what happens to the audio data. One example scenario imagines that various ambient audio would be eliminated and the content playing on the broadcast would be identified. Data would be collected about the user’s proximity to the audio. Then, the identifying information, time, and identity of the Facebook user would be sent to the social media company for further processing.

In addition to all the data users voluntarily give up, and the incidental data it collects through techniques like browser fingerprinting, Facebook would use this audio information to figure out which ads are most effective. For example, if a user walked away from the TV or changed the channel as soon as the ad began to play, it might consider the ad ineffective or on a subject the user doesn’t find interesting. If the user stays where they are and the audio is loud and clear, Facebook could compare that seemingly effective ad with your other data to make better suggestions for its advertising clients.

An example of a broadcasting device communicating with the network and identifying various users in a household.
Image: USPTO

Yes, this is creepy as hell and feels like someone trying to patent a peephole hidden behind a nondescript painting.

Source: Facebook Patent Imagines Triggering Your Phone’s Mic When a Hidden Signal Plays on TV

Facebook, Google, Microsoft scolded for tricking people into spilling their private info

Five consumer privacy groups have asked the European Data Protection Board to investigate how Facebook, Google, and Microsoft design their software to see whether it complies with the General Data Protection Regulation (GDPR).

Essentially, the tech giants are accused of crafting their user interfaces so that netizens are fooled into clicking away their privacy, and handing over their personal information.

In a letter sent today to chairwoman Andrea Jelinek, the BEUC (Bureau Européen des Unions de Consommateurs), the Norwegian Consumer Council (Forbrukerrådet), Consumers International, Privacy International and ANEC (just too damn long to spell out) contend that the three tech giants “employed numerous tricks and tactics to nudge or push consumers toward giving consent to sharing as much data for as many purposes as possible.”

The letter coincides with the publication of a Forbrukerrådet report, “Deceived By Design,” that claims “tech companies use dark patterns to discourage us from exercising our rights to privacy.”

Dark patterns here refers to app interface design choices that attempt to influence users to do things they may not want to do because they benefit the software maker.

The report faults Google, Facebook and, to a lesser degree, Microsoft for employing default settings that dispense with privacy. It also says they use misleading language, give users an illusion of control, conceal pro-privacy choices, offer take-it-or-leave-it choices and use design patterns that make it more laborious to choose privacy.

It argues that dark patterns deprive users of control, a central requirement under GDPR.

As an example of linguistic deception, the report cites Facebook text that seeks permission to use facial recognition on images:

If you keep face recognition turned off, we won’t be able to use this technology if a stranger uses your photo to impersonate you. If someone uses a screen reader, they won’t be told when you’re in a photo unless you’re tagged.

The way this is worded, the report says, pushes Facebook users to accept facial recognition by suggesting there’s a risk of impersonation if they refuse. And it implies there’s something unethical about depriving those who rely on screen readers of image descriptions, a guilt-trip tactic known as “confirmshaming.”

Source: Facebook, Google, Microsoft scolded for tricking people into spilling their private info • The Register
