Some computer science academics at Northeastern University had heard enough people talking about this technological myth that they decided to do a rigorous study to tackle it. For the last year, Elleen Pan, Jingjing Ren, Martina Lindorfer, Christo Wilson, and David Choffnes ran an experiment involving more than 17,000 of the most popular apps on Android to find out whether any of them were secretly using the phone’s mic to capture audio. The apps included those belonging to Facebook, as well as over 8,000 apps that send information to Facebook.
Sorry, conspiracy theorists: They found no evidence of an app unexpectedly activating the microphone or sending audio out when not prompted to do so. Like good scientists, they refuse to say that their study definitively proves that your phone isn’t secretly listening to you, but they didn’t find a single instance of it happening. Instead, they discovered a different disturbing practice: apps recording a phone’s screen and sending that information out to third parties.
Of the 17,260 apps the researchers looked at, over 9,000 had permission to access the camera and microphone and thus the potential to overhear the phone’s owner talking about their need for cat litter or about how much they love a certain brand of gelato. Using 10 Android phones, the researchers used an automated program to interact with each of those apps and then analyzed the traffic generated. (A limitation of the study is that the automated phone users couldn’t do things humans could, like creating usernames and passwords to sign into an account on an app.) They were looking specifically for any media files that were sent, particularly when they were sent to an unexpected party.
These phones played with thousands of apps to see if any of them would secretly activate the microphone
Photo: David Choffnes (Northeastern University)
The strange practice they started to see was that screenshots and video recordings of what people were doing in apps were being sent to third-party domains. For example, when one of the phones used an app from GoPuff, a delivery start-up for people who have sudden cravings for junk food, the interaction with the app was recorded and sent to a domain affiliated with Appsee, a mobile analytics company. The video included a screen where personal information could be entered, in this case a zip code.
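The traffic analysis the researchers describe boils down to scanning captured network flows for media payloads headed to hosts an app has no obvious relationship with. Below is a minimal Python sketch of that idea, assuming the flows have already been decrypted and exported to simple records; the field names, the example app ID, and the first-party domain list are all hypothetical, not anything taken from the actual study.

```python
# Toy sketch: flag media payloads in captured app traffic that are headed
# to domains other than the app's own. Field names and the first-party
# list are hypothetical; real studies work from decrypted HTTPS flows.

MEDIA_TYPES = ("image/", "video/", "audio/")

# Hypothetical: domains considered "expected" for a given app.
KNOWN_FIRST_PARTY = {
    "com.example.delivery": {"api.example-delivery.com"},
}

def flag_media_leaks(app_id, flows):
    """Return flows that carry media content to an unexpected host.

    `flows` is a list of dicts like:
        {"host": "analytics.example.net",
         "content_type": "video/mp4",
         "size_bytes": 184512}
    """
    expected = KNOWN_FIRST_PARTY.get(app_id, set())
    suspicious = []
    for flow in flows:
        is_media = flow["content_type"].startswith(MEDIA_TYPES)
        is_third_party = flow["host"] not in expected
        if is_media and is_third_party:
            suspicious.append(flow)
    return suspicious

if __name__ == "__main__":
    captured = [
        {"host": "api.example-delivery.com",
         "content_type": "application/json", "size_bytes": 2310},
        {"host": "screens.analytics-example.net",
         "content_type": "video/mp4", "size_bytes": 184512},
    ]
    for f in flag_media_leaks("com.example.delivery", captured):
        print("possible screen-recording upload:", f["host"], f["content_type"])
```

In practice the hard parts lie elsewhere: intercepting TLS traffic from thousands of apps at scale and deciding which hosts genuinely count as "expected" for each one.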
The researchers weren’t comfortable saying for sure that your phone isn’t secretly listening to you in part because there are some scenarios not covered by their study. Their phones were being operated by an automated program, not by actual humans, so they might not have triggered apps the same way a flesh-and-blood user would. And the phones were in a controlled environment, not wandering the world in a way that might trigger them: For the first few months of the study the phones were near students in a lab at Northeastern University and thus surrounded by ambient conversation, but the phones made so much noise, as apps were constantly being played with on them, that they were eventually moved into a closet. (If the researchers did the experiment again, they would play a podcast on a loop in the closet next to the phones.) It’s also possible that the researchers could have missed audio recordings of conversations if the app transcribed the conversation to text on the phone before sending it out. So the myth can’t be entirely killed yet.
A new study by a team of international researchers from the University of Pennsylvania and Nanyang Technological University suggests that electrically stimulating the prefrontal cortex can reduce the desire to carry out violent antisocial acts by over 50 percent. The research, while undeniably compelling, raises a whole host of confronting ethical questions, not just over the feasibility of actually bringing this technology into our legal system, but over whether we should.
The intriguing experiment took 81 healthy adults and split them into two groups. One group received transcranial direct-current stimulation (tDCS) on the dorsolateral prefrontal cortex for 20 minutes, while the placebo group received just 30 seconds of current and then nothing for the remaining 19 minutes.
Following the electrical stimulation all the participants were presented with two vignettes and asked to rate, from 0 to 10, how likely they would be to behave as the protagonist in the stories. One hypothetical scenario outlined a physical assault, while the other was about sexual assault. The results were fascinating, with participants who received tDCS reporting they would be between 47 and 70 percent less likely to carry out the violent acts compared with the placebo control group.
“We chose our approach and behavioral tasks specifically based on our hypotheses about which brain areas might be relevant to generating aggressive intentions,” says Roy Hamilton, senior author on the study. “We were pleased to see at least some of our major predictions borne out.”
[…]
Transcranial direct-current stimulation is a little different to electroshock therapy or, more accurately, electroconvulsive therapy (ECT). Classical ECT involves significant electrical stimulation to the brain at thresholds intended to induce seizures. It is also not especially targeted, shooting electrical currents across the whole brain.
On the other hand, tDCS is much more subtle, delivering a continual low direct current to specific areas of the brain via electrodes on the head. The level of electrical current administered in tDCS sessions is often imperceptible to the subject and at worst results in mild skin irritation.
[…]
Although TMS (transcranial magnetic stimulation) is the more commonly used approach to neuromodulation in current clinical practice, tDCS may be the more pragmatic and implementable form of the technology: it is cheaper and easier to administer than TMS, can often simply be used at home, and would be much more straightforward to integrate into widespread use.
Of course, the reality of what is being implied here is a lot more complicated than simply finding the most appropriate technology. Roy Hamilton quite rightly notes in relation to his new study that, “The ability to manipulate such complex and fundamental aspects of cognition and behavior from outside the body has tremendous social, ethical, and possibly someday legal implications.”
[…]
Of course, while the burgeoning field of neurolaw is grappling with what this research means for legal ideas of individual responsibility, this new study raises a host of complicated ethical and social questions. If a short, non-invasive series of targeted tDCS sessions could reduce recidivism, should we consider using it in prisons?
“Much of the focus in understanding causes of crime has been on social causation,” says psychologist Adrian Raine, co-author on the new study. “That’s important, but research from brain imaging and genetics has also shown that half of the variance in violence can be chalked up to biological factors. We’re trying to find benign biological interventions that society will accept, and transcranial direct-current stimulation is minimal risk. This isn’t a frontal lobotomy. In fact, we’re saying the opposite, that the front part of the brain needs to be better connected to the rest of the brain.”
Italian neurosurgeon Sergio Canavero penned a controversial essay in 2014 for the journal Frontiers in Human Neuroscience arguing that non-invasive neurostimulation should be experimentally applied to criminal psychopaths and repeat offenders despite any legal or ethical dilemmas. Canavero argues that “it is imperative to ‘switch’ [a criminal’s] right/wrong circuitry to a socially non-disruptive mode.”
The quite dramatic proposal is to “remodel” a criminal’s “aberrant circuits” either through a series of intermittent brain stimulation treatments or, more startlingly, through some kind of implanted intracranial electrode system that can both electrically modulate key areas of the brain and remotely monitor behaviorally inappropriate neurological activity.
This isn’t the first time Canavero has suggested extraordinary medical experiments. You might remember his name from his ongoing work to be the first surgeon to perform a human head transplant.
[…]
“This is not the magic bullet that’s going to wipe away aggression and crime,” says Raine. “But could transcranial direct-current stimulation be offered as an intervention technique for first-time offenders to reduce their likelihood of recommitting a violent act?”
The key question of consent is one that many researchers aren’t really grappling with. Of course, there’s no chance convicted criminals would ever be forced to undergo this kind of procedure in a future where neuromodulation is integrated into our legal system. And behavioral alterations through electrical brain stimulation would never be forced upon people who don’t conform to social norms – right?
This is the infinitely compelling brave new world of neuroscience.
Wedding proposals are just one of the many minefields you have to navigate on social media platforms, and Ivan Miranda isn’t making things any easier. He’s designed and built an autonomous printer that can draw messages in sand, so now’s probably a good time to brace yourself for an endless barrage of “will you marry me?” beach proposals clogging up your feeds.
Miranda’s sand printer uses techniques borrowed from the classic dot-matrix printers that were a hallmark of home publishing in the ‘80s and ‘90s. An oversized print head travels back and forth between sets of large wheels that slowly roll the entire printer across the beach. As the print head moves, an etching tool is lowered and raised to carve lines in the sand that eventually form longer messages.
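Conceptually, the rendering step is pure dot matrix: the message is rasterized into a grid of dots, and the machine sweeps the head across each row, dropping the etching tool only where a dot belongs. Here is a toy Python sketch of that rasterize-and-sweep loop; the tiny bitmap font and the tool_down/tool_up steps are invented for illustration and have nothing to do with Miranda’s actual firmware.

```python
# Toy sketch of the dot-matrix idea: rasterize a message into a grid of
# dots, then sweep the print head across each row, lowering the etching
# tool only where a dot belongs. The 5x3 font and the motion steps are
# made up for illustration.

FONT_5X3 = {  # 5 rows x 3 columns per character; 1 = etch a dot
    "H": ["101", "101", "111", "101", "101"],
    "I": ["111", "010", "010", "010", "111"],
    " ": ["000", "000", "000", "000", "000"],
}

def rasterize(message):
    """Turn a message into 5 rows of 0s and 1s, one column gap between letters."""
    return [
        "0".join(FONT_5X3.get(ch, FONT_5X3[" "])[r] for ch in message)
        for r in range(5)
    ]

def plan_passes(message):
    """Yield (row, column, action) steps, sweeping back and forth per row."""
    for r, row in enumerate(rasterize(message)):
        cols = range(len(row)) if r % 2 == 0 else reversed(range(len(row)))
        for c in cols:
            yield (r, c, "tool_down" if row[c] == "1" else "tool_up")

if __name__ == "__main__":
    for row in rasterize("HI"):
        print(row.replace("1", "#").replace("0", "."))
    steps = list(plan_passes("HI"))
    print(f"{len(steps)} head movements planned for 'HI'")
```

The serpentine passes (left-to-right on even rows, right-to-left on odd ones) save the head a return trip each row, which matters when every pass means physically dragging a gantry across a beach.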
It’s a slow process, especially for those of us who’ve become accustomed to speedy laser printers churning out multiple pages per minute. But the results are far more Instagram-friendly than trying to write an endearing message in the sand with a stick.
Across Europe, migrants are being confronted by a booming mobile forensics industry that specialises in extracting a smartphone’s messages, location history, and even WhatsApp data. That information can potentially be turned against the phone owners themselves.
In 2017 both Germany and Denmark expanded laws that enabled immigration officials to extract data from asylum seekers’ phones. Similar legislation has been proposed in Belgium and Austria, while the UK and Norway have been searching asylum seekers’ devices for years.
Following right-wing gains across the EU, beleaguered governments are scrambling to bring immigration numbers down. Tackling fraudulent asylum applications seems like an easy way to do that. As European leaders met in Brussels last week to thrash out a new, tougher framework to manage migration (one that nevertheless seems insufficient to placate Angela Merkel’s critics in Germany), immigration agencies across Europe are showing new enthusiasm for laws and software that enable phone data to be used in deportation cases.
Admittedly, some refugees do lie on their asylum applications. Omar – not his real name – certainly did. He travelled to Germany via Greece. Even for Syrians like him, there were few legal routes into the EU. But his route meant he could face deportation under the EU’s Dublin regulation, which dictates that asylum seekers must claim refugee status in the first EU country they arrive in. For Omar, that would mean settling in Greece – hardly an attractive destination considering its high unemployment and stretched social services.
Last year, more than 7,000 people were deported from Germany under the Dublin regulation. If Omar’s phone had been searched, he could have become one of them, as his location history would have revealed his route through Europe, including his arrival in Greece.
But before his asylum interview, he met Lena – also not her real name. A refugee advocate and businesswoman, Lena had read about Germany’s new surveillance laws. She encouraged Omar to throw his phone away and tell immigration officials it had been stolen in the refugee camp where he was staying. “This camp was well-known for crime,” says Lena, “so the story seemed believable.” His application is still pending.
Omar is not the only asylum seeker to hide phone data from state officials. When sociology professor Marie Gillespie researched phone use among migrants travelling to Europe in 2016, she encountered widespread fear of mobile phone surveillance. “Mobile phones were facilitators and enablers of their journeys, but they also posed a threat,” she says. In response, she saw migrants who kept up to 13 different SIM cards, hiding them in different parts of their bodies as they travelled.
[…]
Denmark is taking this a step further by asking migrants for their Facebook passwords. Refugee groups note that the platform is being used more and more to verify an asylum seeker’s identity.
[…]
The Danish immigration agency confirmed that it does ask asylum applicants for access to their Facebook profiles. While it is not standard procedure, this can be done if a caseworker feels more information is needed. If an applicant refuses consent, they are told they are obliged to comply under Danish law. For now, the agency only uses Facebook, not Instagram or other social platforms.
[…]
“In my view, it’s a violation of ethics on privacy to ask for a password to Facebook or open somebody’s mobile phone,” says Michala Clante Bendixen of Denmark’s Refugees Welcome movement. “For an asylum seeker, this is often the only piece of personal and private space he or she has left.”
Information sourced from phones and social media offers an alternative reality that can compete with an asylum seeker’s own testimony. “They’re holding the phone to be a stronger testament to their history than what the person is ready to disclose,” says Gus Hosein, executive director of Privacy International. “That’s unprecedented.”
Privacy campaigners note how digital information might not reflect a person’s character accurately. “Because there is so much data on a person’s phone, you can make quite sweeping judgements that might not necessarily be true,” says Christopher Weatherhead, technologist at Privacy International.
[…]
Privacy International has investigated the UK police’s ability to search phones, work that suggests immigration officials could possess similar powers. “What surprised us was the level of detail of these phone searches. Police could access information even you don’t have access to, such as deleted messages,” Weatherhead says.
His team found that British police are aided by Cellebrite, an Israeli mobile forensics company. Using its software, officials can access search history, including deleted browsing history. The software can also extract WhatsApp messages from some Android phones.