UK Intelligence Agencies Are Planning a Major Increase in ‘Large-Scale Data Hacking’

Intelligence agencies in the UK are preparing to “significantly increase their use of large-scale data hacking,” the Guardian reported on Saturday, in a move that is already alarming privacy advocates.

According to the Guardian, UK intelligence officials plan to increase their use of the “bulk equipment interference (EI) regime”—the process by which the Government Communications Headquarters, the UK’s top signals intelligence and cybersecurity agency, collects bulk data off foreign communications networks—because they say targeted collection is no longer enough. The paper wrote:

A letter from the security minister, Ben Wallace, to the head of the intelligence and security committee, Dominic Grieve, quietly filed in the House of Commons library last week, states: “Following a review of current operational and technical realities, GCHQ have … determined that it will be necessary to conduct a higher proportion of ongoing overseas focused operational activity using the bulk EI regime than was originally envisaged.”

The paper noted that during the passage of the 2016 Investigatory Powers Act, which expanded hacking powers available to police and intelligence services including bulk data collection for the latter, independent terrorism legislation reviewer Lord David Anderson asserted that bulk powers are “likely to be only sparingly used.” As the Guardian noted, just two years later, UK intelligence officials are claiming this is no longer the case due to growing use of encryption:

… The intelligence services claim that the widespread use of encryption means that targeted hacking exercises are no longer effective and so more large-scale hacks are becoming necessary. Anderson’s review noted that the top 40 online activities relevant to MI5’s intelligence operations are now encrypted.

“The bulk equipment interference power permits the UK intelligence services to hack at scale by allowing a single warrant to cover entire classes of property, persons or conduct,” Scarlet Kim, a legal officer at UK civil liberties group Liberty International, told the paper. “It also gives nearly unfettered powers to the intelligence services to decide who and when to hack.”

Liberty also took issue with the intelligence agencies’ 180 on how often the bulk powers would be used, as well as with policies that only allow the investigatory powers commissioner to gauge the impact of a warrant after the hacking is over and done with.

“The fact that you have the review only after the privacy has been infringed upon demonstrates how worrying this situation is.”

Source: UK Intelligence Agencies Are Planning a Major Increase in ‘Large-Scale Data Hacking’

Millions of smartphones were taken offline by an expired certificate

Ericsson has confirmed that a fault with its software was the source of yesterday’s massive network outage, which took millions of smartphones offline across the UK and Japan and created issues in almost a dozen countries. In a statement, Ericsson said that the root cause was an expired certificate, and that “the faulty software that has caused these issues is being decommissioned.” The statement notes that network services were restored to most customers on Thursday, while UK operator O2 said that its 4G network was back up as of early Friday morning.
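Ericsson hasn't published the details of the failed check, but the class of bug is easy to illustrate: software that validates a certificate's `notAfter` date starts rejecting connections the moment that date passes. A minimal sketch using Python's standard library (the date strings here are hypothetical, not Ericsson's actual certificate dates):

```python
import ssl

def cert_expired(not_after: str, now: float) -> bool:
    """Return True if a certificate's notAfter date has passed.

    `not_after` uses the date format found in X.509 certificates,
    e.g. "Dec  5 23:59:59 2018 GMT".
    """
    return ssl.cert_time_to_seconds(not_after) < now

# A certificate that expired at the end of 2018 fails validation
# against any later clock reading -- and everything built on top
# of that certificate fails with it.
jan_2019 = ssl.cert_time_to_seconds("Jan  1 00:00:00 2019 GMT")
print(cert_expired("Dec  5 23:59:59 2018 GMT", jan_2019))  # True
print(cert_expired("Dec  5 23:59:59 2020 GMT", jan_2019))  # False
```

The failure mode is silent until the expiry instant, which is why outages like this appear suddenly and at scale.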

Although much of the focus was on the outages at O2 in the UK and Softbank in Japan, Ericsson later confirmed to Softbank that the issues had simultaneously affected telecom carriers who’d installed Ericsson-made devices across a total of 11 countries. Softbank said that the outage affected its own network for just over four hours.

Source: Millions of smartphones were taken offline by an expired certificate – The Verge

Windows 10 security question: How do miscreants use these for post-hack persistence?

Crafty infosec researchers have figured out how to remotely set answers to Windows 10’s password reset questions “without even executing code on the targeted machine”.

Thanks to some alarmingly straightforward registry tweaks allied with a simple Python script, Illusive Networks’ Magal Baz and Tom Sela were not only able to remotely define their own choice of password reset answers, they were also able to revert local users’ password changes.

Part of the problem is that Windows 10’s password reset questions are in effect hard-coded: you cannot define your own questions, and users are limited to picking from Microsoft’s six. Thus questions such as “what was your first pet’s name” are now defending your box against intruders.

The catch is that to do this, one first needs suitable account privileges. This isn’t an attack vector per se but it is something that an attacker who has already gained access to your network could use to give themselves near-invisible persistence on local machines, defying attempts to shut them out.


“In order to prevent people from reusing their passwords, Windows stores hashes of the old passwords. They’re stored under AES in the registry. If you have access to the registry, it’s not that hard to read them. You can use an undocumented API and reinstate the hash that was active just before you changed it. Effectively I’m doing a password change and nobody is going to notice that,” he continued, explaining that he’d used existing features in the post-exploitation tool Mimikatz to achieve that.

As for protecting against this post-attack persistence problem? “Add additional auditing and GPO settings,” said Sela. The two also suggested that Microsoft allow custom security questions, as well as the ability to disable the feature altogether in Windows 10 Enterprise. The presentation slides are available here (PDF).

Source: Windows 10 security question: How do miscreants use these for post-hack persistence? • The Register

I Tried Predictim AI That Scans for ‘Risky’ Babysitters. Turns out founders don’t have kids

The founders of Predictim want to be clear with me: Their product—an algorithm that scans the online footprint of a prospective babysitter to determine their “risk” levels for parents—is not racist. It is not biased.

“We take ethics and bias extremely seriously,” Sal Parsa, Predictim’s CEO, tells me warily over the phone. “In fact, in the last 18 months we trained our product, our machine, our algorithm to make sure it was ethical and not biased. We took sensitive attributes, protected classes, sex, gender, race, away from our training set. We continuously audit our model. And on top of that we added a human review process.”

At issue is the fact that I’ve used Predictim to scan a handful of people I very much trust with my own son. Our actual babysitter, Kianah Stover, returned a ranking of “Moderate Risk” (3 out of 5) for “Disrespectfulness” for what appear to me to be innocuous Twitter jokes. In fact, she returned a worse ranking than a friend I also tested who routinely spews vulgarities. She’s black, and he’s white.

“I just want to clarify and say that Kianah was not flagged because she was African American,” says Joel Simonoff, Predictim’s CTO. “I can guarantee you 100 percent there was no bias that went into those posts being flagged. We don’t look at skin color, we don’t look at ethnicity, those aren’t even algorithmic inputs. There’s no way for us to enter that into the algorithm itself.”

Source: I Tried Predictim AI That Scans for ‘Risky’ Babysitters

So, the writer of this article tries to push a racism angle, however unlikely it is. Oh well, it’s still a good article about how this system works.


When I entered the first person I aimed to scan into the system, Predictim returned a wealth of personal data—home addresses, names of relatives, phone numbers, alternate email addresses, the works. When I sent a screenshot to my son’s godfather of his scan, he replied, “Whoa.”

The goal was to allow parents to make sure they had found the right person before proceeding with the scan, but that’s an awful lot of data.


After you confirm the personal details and initiate the scan, the process can take up to 48 hours. You’ll get an email with a link to your personalized dashboard, which contains all the people you’ve scanned and their risk rankings, when it’s complete. That dashboard looks a bit like the backend to a content management system, or website analytics service Chartbeat, for those who have the misfortune of being familiar with that infernal service.


Potential babysitters are graded on a scale of 1-5 (5 being the riskiest) in four categories: “Bullying/Harassment,” “Disrespectful Attitude,” “Explicit Content,” and “Drug use.”


Neither Parsa nor Simonoff [Predictim’s founders – ed] have children, though Parsa is married, and both insist they are passionate about protecting families from bad babysitters. Joel, for example, once had a babysitter who would drive him and his brother around while smoking cigarettes in the car. And Parsa points to Joel’s grandfather’s care provider. “Joel’s grandfather, he has an individual coming in and taking care of him—it’s kind of the elderly care—and all we know about that individual is that yes, he hasn’t done a—or he hasn’t been caught doing a crime.”


To be fair, I scanned another friend of mine who is black—someone whose posts are perhaps the most overwhelmingly positive and noncontroversial of anyone on my feed—and he was rated at the lowest risk level. (If he wasn’t, it’d be crystal clear that the thing was racist.) [Wait – what?!]

And Parsa, who is Afghan, says that he has experienced a lifetime of racism himself, and even changed his name from a more overtly Muslim name because he couldn’t get prospective employers to return his calls despite having top notch grades and a college degree. He is sensitive to racism, in other words, and says he made an effort to ensure Predictim is not. Parsa and Simonoff insist that their system, while not perfect, can detect nuances and avoid bias.

The predictors they use also seem a bit overly simplistic and unnuanced. But I bet it’s something Americans will like – another way to easily devolve responsibility for childcare.


Uber’s Arbitration Policy Comes Back to Bite It in the Ass

Over 12,000 Uber drivers found a way to weaponize the ridesharing platform’s restrictive contract in what’s possibly the funniest labor strategy of the year.

But first: a bit of background. One of the more onerous aspects of the gig economy is its propensity to include arbitration agreements in the terms of service—you know, the very long document no one really reads—governing the rights of its workers. These agreements prohibit workers from suing gig platforms in open court, generally giving the company greater leverage and saving it from public embarrassment. Sometimes arbitration is binding; in Uber’s case, drivers can opt out—but only within 30 days of signing, and very few seem to realize they have the option.

Until an unfavorable U.S. Supreme Court ruling earlier this year, independent contractors often joined class-action lawsuits anyway, arguing (sometimes successfully) that they ought to have been classified as employees from the get-go. With that avenue of remuneration cut off, a group of 12,501 Uber drivers found a new option that hinges on the company’s own terms of service. While arbitrating parties are responsible for paying for their own attorneys, the terms state that “in all cases where required by law, the Company [Uber] will pay the Arbitrator’s and arbitration fees.”

If today’s petition in California’s Northern District Court is accurate, those arbitration fees add up rather quickly.

A group of 12,501 drivers opted to take Uber at its word, individually bringing their cases up for arbitration, overwhelming the infrastructure that’s meant to divide and conquer. “As of November 13, 2018, 12,501 demands have been filed with JAMS,” the notice states. (JAMS refers to the arbitration service Uber uses for this purpose.) Continuing on, emphasis ours: “Of those 12,501 demands, in only 296 has Uber paid the initiating filing fees necessary for an arbitration to commence […] only 47 have appointed arbitrators, and […] in only six instances has Uber paid the retainer fee of the arbitrator to allow the arbitration to move forward.”

While a JAMS representative was not immediately available for comment, the cause of the holdup is Uber itself, according to the notice:

Uber knows that its failure to pay the filing fees has prevented the arbitrations from commencing. Throughout this process, JAMS has repeatedly advised Uber that JAMS is “missing the NON-REFUNDABLE filing fee of $1,500 for each demand, made payable to JAMS.” JAMS has also informed Uber that “[u]ntil the Filing Fee is received we will be unable to proceed with the administration of these matters.”

We have no reason to assume this fee would be different based on the nature of each case, so some back-of-the-envelope math indicates the filings alone would cost Uber—a company that already loses sickening amounts of money—over $18.7 million. We’ve reached out to Uber for comment and to learn if they have an estimate of what that number would be after attorney fees and other expenses.
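The back-of-the-envelope math is straightforward to check: 12,501 demands at JAMS’s non-refundable $1,500 filing fee each.

```python
# Figures from the petition: 12,501 arbitration demands, each
# carrying a non-refundable $1,500 JAMS filing fee payable by Uber.
demands = 12_501
filing_fee = 1_500

total = demands * filing_fee
print(f"${total:,}")  # $18,751,500 -- i.e. "over $18.7 million"
```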

Source: Uber’s Arbitration Policy Comes Back to Bite It in the Ass

Australia now has encryption-busting laws as Labor capitulates

Labor has backed down completely on its opposition to the Assistance and Access Bill, and in the process has been totally outfoxed by a government that can barely control the floor of Parliament.

After proposing a number of amendments to the Bill, which Labor party members widely called out as inappropriate in the House of Representatives on Thursday morning, the ALP dropped its proposals to allow the Bill to pass through Parliament before the summer break.

“Let’s just make Australians safer over Christmas,” Bill Shorten said on Thursday evening.

“It’s all about putting people first.”

Shorten said Labor is letting the Bill through provided the government agrees to amendments in the new year.

Under the new laws, Australian government agencies would be able to issue three kinds of notices:

  • Technical Assistance Notices (TAN), which are compulsory notices for a communication provider to use an interception capability they already have;
  • Technical Capability Notices (TCN), which are compulsory notices for a communication provider to build a new interception capability, so that it can meet subsequent Technical Assistance Notices; and
  • Technical Assistance Requests (TAR), which have been described by experts as the most dangerous of all.

Source: Australia now has encryption-busting laws as Labor capitulates | ZDNet

Australia now is a surveillance state.

Reddit, YouTube, Others Push Against EU Copyright Directive – even the big guys think this is a bad idea. Hint: aside from it being copyright, it’s a REALLY bad idea

With Tumblr’s decision this week to ban porn on its platform, everyone’s getting a firsthand look at how bad automated content filters are at the moment. Lawmakers in the European Union want a similar system to filter copyrighted works and, despite expert consensus that this will just fuck up the internet, the legislation moves forward. Now some of the biggest platforms on the web insist we must stop it.

YouTube, Reddit, and Twitch have recently come out publicly against the EU’s new Copyright Directive, arguing that the impending legislation could be devastating to their businesses, their users, and the internet at large.

The Copyright Directive is the first update to the group of nations’ copyright law since 2001, and it’s a major overhaul intended to claw back some of the money that copyright holders believe they’ve lost since internet use exploded around the globe. Fundamentally, its provisions are supposed to punish big platforms like Google for profiting off of copyright infringement and siphon some income back into the hands of those to whom it rightfully belongs.

Unfortunately, the way it’s designed will likely make things more difficult for smaller platforms, harm the free exchange of information, kill memes, and make fair use more difficult to navigate—all the while, tech giants will have the resources to survive the wreckage. You don’t have to take my word for it: listen to Tim Berners-Lee, the father of the world wide web, and the 70 other top technologists who signed a letter arguing against the legislation back in June.

So far, this issue hasn’t received the kind of attention that, say, net neutrality did, at least in part because it’s very complicated to explain and it takes a while for these kinds of things to sink in. We’ve outlined the details in the past on multiple occasions. The main thing to understand is that critics take issue with two pieces of the legislation.

Article 11, better known as the “link tax,” would require online platforms to purchase a license to link out to other sites or quote from articles. That’s the part that threatens the free spread of information.

Article 13 dictates that online platforms install some sort of monitoring system that lets copyright holders upload their work for automatic detection. If something sneaks by the system’s filters, the platform could face full penalties for copyright infringement. For example, a SpongeBob meme could be flagged and blocked because its source image belongs to Nickelodeon; or a dumb vlog could be flagged and blocked because there’s a sponge in the background and the dumb filter thought it was SpongeBob.

Source: Reddit, YouTube, Others Push Against EU Copyright Directive

Facebook Well Aware That Tracking Contacts Is Creepy: Emails

Back in 2015, Facebook had a pickle of a problem. It was time to update the Android version of the Facebook app, and two different groups within Facebook were at odds over what the data grab should be.

The business team wanted to get Bluetooth permissions so it could push ads to people’s phones when they walked into a store. Meanwhile, the growth team, which is responsible for getting more and more people to join Facebook, wanted to get “Read Call Log” permission so that Facebook could track everyone an Android user called or texted in order to make better friend recommendations to them. (Yes, that’s how Facebook may have historically figured out with whom you went on one bad Tinder date and then plopped them into “People You May Know.”) According to internal emails recently seized by the UK Parliament, Facebook’s business team recognized that what the growth team wanted to do was incredibly creepy and was worried it was going to cause a PR disaster.

In a February 4, 2015, email that encapsulates the issue, Facebook Bluetooth Beacon product manager Mike LeBeau is quoted as saying that the request for “read call log” permission was a “pretty high-risk thing to do from a PR perspective but it appears that the growth team will charge ahead and do it.”

LeBeau was worried because a “screenshot of the scary Android permissions screen becomes a meme (as it has in the past), propagates around the web, it gets press attention, and enterprising journalists dig into what exactly the new update is requesting.” He suggested a possible headline for those journalists: “Facebook uses new Android update to pry into your private life in ever more terrifying ways – reading your call logs, tracking you in businesses with beacons, etc.” That’s a great and accurate headline. This guy might have a future as a blogger.

At least he called the journalists “enterprising” instead of “meddling kids.”

Then a man named Yul Kwon came to the rescue saying that the growth team had come up with a solution! Thanks to poor Android permission design at the time, there was a way to update the Facebook app to get “Read Call Log” permission without actually asking for it. “Based on their initial testing, it seems that this would allow us to upgrade users without subjecting them to an Android permissions dialog at all,” Kwon is quoted. “It would still be a breaking change, so users would have to click to upgrade, but no permissions dialog screen. They’re trying to finish testing by tomorrow to see if the behavior holds true across different versions of Android.”

Oh yay! Facebook could suck more data from users without scaring them by telling them it was doing it! This is a little surprising coming from Yul Kwon because he is Facebook’s chief ‘privacy sherpa,’ who is supposed to make sure that new products coming out of Facebook are privacy-compliant. I know because I profiled him, in a piece that happened to come out the same day as this email was sent. A member of his team told me their job was to make sure that the things they’re working on “not show up on the front page of the New York Times” because of a privacy blow-up. And I guess that was technically true, though it would be more reassuring if they tried to make sure Facebook didn’t do the creepy things that led to privacy blow-ups rather than keeping users from knowing about the creepy things.

I reached out to Facebook about the comments attributed to Kwon and will update when I hear back.

Thanks to this evasion of permission requests, Facebook users did not realize for years that the company was collecting information about who they called and texted, which would have helped explain to them why their “People You May Know” recommendations were so eerily accurate. It only came to light earlier this year, three years after it started, when a few Facebook users noticed their call and text history in their Facebook files when they downloaded them.

When that was discovered in March 2018, Facebook played it off like it wasn’t a big deal. “We introduced this feature for Android users a couple of years ago,” it wrote in a blog post, describing it as an “opt-in feature for people using Messenger or Facebook Lite on Android.”

Facebook continued: “People have to expressly agree to use this feature. If, at any time, they no longer wish to use this feature they can turn it off in settings, or here for Facebook Lite users, and all previously shared call and text history shared via that app is deleted.”

Facebook included a photo of the opt-in screen in its post. In small grey font, it informed people they would be sharing their call and text history.

This particular email was seized by the UK Parliament from the founder of a start-up called Six4Three. It was one of many internal Facebook documents that Six4Three obtained as part of discovery in a lawsuit it’s pursuing against Facebook for banning its Pikinis app, which allowed Facebook users to collect photos of their friends in bikinis. Yuck.

Facebook has a lengthy response to many of the disclosures in the documents including to the discussion in this particular email:

Call and SMS History on Android

This specific feature allows people to opt in to giving Facebook access to their call and text messaging logs in Facebook Lite and Messenger on Android devices. We use this information to do things like make better suggestions for people to call in Messenger and rank contact lists in Messenger and Facebook Lite. After a thorough review in 2018, it became clear that the information is not as useful after about a year. For example, as we use this information to list contacts that are most useful to you, old call history is less useful. You are unlikely to need to call someone who you last called over a year ago compared to a contact you called just last week.

Facebook still doesn’t like to mention that this feature is key to making creepily accurate suggestions as to people you may know.

Source: Facebook Well Aware That Tracking Contacts Is Creepy: Emails

Marriott’s breach response is so bad, security experts are filling in the gaps

Last Friday, Marriott sent out millions of emails warning of a massive data breach — some 500 million guest reservations had been stolen from its Starwood database.

One problem: the email sender’s domain didn’t look like it came from Marriott at all.

Marriott sent its notification email from “,” which is registered to a third party firm, CSC, on behalf of the hotel chain giant. But there was little else to suggest the email was at all legitimate — the domain doesn’t load or have an identifying HTTPS certificate. In fact, there’s no easy way to check that the domain is real, except a buried note on Marriott’s data breach notification site that confirms the domain as legitimate.

But what makes matters worse is that the email is easily spoofable.


Take “”. To the untrained eye, it looks like the legitimate domain — but many wouldn’t notice the misspelling. In fact, it belongs to Jake Williams, founder of Rendition Infosec, who registered it to warn users not to trust the domain.

“I registered the domains to make sure that scammers didn’t register the domains themselves,” Williams told TechCrunch. “After the Equifax breach, it was obvious this would be an issue, so registering the domains was just a responsible move to keep them out of the hands of criminals.”
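Lookalike domains work because a one-character change is easy to miss. A toy sketch of the kind of edit-distance check a defender might run against a list of candidate domains (the domain names below are made up for illustration, not the real Marriott ones):

```python
def levenshtein(a: str, b: str) -> int:
    """Classic edit distance: insertions, deletions, substitutions."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

# Hypothetical example: a one-character misspelling of a breach-
# notification domain sits at edit distance 1, which is why defenders
# pre-register such lookalikes before scammers can.
legit = "hotel-breach-notice.com"
spoof = "hotel-breach-notlce.com"  # 'i' swapped for 'l'
print(levenshtein(legit, spoof))   # 1
```

Registrars impose no such check, so registering the near-misses yourself, as Williams did, is the only reliable way to keep them out of criminal hands.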


Williams isn’t the only one who’s resorted to defending Marriott customers from cybercriminals. Nick Carr, who works at security giant FireEye, registered the similarly named “” on the day of the Marriott breach.

“Please watch where you click,” he wrote on the site. “Hopefully this is one less site used to confuse victims.” Had Marriott just sent the email from its own domain, it wouldn’t be an issue.

Source: Marriott’s breach response is so bad, security experts are filling in the gaps — at their own expense | TechCrunch

FYI: NASA has sent a snatch-and-grab spacecraft to an asteroid to seize some rock and send it back to Earth

NASA’s mission to send a probe to an asteroid, dig up a chunk, and send the material back to Earth is now half-way complete. The agency says its OSIRIS-REx spacecraft has reached its hunk-of-rock target after a trip lasting two years and two billion miles.

The spacecraft, technically the Origins, Spectral Interpretation, Resource Identification, Security, Regolith Explorer (OSIRIS-REx) is orbiting the asteroid Bennu, a diamond-shaped chunk of space rock with a varying orbit that keeps it around 100 million miles (160 million kilometers) from Earth.

“Initial data from the approach phase show this object to have exceptional scientific value,” said Dante Lauretta, the mission’s principal investigator. “We can’t wait to get to work studying and characterizing Bennu’s rough and rugged surface to find out where the right spot is to collect the sample and bring it back to Earth.”

“Today has been very exciting, but the true nail-biting moment will be the sample collection. The best times are ahead of us, so stay tuned. The exploration of Bennu has just begun, and we have a lifetime of adventure ahead of us.”

Bennu is thought to be a lump of rock from the earliest days of the Solar System. After a couple of flybys, OSIRIS-REx will settle into a steady orbit a few miles above the surface. It will spend the next 505 days circling the asteroid and scanning it with cameras, LIDAR and spectrographs to try and find out as much information as possible about its composition.

The asteroid is of particular interest to NASA because it may contain water and clays from the protoplanetary disc that formed the Sun and the planets in our Solar System. So, once it has picked the likeliest and safest spot to find some of these materials, the spacecraft will extend the Touch-And-Go Sample Acquisition Mechanism (TAGSAM) – a 3.35-meter (11 ft) robotic arm – and grab a handful of matter from the surface.

Once that’s done, and assuming OSIRIS-REx doesn’t hit the surface, the spacecraft will begin the long voyage back to Earth. It’s expected to arrive in September 2023, and the sealed sample container will reenter the atmosphere behind a heat shield and float back to scientists by parachute into the Utah desert.

Source: FYI: NASA has sent a snatch-and-grab spacecraft to an asteroid to seize some rock and send it back to Earth • The Register

Nvidia Uses AI to Render Virtual Worlds in Real Time

Nvidia announced that AI models can now draw new worlds without using traditional modeling techniques or graphics rendering engines. This new technology uses an AI deep neural network to analyze existing videos and then apply the visual elements to new 3D environments.

Nvidia claims this new technology could provide a revolutionary step forward in creating 3D worlds because the AI models are trained from video to automatically render buildings, trees, vehicles, and objects into new 3D worlds, instead of requiring the normal painstaking process of modeling the scene elements.

But the project is still a work in progress. As we can see from the image on the right, which was generated in real time on an Nvidia Titan V graphics card using its Tensor cores, the rendered scene isn’t as crisp as we would expect in real life, and it isn’t as clear as we would expect from a normally modeled scene in a 3D environment. However, the result is much more impressive when we see the real-time output in the YouTube video below. The key here is speed: The AI generates these scenes in real time.

Nvidia AI Rendering

Nvidia’s researchers have also used this technique to model other motions, such as dance moves, and then apply those same moves to other characters in real-time video. That does raise moral questions, especially given the proliferation of altered videos like deep fakes, but Nvidia feels that it is an enabler of technology and the issue should be treated as a security problem that requires a technological solution to prevent people from rendering things that aren’t real.

The big question is when this will come to the gaming realm, but Nvidia cautions that this isn’t a shipping product yet. The company did theorize that it would be useful for enhancing older games by analyzing the scenes and then applying trained models to improve the graphics, among many other potential uses. It could also be used to create new levels and content in older games. In time, the company expects the technology to spread and become another tool in the game developers’ toolbox. The company has open sourced the project, so anyone can download and begin using it today, though it is currently geared towards AI researchers.

Nvidia says this type of AI analysis and scene generation can occur with any type of processor, provided it can deliver enough AI throughput to manage the real-time feed. The company expects that performance and image quality will improve over time.

Nvidia sees this technique eventually taking hold in gaming, automotive, robotics, and virtual reality, but it isn’t committing to a timeline for an actual product. The work remains in the lab for now, but the company expects game developers to begin working with the technology in the future. Nvidia is also conducting a real-time demo of AI-generated worlds at the AI research-focused NeurIPS conference this week.

Source: Nvidia Uses AI to Render Virtual Worlds in Real Time

When Discounts Hurt Sales: Too much discounting and too many positive reviews can hurt sales

By tracking the sales of 19,978 deals on and conducting a battery of identification and falsification tests, we find that deep discounts reduce sales. A 1% increase in a deal’s discount decreases sales by 0.035%–0.256%. If a merchant offers an additional 10% discount from the sample mean of 55.6%, sales could decrease by 0.63%–4.60%, or 0.80–5.24 units and $42–$275 in revenue. This negative effect of discount is more prominent among credence goods and deals with low sales, and when the deals are offered in cities with higher income and better education. Our findings suggest that consumers are concerned about product quality, and excessive discounts may reduce sales immediately. A follow-up lab experiment provides further support to this quality-concern explanation. Furthermore, it suggests the existence of a “threshold” effect: the negative effect on sales is present only when the discount is sufficiently high. Additional empirical analysis shows that deals displaying favorable third-party support, such as Facebook fans and online reviews, are more susceptible to this adverse discount effect.
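The headline numbers in the abstract are internally consistent and easy to reproduce: an additional 10 percentage points of discount from the 55.6% sample mean is roughly an 18% relative increase in discount, which, scaled by the reported elasticities of 0.035–0.256, yields the stated 0.63%–4.60% drop in sales.

```python
mean_discount = 55.6           # sample mean discount, in percent
extra_discount = 10.0          # additional discount, in percentage points
elasticities = (0.035, 0.256)  # % sales drop per 1% increase in discount

# Convert the absolute bump into a relative (%) increase, then scale.
relative_increase = extra_discount / mean_discount * 100  # ~17.99%
drops = [relative_increase * e for e in elasticities]
print([round(d, 2) for d in drops])  # [0.63, 4.6]
```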

Source: When Discounts Hurt Sales: The Case of Daily-Deal Markets | Information Systems Research

Hack of 100 Million Quora Users Could Be Worse Than it Sounds

On Monday, the question and answer site Quora announced that a third party was able to gain access to virtually every data point the company keeps on 100 million users. Even if you don’t recall having a Quora account, you might want to make sure.

In a blog post, Quora CEO Adam D’Angelo explained that the company first noticed the data breach on Friday and has since enlisted independent security researchers to help investigate what happened and mitigate the damage. D’Angelo said that affected users should be receiving an email that explains the situation, but if you have a Quora account, it’s probably a good idea to go ahead and change your password—especially if you reuse passwords. In all, the attackers were able to compromise a lot of data. Quora says that information includes:

  • Account information, e.g. name, email address, encrypted (hashed) password, data imported from linked networks when authorized by users
  • Public content and actions, e.g. questions, answers, comments, upvotes
  • Non-public content and actions, e.g. answer requests, downvotes, direct messages (note that a low percentage of Quora users have sent or received such messages)

Fortunately, Quora says it has not stored any identifying information associated with anonymous inquiries and replies.

For users, the biggest immediate concern should be that part about hackers accessing “data imported from linked networks.” Quora allows users to sign in with Facebook or Google and it’s possible that personal information from one of those networks also made it into the wrong hands. We’ve asked all three companies for more details on exactly what was compromised but we did not receive an immediate reply.

We also asked Quora what type of cryptographic hashing method it uses. The hackers should only be able to recover passwords through brute-force guessing, and how long that takes depends on the strength of the hashing scheme.
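Quora hasn’t said which scheme it uses, so the following is only a generic sketch of why the hashing method matters: a salted, deliberately slow key-derivation function such as PBKDF2 (in Python’s standard library) makes every brute-force guess proportionally more expensive than a plain fast hash would:

```python
import hashlib
import os
import time

def hash_password(password: str, salt: bytes, iterations: int) -> bytes:
    """Salted, deliberately slow hash via PBKDF2-HMAC-SHA256 (stdlib)."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)

salt = os.urandom(16)  # a unique salt per user defeats precomputed tables

# A fast hash lets an attacker test millions of guesses per second; a high
# iteration count slows each guess down by the same factor for the attacker.
for iterations in (1, 100_000):
    start = time.perf_counter()
    hash_password("correct horse battery staple", salt, iterations)
    elapsed = time.perf_counter() - start
    print(f"{iterations:>7} iterations: {elapsed * 1000:8.2f} ms per guess")
```

The same password with the same salt always yields the same digest, which is how a login check works without ever storing the plaintext.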

The good news is that there’s no financial information associated with Quora users; the bad news is that the website is more like a social network than it might seem. People ask personal questions that could help draw a personality profile and others give answers that could do the same. Earlier this year, when Facebook admitted that it had lost control of 87 million users’ data, the general public was reminded that data breaches aren’t just about identity theft. In that case, a firm working for the 2016 Trump presidential campaign obtained access to the data, raising concerns that it was used for targeted political messaging. The firm has disputed the number of users whose data it obtained and maintains that none of the data was directly employed during the 2016 election.

For now, check your inbox for any notifications; Quora has also published an FAQ.


Source: Hack of 100 Million Quora Users Could Be Worse Than it Sounds

China Set to Launch First-Ever Spacecraft to the Far Side of the Moon, where it will try to grow plants and listen to radio waves blocked off by the Moon

Early in the New Year, if all goes well, the Chinese spacecraft Chang’e-4 will arrive where no craft has been before: the far side of the Moon. The mission is scheduled to launch from Xichang Satellite Launch Centre in Sichuan province on December 8. The craft, comprising a lander and a rover, will then enter the Moon’s orbit, before touching down on the surface.

If the landing is successful, the mission’s main job will be to investigate this side of the lunar surface, which is peppered with many small craters. The lander will also conduct the first radio astronomy experiments from the far side of the Moon—and the first investigations to see whether plants will grow in the low-gravity lunar environment.

Source: China Set to Launch First-Ever Spacecraft to the Far Side of the Moon – Scientific American

Researchers discover SplitSpectre, a new Spectre-like CPU attack via JavaScript

Three academics from Northeastern University and three researchers from IBM Research have discovered a new variation of the Spectre CPU vulnerability that can be exploited via browser-based code.

The research team says this new CPU vulnerability is also a design flaw in the microarchitecture of modern processors, one that can be exploited by attacking the process of “speculative execution,” an optimization technique used to improve CPU performance.

The vulnerability, which the researchers codenamed SplitSpectre, is a variation of the original Spectre v1 vulnerability, discovered last year and made public in January 2018.

The difference in SplitSpectre is not in what parts of a CPU’s microarchitecture the flaw targets, but how the attack is carried out.

According to the research team, a SplitSpectre attack is far easier to execute than an original Spectre attack.

For their academic paper, the research team says it successfully carried out a SplitSpectre attack against Intel Haswell and Skylake CPUs, and AMD Ryzen processors, via SpiderMonkey 52.7.4, Firefox’s JavaScript engine.

Source: Researchers discover SplitSpectre, a new Spectre-like CPU attack | ZDNet

Twitter user hacks 50,000 printers to tell people to subscribe to PewDiePie

A Twitter user going by the pseudonym @TheHackerGiraffe has hacked over 50,000 printers to print out flyers telling people to subscribe to PewDiePie’s YouTube channel.

The messages were sent out yesterday, November 29, and caused quite a stir among the users who received them, as they ended up in all sorts of places, from high-end multi-functional printers at large companies to small handheld receipt printers at gas stations and restaurants.

The only conditions were that the printer was connected to the Internet, ran old firmware, and had “printing” ports left exposed online.
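The article doesn’t name the ports, but exposed “printing” ports usually means TCP 9100 (raw/JetDirect), 631 (IPP) or 515 (LPD). As a hedged sketch of the mechanism, anything written to an open port 9100 is typically printed verbatim; the hostname below is hypothetical, and pointing this at a printer you don’t own is, of course, illegal:

```python
import socket

def build_raw_job(message: str) -> bytes:
    """Plain text followed by a form feed, which most raw-port-9100
    (JetDirect-style) printers will print as-is."""
    return message.encode("ascii", errors="replace") + b"\x0c"

def print_raw(host: str, message: str, port: int = 9100) -> None:
    """Send a raw job to a (deliberately or accidentally exposed) printer."""
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(build_raw_job(message))

# print_raw("printer.example.com", "Subscribe to PewDiePie")  # hypothetical host
```

The form feed (`\x0c`) simply tells the printer to eject the page once the text is spooled.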

The message the printers received was a simple one. It urged people to subscribe to PewDiePie’s YouTube channel so that PewDiePie, a famous YouTuber from Sweden whose real name is Felix Kjellberg, can keep the crown of most-subscribed YouTube channel.

If this sounds …odd… it’s because over the past month, an Indian record label called T-Series has caught up with and surpassed PewDiePie, once considered untouchable in terms of YouTube followers.

The Swedish YouTube star made a comeback after his fans banded together in various social media campaigns, but T-Series is catching up with PewDiePie again.

Source: Twitter user hacks 50,000 printers to tell people to subscribe to PewDiePie | ZDNet

EU anti-geo-blocking rules come into force: unlocking e-commerce in the EU

Under the new rules, traders will not be able to discriminate between customers with regard to the general terms and conditions – including prices – in three cases:

  • for goods that are either delivered in a member state to which the trader offers delivery or are collected at a location agreed with the customer
  • for electronically supplied services such as cloud, data warehousing and website hosting
  • for services such as hotel accommodation and car rental which are received by the customer in the country where the trader operates

Source: Geo-blocking: unlocking e-commerce in the EU – Consilium

Geo-blocking refers to practices used by online sellers that result in the denial of access to websites from other Member States. It also includes situations where access to a website is granted, but the customer from abroad is prevented from finalising the purchase, or is asked to pay with a debit or credit card from a certain country. “Geo-discrimination” also takes place when buying goods and services offline, e.g. when a consumer is physically present at the trader’s location but is either prevented from accessing a product or service, or is offered different conditions.

The Geo-blocking Regulation aims to provide more opportunities for consumers and businesses within the EU’s internal market. In particular, it addresses the problem of (potential) customers not being able to buy goods and services from traders located in a different Member State for reasons related to their nationality, place of residence or place of establishment, which discriminates against them when they try to access the best offers, prices or sales conditions available to nationals or residents of the trader’s Member State.

The FAQ linked above answers more questions.


Reports of First Genetically Enhanced Babies Spark Outrage

Twin girls born earlier this month had their DNA altered to prevent them from contracting HIV, according to an Associated Press report. If confirmed, the births would signify the first gene-edited babies in human history—a stunning development that’s sparking an outcry from scientists and ethicists.

Professor He Jiankui of Shenzhen, China, made the announcement earlier today in Hong Kong, informing the Associated Press of his apparent achievement and releasing an accompanying video. He claims the twin girls were born earlier this month and that he altered their DNA with the CRISPR-Cas9 gene-editing tool in order to confer built-in immunity to the AIDS virus. The claim has yet to be independently confirmed, and the findings haven’t been published in a peer-reviewed journal; outside experts haven’t had an opportunity to corroborate the claims or assess the efficacy and safety of the procedure.

A BBC article describes this news as “dubious,” but there’s reason to believe the claims could be true. Back in 2016, scientists in China used CRISPR to introduce a beneficial mutation that disables an immune-cell gene called CCR5, conferring immunity by knocking out a critical receptor, or mode of entry, that the HIV virus uses to infect a cell. The experiment showed that someday it might be possible to deliberately endow human DNA with this desirable mutation, the key word being “someday.” Immediately after the 2016 experiment, the scientists destroyed the embryos, saying more research would be required before modified embryos could be implanted in a mother’s womb.

Alarmingly, Professor He has decided, quite unilaterally, to move ahead with this research, reportedly implanting the modified embryos into the mother’s womb, a step most experts consider highly premature and reckless at this stage. Gene editing of human embryos is permitted for research in the United States, but all embryos must be destroyed within a few days. A huge issue with this form of gene editing is that it’s done on germline cells, which means the introduced traits are heritable. Such is the case with these twins in China, who, if they are indeed genetically modified, will pass the modified DNA down to any children they have. Scientists are still a long way from knowing whether this procedure is effective and safe.

In this case, there’s good reason for doubt. The CCR5 gene is known to trigger offsetting conditions, such as a higher risk of contracting the West Nile Virus. Research suggests it also increases a person’s chance of dying from influenza. Also, CRISPR is a notoriously blunt instrument, and there’s no way of knowing if He’s procedure introduced knock-off effects, some of which wouldn’t be known until the girls reach maturity.

Details of the procedure are still scarce, including the identity of the parents and where the research was conducted, but the preliminary information acquired by the AP is cause for concern.

The AP reports that CRISPR-cas9 gene editing was done during the in vitro fertilization, or IVF, stage. Several days later, the cells of the modified embryos were checked for signs of DNA editing. Of the 22 embryos edited, 11 were used in six implant attempts. Only one worked, resulting in the twin births. In all, some seven couples participated in the procedure.

Follow-up tests suggest one of the twins had just one copy of the intended gene alteration, while the other had both. Individuals with one copy of the mutated gene can still contract HIV, though they may have an increased ability to ward off the effects of the disease. Many experts say the procedure should not have been allowed to happen at all, and that the decision to implant the “partially” modified embryo was an even worse indiscretion, amounting to a form of human experimentation.

Speaking to the AP, Dr. Kiran Musunuru, a University of Pennsylvania gene editing expert, said in this particular child, “there really was almost nothing to be gained in terms of protection against HIV and yet you’re exposing that child to all the unknown safety risks,” adding that the entire enterprise is “unconscionable” and “an experiment on human beings that is not morally or ethically defensible.”

Bioethicist Julian Savulescu from the University of Oxford described the experiment as “monstrous” in an interview with the BBC.

“Gene editing itself is experimental and is still associated with off-target mutations, capable of causing genetic problems early and later in life, including the development of cancer,” Savulescu told the BBC. “This experiment exposes healthy normal children to risks of gene editing for no real necessary benefit.”

If that’s not enough, this story gets even murkier.

He, who works at the Southern University of Science and Technology of China in Shenzhen, gave the university official notice of his experiment “long after he said he started it,” AP reports. It’s not clear if the participants understood the true nature of the experiment, which was described as an “AIDS vaccine development” program. The Shenzhen university said He’s work “seriously violated academic and ethics standards,” and an investigation is in the works. He, who owns two genetics companies in China, was reportedly assisted by U.S. scientist Michael Deem, who was an advisor to He when they worked together at Rice University in Houston. Deem also has stakes in both of He’s companies.

Condemnation of the procedure, however, is not universal among experts. Harvard geneticist George Church defended the alleged human gene-editing, telling AP that HIV is a “major and growing public health threat” and that the work done by He was “justifiable.”

A fascinating aspect of this alarming story is that He was not trying to cure a genetic disease. Rather, it was a deliberate attempt to endow humans with the capacity to ward off a future infection, namely one caused by the AIDS virus. In this sense, the procedure (if it happened in the way He is claiming), might be considered an enhancement rather than a therapy. As such, these girls may go down in history as the first enhanced humans produced by gene-editing.

Unfortunately, the brazen recklessness exhibited by He will now place a dark taint on that futuristic prospect. Yes, we may eventually use gene-editing to cure diseases and endow our species with new capacities—but such research cannot happen at the whim of rogue scientists.

[Associated Press and BBC]

Source: Reports of First Genetically Enhanced Babies Spark Outrage

Your phone indeed has ears that you may not know about: the companies that listen to background noise while apps containing their software are open

No, your phone is not “listening” to you in the strictest sense of the word. But, yes, all your likes, dislikes and preferences are clearly being heard by apps on your phone whose terms you oh-so-easily clicked “agree” to while installing.

How so?

If you are in India, the answer to the question will lead you to Zapr, a service backed by heavyweights such as the Rupert Murdoch-led media group Star, Indian e-commerce leader Flipkart, Indian music streaming service Saavn, and mobile phone maker Micromax, among more than a dozen others. The company owning Zapr is named Red Brick Lane Marketing Solutions Pvt Ltd. (Paytm founder Vijay Shekhar Sharma and Sanjay Nath, co-founder and managing partner, Blume Ventures, were early investors in Zapr but are no longer so, according to filings with the ministry of corporate affairs. Sharma and Blume are among the investors in Sourcecode Media Pvt Ltd, which owns FactorDaily.)

Zapr, in fact, is one of the few companies in the world that has developed a solution that uses your mobile device’s microphone to recognise the media content you are watching or listening to in order to help brands and channels understand consumer media consumption. In short, it monitors sounds around you to contextualise you better for advertising and marketing targeting.


Advertisers globally spend some $650 billion annually, and this cohort believes that profiling consumers by analysing their ambient sounds helps target advertising better. The group includes Chinese company ACRCloud, Audible Magic from the US, the Netherlands’ Betagrid Media, and Zapr from India.

Cut back to the Zapr headquarters on Old Madras Road in Bengaluru. One of the apps that inspired Zapr’s founding team was the popular music detection and identification app Shazam. But its three co-founders saw an opportunity to go further. “Instead of detecting music, can we detect all kinds of medium? Can we detect television? Can we detect movies in a theatre? Can we detect video on demand? Can we really build a profile for a user about their media consumption habits… and that really became the idea, the vision we wanted to solve for,” Sandipan Mondal, CEO of Zapr Media Labs, said in an interview last Thursday.
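Zapr hasn’t disclosed its algorithm, but Shazam-style content recognition generally works by hashing the strongest peaks of an audio clip’s frequency spectrum into a compact code that can be matched against a database of known broadcasts. A heavily simplified, standard-library-only sketch of the idea, using synthetic tones in place of real audio:

```python
import cmath
import hashlib
import math

def spectrum(samples):
    """Naive DFT magnitude spectrum (slow, but fine for a short sketch)."""
    n = len(samples)
    return [abs(sum(s * cmath.exp(-2j * math.pi * k * t / n)
                    for t, s in enumerate(samples)))
            for k in range(n // 2)]

def fingerprint(samples, n_peaks=2):
    """Hash the indices of the strongest spectral peaks, Shazam-style:
    the same audio always yields the same short, matchable code."""
    mags = spectrum(samples)
    peaks = sorted(sorted(range(len(mags)), key=mags.__getitem__)[-n_peaks:])
    return hashlib.sha1(",".join(map(str, peaks)).encode()).hexdigest()[:12]

def tone(bins, n=256):
    """Synthetic 'audio clip': a sum of pure tones on the given DFT bins."""
    return [sum(math.sin(2 * math.pi * b * t / n) for b in bins)
            for t in range(n)]

fp_a = fingerprint(tone([5, 12]))   # one "clip"
fp_b = fingerprint(tone([7, 30]))   # a different "clip"
print(fp_a, fp_b)
```

Real systems add time-windowed spectrograms, peak pairing and noise robustness, but the core trick is the same: reduce ambient sound to small hashes that identify the content without storing the audio itself.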


But, Zapr’s tech comes with privacy and data concerns – lots of it. The way its tech gets into your phone is dodgy: its code ride on third-party apps ranging from news apps to gaming apps to video streaming apps. You might be downloading Hotstar or a Dainik Jagran app or a Chotta Beem app on your phone little knowing that Zapr’s or an equivalent audio monitoring code sits on those apps to listen to sounds around you in an attempt to see what media content you are consuming.

In most cases reviewed by FactorDaily in a two-week exercise, it was not obvious that the app would monitor audio via the smartphone or mobile device’s microphone for use by another party (Zapr) for ad-targeting purposes. Some apps hinted at Zapr’s tech at the bottom of the app description and some in the form of a pop-up; an app from Nazara games, for instance, mentioned that it required mic access to ‘Record Audio for better presentation’. Sometimes the pop-up would only show up a few days after the download. And, often, the disclosure was buried somewhere in the app’s privacy policy.

None of these apps made explicitly clear what the audio access via the microphone was for. “The problem with apps which embed this technology is that their presence is not outright disclosed and is difficult to find. Also, there is not an easy way to find out the apps in the PlayStore that have this tech embedded in them,” said Thejesh G N, an info-activist and the founder of DataMeet, a community of data scientists and open data enthusiasts.

Source: Your phone indeed has ears that you may not know about | FactorDaily

A Chinese startup may have cracked solid-state batteries

According to Chinese media, Qing Tao Energy Development Co, a startup out of Tsinghua University, has deployed a solid-state battery production line in Kunshan, East China. Reports claim the line has a capacity of 100MWh per year — which is planned to increase to 700MWh by 2020 — and that the company has achieved an energy density of more than 400Wh/kg, compared to new-generation lithium-ion batteries, which boast an energy density of around 250-300Wh/kg.
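Those density figures translate directly into mass. A hedged illustration, using a hypothetical 60 kWh EV pack and counting cell mass only (real packs add packaging, cooling and electronics overhead on top):

```python
pack_wh = 60_000  # a hypothetical 60 kWh EV pack, cell mass only

# Energy densities from the report: claimed solid-state vs. the quoted
# 250-300 Wh/kg range for new-generation lithium-ion cells.
densities = {
    "solid-state (claimed)": 400,
    "new-gen lithium-ion (low)": 250,
    "new-gen lithium-ion (high)": 300,
}

cell_mass_kg = {label: pack_wh / wh_per_kg
                for label, wh_per_kg in densities.items()}

for label, kg in cell_mass_kg.items():
    print(f"{label:>27}: {kg:5.0f} kg of cells")
```

At the claimed 400 Wh/kg, the same pack needs 150 kg of cells versus 200–240 kg for current chemistry, which is where most of the excitement about solid-state batteries comes from.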

Source: A Chinese startup may have cracked solid-state batteries

Creepy Chinese AI shames CEO for jaywalking on public displays throughout city – but detected the CEO on an ad on a bus

Dong Mingzhu, chairwoman of Gree Electric Appliances, China’s biggest maker of air conditioners, found her face splashed on a huge screen erected along a street in the port city of Ningbo that displays images of people caught jaywalking by surveillance cameras.

That artificial intelligence-backed surveillance system, however, erred in capturing Dong’s image on Wednesday from an advertisement on the side of a moving bus.

The traffic police in Ningbo, a city in the eastern coastal province of Zhejiang, were quick to recognise the mistake, writing in a post on microblog Sina Weibo on Wednesday that it had deleted the snapshot. It also said the surveillance system would be completely upgraded to cut incidents of false recognition in future.


Since last year, many cities across China have cracked down on jaywalking by investing in facial recognition systems and advanced AI-powered surveillance cameras. Jaywalkers are identified and shamed by displaying their photographs on large public screens.

First-tier cities like Beijing and Shanghai were among the first to employ those systems to help regulate traffic and identify drivers who violate road rules, while Shenzhen traffic police began displaying photos of jaywalkers on large screens at major intersections from April last year.

Source: Facial recognition snares China’s air con queen Dong Mingzhu for jaywalking, but it’s not what it seems | South China Morning Post

Be Warned: Customer Service Agents Can See What You’re Typing in Real Time on their website forms

Next time you’re chatting with a customer service agent online, be warned that the person on the other side of your conversation might see what you’re typing in real time. A reader sent us the following transcript from a conversation he had with a mattress company after the agent responded to a message he hadn’t sent yet.

Something similar recently happened to HmmDaily’s Tom Scocca. He got a detailed answer from an agent one second after he hit send.

Googling led Scocca to a live chat service that offers a feature it calls “real-time typing view” to allow agents to have their “answers prepared before the customer submits his questions.” Another live chat service, which lists McDonald’s, Ikea, and PayPal as its customers, calls the same feature “message sneak peek,” saying it will allow you to “see what the visitor is typing in before they send it over.” Salesforce Live Agent also offers “sneak peek.”

On the upside, you get fast answers. On the downside, your thought process is being unknowingly observed. For the creators, this is technological magic, a deception that will result, they hope, in amazement and satisfaction. But once revealed by an agent who responds too quickly or one who responds before the question is asked, the trick falls apart, and what is left behind feels distinctly creepy, like a rabbit pulled from a hat with a broken neck. “Why give [customers] a fake ‘Send message’ button while secretly transmitting their messages all along?” asks Scocca.

This particular magic trick happens thanks to JavaScript operating in your browser and detecting what’s happening on a particular site in real time. It’s also how companies capture information you’ve entered into web forms before you’ve hit submit. Companies could lessen the creepiness by telling people their typing is seen in real time, or could eliminate the send button altogether (but that would undoubtedly confuse people, as if the useless “close door” buttons in elevators or the placebo buttons at crosswalks disappeared overnight).
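The article doesn’t show the wire protocol, so the event format below is hypothetical, but it illustrates the agent-side mechanics of “message sneak peek”: every input event updates the agent’s live view of the draft, and the customer’s “send” click merely marks the moment the customer believes the message went out.

```python
def agent_view(events):
    """Replay a chat widget's event stream the way a 'sneak peek'
    dashboard would: keystrokes update the agent's live view of the
    draft; 'send' just moves the current draft into the sent log."""
    draft, sent = "", []
    for kind, payload in events:
        if kind == "keystroke":   # fired on every input event, pre-send
            draft = payload       # current contents of the text box
        elif kind == "send":      # the visible 'Send message' click
            sent.append(draft)
            draft = ""
    return draft, sent

# Hypothetical stream: the agent sees the half-typed second question.
events = [("keystroke", "Is the mat"),
          ("keystroke", "Is the mattress firm?"),
          ("send", None),
          ("keystroke", "Also, do you shi")]

draft, sent = agent_view(events)
print("sent:", sent)
print("agent already sees:", draft)
```

This is why an agent can answer “too fast”: from their side, the send button is cosmetic, and the question was visible as it was typed.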

Lest you think unexpected monitoring is limited to your digital interactions, know that you should be paranoid during telephone chats too. As the New York Times reported over a decade ago, during those calls where you are reassured of “being recorded for quality assurance purposes,” your conversation while on hold is recorded. So even if there is music playing, monitors may later listen to you fight with your spouse, sing a song, or swear about the agent you’re talking to.

Source: Be Warned: Customer Service Agents Can See What You’re Typing in Real Time

US told to quit sharing data with human rights-violating surveillance regime. Which one, you ask? That’d be the UK

UK authorities should not be granted access to data held by American companies because British laws don’t meet human rights obligations, nine nonprofits have said.

In a letter to the US Department of Justice, organisations including Human Rights Watch and the Electronic Frontier Foundation set out their concerns about the UK’s surveillance and data retention regimes.

They argue that the nation doesn’t adhere to human rights obligations and commitments, and therefore it should not be allowed to request data from US companies under the CLOUD Act, which Congress slipped into the Omnibus Spending Bill earlier this year.

The law allows the US government to sign formal, bilateral agreements with other countries, setting standards for cross-border investigative requests for digital evidence related to serious crime and terrorism.

It requires that these countries “adhere to applicable international human rights obligations and commitments or demonstrate respect for international universal human rights”. The civil rights groups say the UK fails to make the grade.

As such, it urged the US administration not to sign an executive order allowing the UK to request access to data, communications content and associated metadata, noting that the CLOUD Act “implicitly acknowledges” some of the info gathered might relate to US folk.

Critics are concerned this could then be shared with US law enforcement, thus breaking the Fourth Amendment, which requires a warrant to be served for the collection of such data.

Setting out the areas in which the UK falls short, the letter pointed to pending laws on counter-terrorism, saying that, as drafted, they would “excessively restrict freedom of expression by criminalizing clicking on certain types of online content”.

Source: US told to quit sharing data with human rights-violating surveillance regime. Which one, you ask? That’d be the UK • The Register

Mobile providers in NL urged to stop killing unused data and calling minutes: the user has technically already paid for them, yet pays a penalty for exceeding the cap

Telecom providers must stop letting unused data and calling minutes expire. So writes the Consumentenbond, the Dutch consumers’ association, in a letter to the ten largest providers.

Consumers with a mobile subscription currently lose the unused calling minutes and data in their bundle at the end of every month. At the same time, they pay extra for every minute or MB they use outside their bundle, sometimes as much as €0.31 per minute or €0.15 per MB.

Source: ‘Providers pak ongebruikte data en belminuten niet af’ – Emerce

OneDrive is broken: Microsoft’s cloudy storage drops from the sky for EU users

Oh, you tease

It is OneDrive’s turn to get a beating with the stick of fail as the service took a tumble this morning.

Issues first began appearing at around 08:00 GMT as users around Europe logged in, expecting to find their files, and found instead a picture of a bicycle with a flat tyre or a dropped ice cream cone. Oh, you guys!

The fact that Microsoft has a wide variety of images to illustrate failure will be of little comfort to users who depend on the cloud storage system.

OneDrive is Microsoft’s answer to the likes of DropBox and its ilk, allowing users to stash files (up to 1TB for an individual Office 365 subscriber) on Redmond’s servers and synchronise them to their devices or access through a web client.

Except now it doesn’t. We checked it out at Vulture Central and found that, yes, synchronisation had stopped, and while it was possible to log into the web portal for a teasing look at one’s files, actually trying to open them resulted in an error.

Even local Office 365 apps, such as Word, are jolly unhappy, reporting errors on saving documents due to the inaccessibility of the cloudy storage. The experience is a lesson on the consequences of too much dependence on the cloud.

Source: OneDrive is broken: Microsoft’s cloudy storage drops from the sky for EU users • The Register

the cloud strikes again
