WhatsApp Security Design Could Let an Infiltrator Add Members to Group Chats

Only admins can add new members to private groups. But the researchers found that anyone in control of the server can spoof the authentication process, essentially granting themselves the privileges necessary to add new members who can snoop on private conversations. The obvious examples that come to mind are hackers who manage to gain access to WhatsApp servers or a government successfully pressuring WhatsApp to give it access to targeted group chats.
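The root issue is that group-management messages aren't cryptographically tied to an admin, so whoever runs the server can inject them. A minimal sketch of the missing check, using a stdlib HMAC purely for illustration — this is not WhatsApp's actual protocol, the key name and message format are invented, and a real design would use public-key signatures so members could verify without sharing the admin secret:

```python
import hmac, hashlib, json

# Hypothetical: group-management actions must carry a MAC keyed by a secret
# that only group admins hold. A server forging an "add member" action
# without that key fails verification on every member's client.
ADMIN_KEY = b"admin-only-secret"  # invented key material, never given to the server

def sign_action(action: dict, key: bytes) -> bytes:
    """MAC over a canonical encoding of the group-management action."""
    payload = json.dumps(action, sort_keys=True).encode()
    return hmac.new(key, payload, hashlib.sha256).digest()

def client_accepts(action: dict, tag: bytes) -> bool:
    """A member's client only honors actions bearing a valid admin MAC."""
    return hmac.compare_digest(tag, sign_action(action, ADMIN_KEY))

add_member = {"op": "add", "group": "friends", "user": "eavesdropper"}
admin_tag = sign_action(add_member, ADMIN_KEY)       # produced by a real admin
server_tag = sign_action(add_member, b"server-key")  # a server forging the action
```

Without such a check, the server's word alone is enough to grow the group — which is exactly the gap the researchers describe.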

Perhaps even more troubling, an attacker in control of the server could also manipulate or suppress the messages that would alert group members that someone new had been added, according to the researchers. However, WhatsApp denies this is an issue.

Wired confirmed the researchers’ findings with a WhatsApp spokesperson. While the company, which is owned by Facebook, acknowledges the issue of server security, the spokesperson pushed back on the idea that attackers could block, cache, or otherwise prevent the alert that new members have been added.

Source: WhatsApp Security Design Could Let an Infiltrator Add Members to Group Chats [Updated]

What’s Slack Doing With Your Data?

More than six million people use Slack daily, spending on average more than two hours each day inside the chat app. For many employees, work life is contingent on Slack, and surely plenty of us use it for more than just, say, work talk. You probably have a #CATS and a women-only channel, and you’ve probably said something privately that you wouldn’t want shared with your boss. But that’s not really up to you.

When you want to have an intimate or contentious chat, you might send a direct message. Or perhaps you and a few others have started a private channel, ensuring that whatever you say is only seen by a handful of people. This may feel like a closed circuit between you and another person—or small group of people—but that space and the little lock symbol aren’t actually emblematic of complete privacy.

Do Slack employees have access to your chats? The short answer is: sort of. The long answer is… below. Can your company peek at your private DMs? It’s entirely possible. Slack’s FAQ pages help elucidate some of these concerns, but at times the answers are frustratingly vague and difficult to navigate. So we dug into it for you. Read more to find out what Slack—and your company—is actually doing with your data.

Source: What’s Slack Doing With Your Data?

The short version:
Yes, some Slack employees can view your data. Workspace owners can see everything in a channel, including direct messages. Slack hands your data to law enforcement on request and won't inform you. It doesn't (and says it won't) sell it to third parties. Deletion really is deletion. And Slack, like any other company, can be hacked. Caveat emptor.

Wall Street Analysts Are Embarrassingly Bad At Predicting The Future, Study Finds

The researchers looked at a database of long-term growth forecasts made for all domestic companies listed on a major stock exchange. The forecasts are made in December each year, and predict how well a company’s stocks will do over the next three to five years. From 1981 to 2016, they found that the top 10 percent of stocks analysts were most hopeful about generally had poorer growth than the 10 percent of stocks they were most pessimistic about.

The paper found that investing in the stocks analysts were most pessimistic about in a given year would have yielded an average 15 percent in extra returns (in stock terms, a profit) the following year, compared to a 3 percent return from investing in the predicted champs.
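The study's method boils down to a decile sort: rank stocks by analyst forecasts, then compare next-period returns of the most- and least-favored tenths. A toy backtest of that idea on synthetic data — the numbers and the negative forecast tilt are invented to mimic the reported pattern, not drawn from the paper's dataset:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic universe: 1000 stocks with made-up analyst optimism scores.
n_stocks = 1000
forecasts = rng.normal(size=n_stocks)

# Invented next-year returns: noise plus a small *negative* tilt against
# the forecasts, echoing the study's finding that favorites underperform.
returns = rng.normal(0.05, 0.2, n_stocks) - 0.05 * forecasts

# Sort by forecast and compare the extreme deciles.
order = np.argsort(forecasts)
decile = n_stocks // 10
pessimistic = returns[order[:decile]].mean()   # bottom 10% of forecasts
optimistic = returns[order[-decile:]].mean()   # top 10% of forecasts
```

On this fabricated data the pessimistic decile beats the optimistic one by construction; the study's striking result is that real 1981–2016 data shows the same ordering.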

The study, though it hasn’t yet been published in a peer-reviewed journal, is in fact merely an update of a classic study published in 1996, which found a similarly stark contrast. Nor is it the only line of research to find a clear gap between analysts’ professed stock expectations and reality. So the results aren’t exactly surprising.

Source: Wall Street Analysts Are Embarrassingly Bad At Predicting The Future, Study Finds

Stop us if you’ve heard this one: Apple’s password protection in macOS can be thwarted

An Apple developer has uncovered another embarrassing vulnerability in macOS High Sierra, aka version 10.13, that lets someone bypass part of the operating system’s password protections.

This time, a vulnerable dialog box was found in the System Preferences panel for the App Store settings. The bug, reported by developer Eric Holtam to the Open Radar bug tracker, has since been verified by Mac-toting netizens.

The bug allows a user logged in with admin rights (this is important to note) to get around the password requirement when making changes in the App Store settings panel. Open the App Store settings panel, click on the padlock to make changes, a password prompt pops up, type in any string of text, and the “password” is accepted, unlocking the preferences panel.

Aaron Lint, veep of research at infosec biz Arxan, claimed the trick can also be used to bypass the login requirements for some other settings panels as well, but not the important “Users and Groups” and “Security and Privacy” controls.

Source: Stop us if you’ve heard this one: Apple’s password protection in macOS can be thwarted • The Register

Violating a Website’s Terms of Service Is Not a Crime, Federal Court Rules

The federal court of appeals heeded EFF’s advice and rejected an attempt by Oracle to hold a company criminally liable for accessing Oracle’s website in a manner it didn’t like. The court ruled back in 2012 that merely violating a website’s terms of use is not a crime under the federal computer crime statute, the Computer Fraud and Abuse Act. But some companies, like Oracle, turned to state computer crime statutes — in this case, California and Nevada — to enforce their computer use preferences. This decision shores up the good precedent from 2012 and makes clear — if it wasn’t clear already — that violating a corporate computer use policy is not a crime.

Source: Violating a Website’s Terms of Service Is Not a Crime, Federal Court Rules – Slashdot

Boffins tweak audio by 0.1% to fool speech recognition engines

A paper by Nicholas Carlini and David Wagner of the University of California, Berkeley has shown off a technique to trick speech recognition by changing the source waveform by 0.1 per cent.

The pair wrote at arXiv that their attack achieved a first: not merely an attack that made a speech recognition (SR) engine fail, but one that returned a result chosen by the attacker.

In other words, because the attack waveform is 99.9 per cent identical to the original, a human wouldn’t notice what’s wrong with a recording of “it was the best of times, it was the worst of times”, but an AI could be tricked into transcribing it as something else entirely: the authors say it could produce “it is a truth universally acknowledged that a single” from a slightly altered sample.

It works every single time: the pair claimed a 100 per cent success rate for their attack, and frighteningly, an attacker can even hide a target waveform in what (to the observer) appears to be silence.
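The core mechanic of a targeted attack like this is gradient descent on a small additive perturbation, optimized until the model outputs the attacker's chosen label. A toy demonstration on a linear softmax "recognizer" — real SR models are vastly more complex, and this demo's distortion budget is far looser than the paper's 0.1 per cent; everything here (weights, signal, classes) is synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a speech recognizer: a frozen linear softmax classifier
# over a raw "waveform" of 256 samples, choosing among 10 transcriptions.
n_samples, n_classes = 256, 10
W = rng.normal(size=(n_classes, n_samples))   # invented model weights
x = rng.normal(size=n_samples)                # invented original audio

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

scores = W @ x
original = int(np.argmax(scores))
target = int(np.argsort(scores)[-2])          # attacker-chosen output class

# Projected gradient descent on a small perturbation delta so the model
# transcribes x + delta as the target class instead of the original.
delta = np.zeros_like(x)
onehot = np.eye(n_classes)[target]
for _ in range(500):
    p = softmax(W @ (x + delta))
    grad = W.T @ (p - onehot)                 # d(cross-entropy)/d(input)
    delta -= 0.01 * grad
    delta = np.clip(delta, -0.1, 0.1)         # keep the perturbation small

adversarial = int(np.argmax(W @ (x + delta)))
distortion = np.linalg.norm(delta) / np.linalg.norm(x)
```

The perturbation stays a small fraction of the signal's amplitude yet flips the transcription to the chosen target — the same principle, at toy scale, as hiding a target waveform in near-silence.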

Source: Boffins tweak audio by 0.1% to fool speech recognition engines • The Register