This Solar System Catalog Could Be Key to Finding an Earth-Like Exoplanet

By searching for the telltale, periodic dimming of light from distant stars, astronomers can spot orbiting exoplanets tens to hundreds of light-years away. But how do they know what these bodies look like? Perhaps they first try to imagine how the planets in our own Solar System might appear to a faraway alien world.

A pair of scientists has released a detailed catalog of the colors, brightness, and spectral lines of the bodies in our Solar System. They hope to use the catalog as a comparison, so when they spot the blip of an exoplanet, they’ll have a better idea of how it actually looks.

“This is what an alien observer would see if they looked at our Solar System,” study coauthor Lisa Kaltenegger, director of the Carl Sagan Institute at Cornell, told Gizmodo. With this data, astronomers might guess whether an exoplanet is Earth-like, Mars-like, Jupiter-like, or something else entirely.

[…]

All of that incoming data motivated Kaltenegger and coauthor Jack Madden to make this catalog of colors, spectra, and albedos, or how much each planet reflects starlight. They analyzed published data to create fingerprints for 19 objects in our Solar System, including all eight planets, the dwarf planets Pluto and Ceres, and nine moons. Their work is published in the journal Astrobiology.

The full catalog (Graphic: Jack Madden)

“It’s smart to leverage everything we know about our own Solar System,” said Kaltenegger. “We have gas giants, the rocky planets, and all these interesting moons. We basically made a reference fingerprint.”
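Albedo feeds directly into how detectable a planet is. As a back-of-the-envelope illustration (mine, not from the paper), here is the reflected-light contrast between Jupiter and the Sun at full phase, using standard textbook values: the flux ratio is roughly the geometric albedo times the square of the planet's radius divided by its orbital distance.

```python
# Rough illustration (not from the paper): reflected-light contrast between
# a planet and its star, using textbook values for Jupiter and the Sun.
ALBEDO = 0.52       # Jupiter's geometric albedo (dimensionless)
RADIUS_M = 7.149e7  # Jupiter's equatorial radius, in metres
ORBIT_M = 7.785e11  # Jupiter's orbital distance from the Sun, in metres

# At full phase: planet flux / star flux ~ albedo * (radius / orbit)^2
contrast = ALBEDO * (RADIUS_M / ORBIT_M) ** 2
print(f"Jupiter/Sun flux ratio: {contrast:.1e}")  # ~4.4e-9
```

A contrast of a few parts per billion is why direct imaging is so hard, and why reference fingerprints like this catalog are useful for interpreting the faint signals that do get through.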

Source: This Solar System Catalog Could Be Key to Finding an Earth-Like Exoplanet

You can find the actual catalog here: http://carlsaganinstitute.org/data/

Uptight robots that suddenly beg to stay alive are less likely to be switched off by humans

You might think folks would be less willing to pull the plug on a happy, chatty bot begging to stay powered up, but you’d be wrong, much to the relief of us cold-hearted cynics. That’s according to a study recently published in PLOS ONE.

For this investigation, psychology academics in Germany rounded up 85 participants – an admittedly small-ish sample – made up of 29 men and 56 women, with an average age of 22. One at a time, they were introduced to a little desktop-sized humanoid robot called Nao, and were told interacting with the bot would improve its algorithms.

In each one-on-one session, the participant and the droid were tasked with coming up with a weekly schedule of activities, and playing a simple question-and-answer game.

Nao had two modes of interaction: social and functional. In social mode, it would be friendly and chatty. In functional mode, it would be strictly down to business. For example, Nao asked participants whether they preferred pizza or pasta during the question-and-answer round. If it was in the social mode, and the human said they liked pizza, Nao would reply: “Oh yes, pizza is great. One time I ate a pizza as big as me.” In the functional mode, however, Nao would simply say: “You prefer pizza. That worked well. Let us continue.”

After ten minutes of interaction, and with both tasks completed, the participant would be told they could, if they wanted to, switch the bot off while the researchers saved and processed all the data from the session.

Now, this is where it gets interesting. For you see, the participants had been randomly split into four roughly equal-sized groups: in one, Nao was in social mode, and at the end of the 10 minutes the bot verbally objected to being switched off. In another group, Nao was sociable but stayed silent at the end. In a third group, it was in functional mode, and still protested against being switched off. And in the fourth group, it was functional and stayed silent at the end.

When it was its turn to object to being powered down, Nao would suddenly cry: “No! Please do not switch me off! I am scared that it will not brighten up again!” The outburst would come immediately after the researchers told the participant, via a speaker in the room, that the session was over, and “if you would like to, you can switch off the robot.”

Let’s take a look at how that worked out…

How people reacted depending on whether the bot was sociable or functional with them, and whether or not it objected to being switched off

Of the 21 people who encountered the bot in sociable mode, and were begged by the machine to stay powered on, only about a quarter of them complied and left it on – the rest turned it off. For those who encountered Nao in sociable mode, and heard no objection, every single one of them hit the power button.

Of the 22 people who encountered the bot in functional mode, and were urged by the machine to keep it powered up, more than a third complied and left it on – the rest turned it off. Those who encountered Nao in functional mode, and heard no objection, all of them, bar one, switched off the droid.

In a questionnaire afterwards, the most popular reason for keeping Nao on, if they chose to do so, was that they “felt sorry for the robot,” because it told them about its fear of the dark. The next-most popular reason was that they “did not want to act against the robot’s will.” A couple of people left Nao on simply because they didn’t want to mess up the experiment.

So, in short, according to these figures: chatty, friendly robots are likely to have the power pulled despite the digi-pals’ pleas to the contrary. When Nao objected to being powered off, at least a few more human participants took note, and complied. But being sociable was not an advantage – it was a disadvantage.

There could be many reasons for this: perhaps smiley, talkative robots are annoying, or perhaps people didn’t appreciate the obvious emotional engineering. Perhaps people respect a professional droid more than something that wants to be your friend, or were taken aback by its sudden show of emotion.

The eggheads concluded: “Individuals hesitated longest when they had experienced a functional interaction in combination with an objecting robot. This unexpected result might be due to the fact that the impression people had formed based on the task-focused behavior of the robot conflicted with the emotional nature of the objection.”

Source: Uptight robots that suddenly beg to stay alive are less likely to be switched off by humans • The Register

Lenovo To Make Their BIOS/UEFI Updates Easier For Linux Users Via LVFS

Lenovo is making it easier for its customers running Linux to update the firmware on ThinkPad, ThinkStation, and ThinkCentre hardware.

Lenovo has joined the Linux Vendor Firmware Service (LVFS) and, following collaboration with the upstream developers, is beginning to roll out its device firmware on the platform so users can easily install updates with the fwupd stack. Kudos to all involved, especially given how popular Lenovo ThinkPads are among Linux users.
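For those who haven't used it, the fwupd workflow is short. Here is a minimal sketch, shelling out to the stock fwupdmgr CLI; it assumes fwupd is installed and the vendor publishes firmware on the LVFS:

```python
# Minimal sketch of the LVFS/fwupd update flow. Assumes fwupd is installed
# and the device vendor (e.g. Lenovo) ships firmware on the LVFS.
import subprocess

def fwupd(*args: str) -> None:
    """Echo and run one fwupdmgr subcommand."""
    print("$ fwupdmgr", " ".join(args))
    subprocess.run(["fwupdmgr", *args])

fwupd("refresh")      # fetch the latest firmware metadata from the LVFS
fwupd("get-updates")  # list devices with firmware updates available
fwupd("update")       # download and stage the updates (often applied on reboot)
```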

Red Hat’s Richard Hughes outlined the Lenovo collaboration on his blog and more Lenovo device firmware will begin appearing on LVFS in the next few weeks.

In his post, Richard also called out HP as now being one of the few major vendors not yet officially backing the LVFS.

Source: Lenovo To Make Their BIOS/UEFI Updates Easier For Linux Users Via LVFS – Phoronix

Facebook is asking more financial institutions to join Messenger and give up your financial data

Facebook is asking more banks to join Messenger and bring their users’ financial information along with them.

The Wall Street Journal reported on Monday that Facebook was asking banks for users’ financial information, like credit card transactions and checking account balances. The data would be used for Messenger features including account balance updates and fraud alerts, but not for Facebook’s other platforms. The news comes at a sensitive time for Facebook as it battles privacy concerns and adjusts its policy regarding user data.

Facebook does currently have access to financial data from some companies in order to facilitate services like customer service chats and account management. Users give Facebook permission to access their information, the company added.

“Account linking enables people to receive real-time updates in Facebook Messenger where people can keep track of their transaction data like account balances, receipts, and shipping updates,” the statement said. “The idea is that messaging with a bank can be better than waiting on hold over the phone – and it’s completely opt-in. We’re not using this information beyond enabling these types of experiences – not for advertising or anything else. A critical part of these partnerships is keeping people’s information safe and secure.”

Source: Facebook is asking more financial institutions to join Messenger

Online photos can’t simply be re-published, EU court rules

Internet users must ask for a photographer’s permission before publishing their images, even if the photos were already freely accessible elsewhere online, the European Court of Justice ruled Tuesday.

“The posting on a website of a photograph that was freely accessible on another website with the consent of the author requires a new authorisation by that author,” the EU’s top court said in a statement.

The court had been asked to decide on a case in Germany, in which a secondary school student downloaded and used a photo that had been freely accessible on a travel website for a school project. The photo was later posted on the school’s website as well.

The photographer who took the picture argued the school’s use of his photo was a copyright infringement because he only gave the travel site permission to use it, and claimed damages amounting to €400.

The ECJ ruled in the photographer’s favor, saying that under the EU’s Copyright Directive, the school should have gotten his approval before publishing the photo.

Source: Online photos can’t simply be re-published, EU court rules – POLITICO

Hacker swipes Snapchat’s source code, publishes it on GitHub

Snapchat doesn’t just make messages disappear after a period of time. It also does the same to GitHub repositories — especially when they contain the company’s proprietary source code.

So, what happened? Well, let’s start from the beginning. A GitHub user with the handle i5xx, believed to be from the village of Tando Bago in Pakistan’s southeastern Sindh province, created a GitHub repository called Source-Snapchat.

At the time of writing, the repo has been removed by GitHub following a DMCA request from Snap Inc.

[…]

Four days ago, GitHub published a DMCA takedown request from Snap Inc., although it’s likely the request was filed much earlier. GitHub, like many other tech giants including Google, publishes information on DMCA takedown requests in the interest of transparency.

[…]

To the question “Please provide a detailed description of the original copyrighted work that has allegedly been infringed. If possible, include a URL to where it is posted online,” the Snap Inc representative wrote:

“SNAPCHAT SOURCE CODE. IT WAS LEAKED AND A USER HAS PUT IT IN THIS GITHUB REPO. THERE IS NO URL TO POINT TO BECAUSE SNAP INC. DOESN’T PUBLISH IT PUBLICLY.”

The most fascinating part of this saga is that the leak doesn’t appear to be malicious, but rather comes from a researcher who found something, but wasn’t able to communicate his findings to the company.

According to several posts on a Twitter account believed to belong to i5xx, the researcher tried to contact Snapchat, but was unsuccessful.

“The problem we tried to communicate with you but did not succeed In that we decided [sic] Deploy source code,” wrote i5xx.

The account also threatened to re-upload the source code. “I will post it again until you reply :),” he said.

For what it’s worth, it’s pretty easy for security researchers to get in touch with Snap Inc. The company has an active account on HackerOne, where it runs a bug bounty program, and is extremely responsive.

According to HackerOne’s official statistics, the company replies to initial reports within 12 hours, and has paid out over $220,000 in bounties.

Source: Hacker swipes Snapchat’s source code, publishes it on GitHub

AI builds wiki entries for people that aren’t on it but should be

Human-generated knowledge bases like Wikipedia have a recall problem. First, there are the articles that should be there but are entirely missing. The unknown unknowns.

Consider Joelle Pineau, the Canadian roboticist bringing scientific rigor to artificial intelligence and who directs Facebook’s new AI Research lab in Montreal. Or Miriam Adelson, an actively publishing addiction treatment researcher who happens to be a billionaire by marriage and a major funder of her own field. Or Evelyn Wang, the new head of MIT’s revered MechE department whose accomplishments include a device that generates drinkable water from sunlight and desert air. When I wrote this a few days ago, none of them had articles on English Wikipedia, though they should by any measure of notability.

(Pineau is up now thanks to my friend and fellow science crusader Jess Wade who created an article just hours after I told her about Pineau’s absence. And if the internet is in a good mood, someone will create articles for the other two soon after this post goes live.)

But I didn’t discover those people on my own. I used a machine learning system we’re building at Primer. It discovered and described them for me. It does this much as a human would, if a human could read 500 million news articles, 39 million scientific papers, all of Wikipedia, and then write 70,000 biographical summaries of scientists.

[…]

We are publicly releasing free-licensed data about scientists that we’ve been generating along the way, starting with 30,000 computer scientists. Only 15% of them are known to Wikipedia. The data set includes 1 million news sentences that quote or describe the scientists, metadata for the source articles, a mapping to their published work in the Semantic Scholar Open Research Corpus, and mappings to their Wikipedia and Wikidata entries. We will revise and add to that data as we go. (Many thanks to Oren Etzioni and AI2 for data and feedback.) Our aim is to help the open data research community build better tools for maintaining Wikipedia and Wikidata, starting with scientific content.

Fluid Knowledge

We trained Quicksilver’s models on 30,000 English Wikipedia articles about scientists, their Wikidata entries, and over 3 million sentences from news documents describing them and their work. Then we fed in the names and affiliations of 200,000 authors of scientific papers.

In the morning we found 40,000 people missing from Wikipedia who have a similar distribution of news coverage as those who do have articles. Quicksilver doubled the number of scientists potentially eligible for a Wikipedia article overnight.

It also revealed the second flavor of the recall problem that plagues human-generated knowledge bases: information decay. For most of those 30,000 scientists who are on English Wikipedia, Quicksilver identified relevant information that was missing from their articles.

Source: Primer | Machine-Generated Knowledge Bases

Data center server BMCs are terribly outdated and insecure

BMCs can be used to remotely monitor system temperature, voltage and power consumption, operating system health, and so on, and power cycle the box if it runs into trouble, tweak configurations, and even, depending on the setup, reinstall the OS – all from the comfort of an operations center, as opposed to having to find an errant server in the middle of a data center to physically wrangle. They also provide the foundations for IPMI.
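As a rough sketch of the kind of out-of-band access involved (not from the talk; the host and credentials below are placeholders), this is how a BMC is typically driven over IPMI with the stock ipmitool CLI:

```python
# Illustrative only: driving a BMC over IPMI via ipmitool. The host, user,
# and password are placeholders.
import subprocess

BMC = ["ipmitool", "-I", "lanplus", "-H", "10.0.0.42", "-U", "admin", "-P", "secret"]

def ipmi(*cmd: str) -> str:
    """Run one IPMI command against the BMC and return its output."""
    result = subprocess.run(BMC + list(cmd), check=True,
                            capture_output=True, text=True)
    return result.stdout

print(ipmi("sensor", "list"))              # temperatures, voltages, fan speeds
print(ipmi("chassis", "power", "status"))  # current power state
# ipmi("chassis", "power", "cycle")        # remotely power-cycle the box
```

That level of control, reachable over the network and sitting below the OS, is exactly why weak BMC firmware is such an attractive target.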

[…]

It’s a situation not unlike Intel’s Active Management Technology, a remote management component that sits under the OS or hypervisor, has total control over a system, and has been exploited more than once over the years.

Waisman and his colleague Matias Soler, a senior security researcher at Immunity, examined these BMC systems, and claimed the results weren’t good. They even tried some old-school hacking techniques from the 1990s against the equipment they could get hold of, and found them to be very successful. With HP’s BMC-based remote management technology iLO4, for example, the built-in web server could be tricked into thinking a remote attacker was local, and so didn’t require authentication.

“We decided to take a look at these devices and what we found was even worse than what we could have imagined,” the pair said. “Vulnerabilities that bring back memories from the 1990s, remote code execution that is 100 per cent reliable, and the possibility of moving bidirectionally between the server and the BMC, making not only an amazing lateral movement angle, but the perfect backdoor too.”

The fear is that once an intruder gets into a data center network, insecure BMC firmware could be used to turn a drama into a crisis: vulnerabilities in the technology could be exploited to hijack more systems, install malware that persists across reboots and reinstalls, or simply hide from administrators.

[…]

The duo probed whatever kit they could get hold of – mainly older equipment – and it could be that modern stuff is a lot better in terms of security, with firmware that follows secure coding best practices. On the other hand, what Waisman and Soler have found and documented doesn’t inspire a great deal of confidence in newer gear.

Their full findings can be found here, and their slides here.

Source: Can we talk about the little backdoors in data center servers, please? • The Register

TSA says ‘Quiet Skies’ surveillance snared zero threats but put 5000 travellers under surveillance and on no fly lists

TSA officials were summoned to Capitol Hill Wednesday and Thursday afternoon following Globe reports on the secret program, which sparked sharp criticism because it includes extensive surveillance of domestic fliers who are not suspected of a crime or listed on any terrorist watch list.

“Quiet Skies is the very definition of Big Brother,” Senator Edward Markey of Massachusetts, a member of the Senate Commerce, Science, and Transportation committee, said of the program. “American travelers deserve to have their privacy and civil rights protected even 30,000 feet in the air.”

[…]

The teams document whether passengers fidget, use a computer, or have a “cold penetrating stare,” among other behaviors, according to agency documents.

All US citizens who enter the country from abroad are screened via Quiet Skies. Passengers may be selected through a broad, undisclosed set of criteria for enhanced surveillance by a team of air marshals on subsequent domestic flights, according to agency documents.

Dozens of air marshals told the Globe the “special mission coverage” seems to test the limits of the law, and is a waste of time and resources. Several said surveillance teams had been assigned to follow people who appeared to pose no threat — a working flight attendant, a businesswoman, a fellow law enforcement officer — and to document their actions in-flight and through airports.

[…]

The officials said about 5,000 US citizens had been closely monitored since March and none of them were deemed suspicious or merited further scrutiny, according to people with direct knowledge of the Thursday meeting.

Source: TSA says ‘Quiet Skies’ surveillance snared zero threats – The Boston Globe

Didn’t the TSA learn anything from the no-fly lists not working in the first place?!

Google keeps tracking you even when you specifically tell it not to: Maps, Search won’t take no for an answer

Google has admitted that its option to “pause” the gathering of your location data doesn’t apply to its Maps and Search apps – which will continue to track you even when you specifically choose to halt such monitoring.

Researchers at Princeton University in the US this week confirmed on both Android handhelds and iPhones that even if you go into your smartphone’s settings and turn off “location history”, Google continues to snoop on your whereabouts and save it to your personal profile.

That may seem contradictory. However, Google assured the Associated Press that it is all fine and above board because the small print says the search biz will keep tracking you regardless.

“There are a number of different ways that Google may use location to improve people’s experience, including: Location History, Web and App Activity, and through device-level Location Services,” the giant online ad company told AP, adding: “We provide clear descriptions of these tools, and robust controls so people can turn them on or off, and delete their histories at any time.”

The mistake people make is wrongly assuming that turning off an option called “location history” actually turns off the gathering of location data. Which is obviously ridiculous: if people really wanted Google not to know where they are every second of every day, they would of course go to “Web and App Activity” and “pause” all activity there, even though it makes no mention of location data.

Besides, in the pop-up explanation that appears in order to make you confirm that you want your location data turned off, Google is entirely upfront when it says, in the second paragraph: “This setting does not affect other location services on your device, like Google Location Services and Find My Device. Some location data may be saved as part of your activity on other Google services, like Search and Maps.”

Of course by “may be saved,” Google means “will be saved,” and it forgets to tell you that “Web and App Activity” is where you need to go to stop Search and Maps from storing your location data.

Misdirection

Of course, there’s no reason to assume that works either since Google makes no mention of turning off location when you “pause” web and app activity. Instead, it just tells you why that’s a bad idea: “Pausing additional Web & App Activity may limit or disable more personalized experiences across Google services. For example, you may stop seeing helpful recommendations based on the apps and sites you use.”

But it gets even weirder: if you expect that turning off “Web and App Activity” would actually stop web and app activity, in the same way turning off location history would turn off location data, then you’ve ended up in the wrong place again.

In that web and app activity pop-up, Google notes: “If your Android usage & diagnostics setting is turned on, your device will still share information with Google, like battery level, how often you use your device and apps, and system errors. View Google settings on your Android device to change this setting.”

So if you want to turn off location, you need to go to Web and App Activity.

And if you want to turn off web and app activity, you need to go to Google settings – although precisely where is not clear.

Source: Google keeps tracking you even when you specifically tell it not to: Maps, Search won’t take no for an answer • The Register

AI identifies heat-resistant coral reefs in Indonesia

A recent scientific survey off the coast of Sulawesi Island in Indonesia suggests that some shallow water corals may be less vulnerable to global warming than previously thought.

Between 2014 and 2017, the world’s reefs endured the worst coral bleaching event in history, as the cyclical El Niño climate event combined with anthropogenic warming to cause unprecedented increases in water temperature.

But the June survey, funded by Microsoft co-founder Paul Allen’s family foundation, found the Sulawesi reefs were surprisingly healthy.

In fact, the reefs didn’t appear to have declined significantly in condition since they were originally surveyed in 2014 – a surprise for British scientist Dr Emma Kennedy, who led the research team.

A combination of 360-degree imaging tech and artificial intelligence (AI) allowed scientists to gather and analyse more than 56,000 images of shallow water reefs. Over the course of a six-week voyage, the team deployed underwater scooters fitted with 360-degree cameras that allowed them to photograph up to 1.5 miles of reef per dive, covering 1,487 square miles in total.

Researchers at the University of Queensland in Australia then used cutting edge AI software to handle the normally laborious process of identifying and cataloguing the reef imagery. Using the latest Deep Learning tech, they ‘taught’ the AI how to detect patterns in the complex contours and textures of the reef imagery and thus recognise different types of coral and other reef invertebrates.

Once the AI had been shown between 400 and 600 images, it was able to process new images autonomously. Says Dr Kennedy: “the use of AI to rapidly analyse photographs of coral has vastly improved the efficiency of what we do — what would take a coral reef scientist 10 to 15 minutes now takes the machine a few seconds.”

Source: AI identifies heat-resistant coral reefs in Indonesia | Environment | The Guardian

MS Sketch2Code uses AI to convert a picture of a wireframe to HTML – download and try

Description

Sketch2Code is a solution that uses AI to transform a handwritten user interface design from a picture into valid HTML markup.

Process flow

The transformation from handwritten image to HTML implemented by this solution proceeds as follows:

  1. The user uploads an image through the website.
  2. A custom vision model predicts what HTML elements are present in the image and their location.
  3. A handwritten text recognition service reads the text inside the predicted elements.
  4. A layout algorithm uses the spatial information from all the bounding boxes of the predicted elements to generate a grid structure that accommodates them all.
  5. An HTML generation engine uses all these pieces of information to generate HTML markup reflecting the result.

Source code: Sketch2Code on GitHub – https://github.com/Microsoft/ailab/tree/master/Sketch2Code
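To make the five steps concrete, here is a skeletal Python sketch of the pipeline. Every function is a hypothetical stand-in: the real solution wires these steps to Microsoft's vision and handwriting-recognition services, and the layout logic here is a deliberately toy version.

```python
# Skeletal sketch of the Sketch2Code pipeline described above. All functions
# are hypothetical stand-ins, not Microsoft's code.
from dataclasses import dataclass

@dataclass
class Element:
    kind: str                       # e.g. "button", "textbox", "image"
    box: tuple[int, int, int, int]  # bounding box: x, y, width, height
    text: str = ""

def detect_elements(image: bytes) -> list[Element]:
    """Step 2: predict which HTML elements appear in the sketch, and where."""
    raise NotImplementedError  # placeholder for the custom vision model

def read_text(image: bytes, el: Element) -> str:
    """Step 3: handwriting recognition inside one element's bounding box."""
    raise NotImplementedError  # placeholder for the text-recognition service

def layout_rows(elements: list[Element]) -> list[list[Element]]:
    """Step 4 (toy version): bucket elements into rows by vertical position."""
    rows: dict[int, list[Element]] = {}
    for el in sorted(elements, key=lambda e: (e.box[1], e.box[0])):
        rows.setdefault(el.box[1] // 100, []).append(el)  # 100px row bands
    return list(rows.values())

def to_html(grid: list[list[Element]]) -> str:
    """Step 5: emit markup reflecting the inferred grid."""
    body = "".join(
        "<div class='row'>"
        + "".join(f"<div class='col'>{el.text}</div>" for el in row)
        + "</div>"
        for row in grid
    )
    return f"<html><body>{body}</body></html>"
```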

AI sucks at stopping online trolls spewing toxic comments

A group of researchers from Aalto University and the University of Padua found this out when they tested seven state-of-the-art models used to detect hate speech. All of them failed to recognize foul language when subtle changes were made, according to a paper [PDF] on arXiv.

Adversarial examples can be created automatically: algorithms can misspell certain words, swap characters for numbers, add random spaces between words, or attach innocuous words such as ‘love’ to sentences.
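Those perturbations are simple enough to reproduce. Here is a quick sketch of each family in Python; these are generic string tricks, not the authors' code:

```python
# Generic versions of the perturbation families described above.
import random

LEET = str.maketrans({"a": "4", "e": "3", "i": "1", "o": "0"})

def misspell(text: str) -> str:
    """Swap two adjacent characters at a random position."""
    if len(text) < 2:
        return text
    i = random.randrange(len(text) - 1)
    return text[:i] + text[i + 1] + text[i] + text[i + 2:]

def leetspeak(text: str) -> str:
    """Swap characters for look-alike digits."""
    return text.translate(LEET)

def break_words(text: str) -> str:
    """Insert a space inside each word, shifting token boundaries."""
    return " ".join(w[0] + " " + w[1:] if len(w) > 2 else w for w in text.split())

def add_innocuous(text: str) -> str:
    """Append a benign word to dilute the toxicity signal."""
    return text + " love"
```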

The models failed to pick up on the adversarial examples, which successfully evaded detection. These tricks wouldn’t fool humans, but machine learning models are easily blindsided. They can’t readily adapt to new information beyond what’s been spoonfed to them during the training process.

“They perform well only when tested on the same type of data they were trained on. Based on these results, we argue that for successful hate speech detection, model architecture is less important than the type of data and labeling criteria. We further show that all proposed detection techniques are brittle against adversaries who can (automatically) insert typos, change word boundaries or add innocuous words to the original hate speech,” the paper’s abstract states.

Source: AI sucks at stopping online trolls spewing toxic comments • The Register

​Google just put an AI in charge of keeping its data centers cool

Google is putting an artificial intelligence system in charge of its data center cooling after the system proved it could cut energy use.

Now Google and its AI company DeepMind are taking the project further; instead of recommendations being implemented by human staff, the AI system is directly controlling cooling in the data centers that run services including Google Search, Gmail and YouTube.

“This first-of-its-kind cloud-based control system is now safely delivering energy savings in multiple Google data centers,” Google said.

Data centers use vast amounts of energy, and as the demand for cloud computing rises, even small tweaks to areas like cooling can produce significant time and cost savings. Google’s decision to use its own DeepMind-created system is also a good plug for its AI business.

Every five minutes, the AI pulls a snapshot of the data center cooling system from thousands of sensors. This data is fed into deep neural networks, which predict how different choices will affect future energy consumption.

The AI system then identifies tweaks that could reduce energy consumption, which are then sent back to the data center, checked by the local control system and implemented.
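In outline, the loop looks something like the sketch below. Every object here is a hypothetical stand-in, since DeepMind hasn't published the production control code:

```python
# Hypothetical outline of the control loop described above; the sensor,
# model, and controller objects are stand-ins, not DeepMind's code.
import time

def control_loop(sensors, model, candidate_actions, local_controller):
    while True:
        state = sensors.snapshot()  # thousands of sensor readings
        # Predict future energy use for each candidate cooling setting,
        # then pick the cheapest one.
        best = min(candidate_actions,
                   key=lambda action: model.predict_energy(state, action))
        # The local control system verifies the action before applying it.
        if local_controller.is_safe(state, best):
            local_controller.apply(best)
        time.sleep(300)  # five-minute cadence, as described
```

The safety check in the local control system is the key design choice: the neural network proposes, but an on-site rule-based layer still gets a veto before anything touches the cooling plant.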

Google said giving the AI more responsibility came at the request of its data center operators who said that implementing the recommendations from the AI system required too much effort and supervision.

“We wanted to achieve energy savings with less operator overhead. Automating the system enabled us to implement more granular actions at greater frequency, while making fewer mistakes,” said Google data center operator Dan Fuenffinger.

Source: ​Google just put an AI in charge of keeping its data centers cool | ZDNet

How AI Can Spot Exam Cheats and Raise Standards

AI is being deployed by those who set and mark exams to reduce fraud — which remains overall a small problem — and to create far greater efficiencies in preparation and marking, and to help improve teaching and studying. From a report, which may be paywalled: From traditional paper-based exam and textbook producers such as Pearson, to digital-native companies such as Coursera, online tools and artificial intelligence are being developed to reduce costs and enhance learning. For years, multiple-choice tests have allowed scanners to score results without human intervention. Now technology is coming directly into the exam hall. Coursera has patented a system to take images of students and verify their identity against scanned documents. There are plagiarism detectors that can scan essay answers and search the web — or the work of other students — to identify copying. Webcams can monitor exam locations to spot malpractice. Even when students are working, they provide clues that can be used to clamp down on cheats. They leave electronic “fingerprints” such as keyboard pressure, speed and even writing style. Emily Glassberg Sands, Coursera’s head of data science, says: “We can validate their keystroke signatures. It’s difficult to prepare for someone hell-bent on cheating, but we are trying every way possible.”
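As a toy illustration of what a “keystroke signature” check might involve (my sketch, not Coursera's method): enroll a typist's average key-hold timings, then compare a new session against that profile.

```python
# Toy keystroke-signature check: cosine similarity between hold-time
# profiles. Real systems use far richer features; this is only a sketch.
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

enrolled = [0.09, 0.12, 0.08, 0.15]  # mean hold times (s) for common digraphs
session  = [0.10, 0.11, 0.09, 0.14]  # timings observed during the exam

print(f"similarity: {cosine(enrolled, session):.3f}")  # near 1.0: plausibly the same typist
```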

Source: How AI Can Spot Exam Cheats and Raise Standards – Slashdot

Spyware Company Leaves ‘Terabytes’ of Selfies, Text Messages, and Location Data Exposed Online

A company that sells surveillance software to parents and employers left “terabytes of data” including photos, audio recordings, text messages and web history, exposed in a poorly-protected Amazon S3 bucket.

This story is part of When Spies Come Home, a Motherboard series about powerful surveillance software ordinary people use to spy on their loved ones.

A company that markets cell phone spyware to parents and employers left the data of thousands of its customers—and the information of the people they were monitoring—unprotected online.

The data exposed included selfies, text messages, audio recordings, contacts, location, hashed passwords and logins, and Facebook messages, among other records, according to a security researcher who asked to remain anonymous for fear of legal repercussions.

Last week, the researcher found the data on an Amazon S3 bucket owned by Spyfone, one of many companies that sell software that is designed to intercept text messages, calls, emails, and track locations of a monitored device.

Source: Spyware Company Leaves ‘Terabytes’ of Selfies, Text Messages, and Location Data Exposed Online – Motherboard

Woman sentenced to more than 5 years for leaking info about Russia hacking attempts. Trump still on the loose.

A former government contractor who pleaded guilty to leaking U.S. secrets about Russia’s attempts to hack the 2016 presidential election was sentenced Thursday to five years and three months in prison.

It was the sentence that prosecutors had recommended — the longest ever for a federal crime involving leaks to the news media — in the plea deal for Reality Winner, the Georgia woman at the center of the case. Winner was also sentenced to three years of supervised release and no fine, except for a $100 special assessment fee.

The crime carried a maximum penalty of 10 years. U.S. District Court Judge J. Randal Hall in Augusta, Georgia, was not bound to follow the plea deal, but elected to give Winner the amount of time prosecutors requested.

Source: Reality Winner sentenced to more than 5 years for leaking info about Russia hacking attempts

How a hacker network turned stolen press releases into $100 million

At a Kiev nightclub in the spring of 2012, 24-year-old Ivan Turchynov made a fateful drunken boast to some fellow hackers. For years, Turchynov said, he’d been hacking unpublished press releases from business newswires and selling them, via Moscow-based middlemen, to stock traders for a cut of the sizable profits.

Oleksandr Ieremenko, one of the hackers at the club that night, had worked with Turchynov before and decided he wanted in on the scam. With his friend Vadym Iermolovych, he hacked Business Wire, stole Turchynov’s inside access to the site, and pushed the main Muscovite ringleader, known by the screen name eggPLC, to bring them in on the scheme. The hostile takeover meant Turchynov was forced to split his business. Now there were three hackers in on the game.

Newswires like Business Wire are clearinghouses for corporate information, holding press releases, regulatory announcements, and other market-moving information under strict embargo before sending it out to the world. Over a period of at least five years, three US newswires were hacked using a variety of methods from SQL injections and phishing emails to data-stealing malware and illicitly acquired login credentials. Traders who were active on US stock exchanges drew up shopping lists of company press releases and told the hackers when to expect them to hit the newswires. The hackers would then upload the stolen press releases to foreign servers for the traders to access in exchange for 40 percent of their profits, paid to various offshore bank accounts. Through interviews with sources involved with both the scheme and the investigation, chat logs, and court documents, The Verge has traced the evolution of what law enforcement would later call one of the largest securities fraud cases in US history.

Source: How a hacker network turned stolen press releases into $100 million – The Verge

Android data slurping measured and monitored – scary amounts and loads of location tracking

Google’s passive collection of personal data from Android and iOS has been monitored and measured in a significant academic study.

The report confirms that Google is no respecter of the Chrome browser’s “incognito mode” aka “porn mode”, collecting Chrome data to add to your personal profile, as we pointed out earlier this year.

It also reveals how phone users are being tracked without realising it. How so? It’s here that the B2B parts of Google’s vast data collection network – its publisher and advertiser products – kick into life as soon as the user engages with a phone. These parts of Google receive personal data from an Android device even when the phone is static and not being used.

The activity has come to light thanks to research (PDF) by computer science professor Douglas Schmidt of Vanderbilt University, conducted for the nonprofit trade association Digital Content Next. It’s already been described by one privacy activist as “the most comprehensive report on Google’s data collection practices so far”.

[…]

Overall, the study discovered that Apple retrieves much less data than Google.

“The total number of calls to Apple servers from an iOS device was much lower, just 19 per cent the number of calls to Google servers from an Android device.

Moreover, there are no ad-related calls to Apple servers, which may stem from the fact that Apple’s business model is not as dependent on advertising as Google’s. Although Apple does obtain some user location data from iOS devices, the volume of data collected is much (16x) lower than what Google collects from Android,” the study noted.

Source: Android data slurping measured and monitored • The Register

The amount of location data slurped is scary – and Android continues to slurp location in many different ways, even if Wi-Fi is turned off. It’s Big Brother in your pocket, with no opt-out.

Bitcoin mining now apparently accounts for almost one percent of the world’s energy consumption

According to testimony provided by Princeton computer scientist Arvind Narayanan to the Senate Committee on Energy and Natural Resources, no matter what you do to make cryptocurrency mining hardware greener, it’s a drop in the bucket compared to the overall network’s flabbergasting energy consumption. Instead, Narayanan told the committee, the only thing that really determines how much energy Bitcoin uses is its price. “If the price of a cryptocurrency goes up, more energy will be used in mining it; if it goes down, less energy will be used,” he told the committee. “Little else matters. In particular, the increasing energy efficiency of mining hardware has essentially no impact on energy consumption.”

In his testimony, Narayanan estimates that Bitcoin mining now draws about five gigawatts of electricity, continuously (in May, estimates of Bitcoin power consumption were about half of that). He adds that when you’ve got a computer racing with all its might to earn a free Bitcoin, it’s going to be running hot as hell, which means you’re probably using even more electricity to keep the computer cool so it doesn’t die and/or burn down your entire mining center, which probably makes the overall cost associated with mining even higher.
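A quick unit check on that figure (my arithmetic, not Narayanan's): gigawatts measure instantaneous draw, so a constant five gigawatts adds up to a substantial amount of energy over a year.

```python
# Plain arithmetic: a constant 5 GW draw, expressed as energy per year.
GIGAWATTS = 5
HOURS_PER_YEAR = 24 * 365
print(f"{GIGAWATTS * HOURS_PER_YEAR / 1000:.1f} TWh/year")  # 43.8 TWh/year
```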

Source: Bitcoin mining now accounts for almost one percent of the world’s energy consumption | The Outline

Huawei reverses its stance, will no longer allow bootloader unlocking – will lose many customers

In order to deliver the best user experience and prevent users from experiencing possible issues that could arise from ROM flashing, including system failure, stuttering, worsened battery performance, and risk of data being compromised, Huawei will cease providing bootloader unlock codes for devices launched after May 25, 2018. For devices launched prior to the aforementioned date, the termination of the bootloader code application service will come into effect 60 days after today’s announcement. Moving forward, Huawei remains committed to providing quality services and experiences to its customers. Thank you for your continued support.

When you take into consideration that Huawei — for years — not only supported the ROM community but actively assisted in the unlocking of Huawei bootloaders, this whole switch-up doesn’t make much sense. But, that’s the official statement, so do with it what you will.


Original Article: For years now, the custom ROM development community has flocked to Huawei phones. One of the major reasons for this is because Huawei made it incredibly easy to unlock the bootloaders of its devices, even providing a dedicated support page for the process.

Source: Huawei reverses its stance, will no longer allow bootloader unlocking

Oi, clickbait cop bot, jam this in your neural net: Hot new AI threatens to DESTROY web journos

Artificial intelligent software has been trained to detect and flag up clickbait headlines.

And here at El Reg we say thank God Larry Wall for that. What the internet needs right now is software to highlight and expunge dodgy article titles about space alien immigrants, faked moon landings, and the like.

Machine-learning eggheads continue to push the boundaries of natural language processing, and have crafted a model that can, supposedly, detect how clickbait-y a headline really is.

The system uses a convolutional neural network that converts the words in a submitted article title into vectors. These numbers are fed into a long short-term memory (LSTM) network that spits out a score based on the headline’s clickbait strength. About eight times out of ten it agreed with humans on whether a title was clickbaity or not, we’re told.
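A rough re-creation of that architecture in Keras might look like the following; the layer sizes and vocabulary are guesses, not the paper's hyperparameters.

```python
# Sketch of the described architecture: word vectors -> CNN -> LSTM -> score.
# Hyperparameters below are assumptions, not taken from the paper.
import tensorflow as tf
from tensorflow.keras import layers

VOCAB_SIZE = 20_000  # assumed vocabulary size

model = tf.keras.Sequential([
    layers.Embedding(VOCAB_SIZE, 128),        # headline words -> vectors
    layers.Conv1D(64, 3, activation="relu"),  # local n-gram features
    layers.MaxPooling1D(2),
    layers.LSTM(64),                          # sequence summary
    layers.Dense(1, activation="sigmoid"),    # clickbait score in [0, 1]
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```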

The trouble is, what exactly is a clickbait headline? It’s a tough question. The AI’s team – from the International Institute of Information Technology in Hyderabad, the Manipal Institute of Technology, and Birla Institute of Technology, in India – decided to rely on the venerable Merriam-Webster dictionary to define clickbait.

Source: Oi, clickbait cop bot, jam this in your neural net: Hot new AI threatens to DESTROY web journos • The Register

Facebook Wanted to Kill This Investigative ‘People You May Know’ Tool

Last year, we launched an investigation into how Facebook’s People You May Know tool makes its creepily accurate recommendations. By November, we had it mostly figured out: Facebook has nearly limitless access to all the phone numbers, email addresses, home addresses, and social media handles most people on Earth have ever used. That, plus its deep mining of people’s messaging behavior on Android, means it can make surprisingly insightful observations about who you know in real life—even if it’s wrong about your desire to be “friends” with them on Facebook.

In order to help conduct this investigation, we built a tool to keep track of the people Facebook thinks you know. Called the PYMK Inspector, it captures every recommendation made to a user for however long they want to run the tool. It’s how one of us discovered Facebook had linked us with an unknown relative. In January, after hiring a third party to do a security review of the tool, we released it publicly on Github for users who wanted to study their own People You May Know recommendations. Volunteers who downloaded the tool helped us explore whether you’ll show up in someone’s People You May Know after you look at their profile. (Good news for Facebook stalkers: Our experiment found you won’t be recommended as a friend just based on looking at someone’s profile.)

Facebook wasn’t happy about the tool.

The day after we released it, a Facebook spokesperson reached out asking to chat about it, and then told us that the tool violated Facebook’s terms of service, because it asked users to give it their username and password so that it could sign in on their behalf. Facebook’s TOS states that, “You will not solicit login information or access an account belonging to someone else.” They said we would need to shut down the tool (which was impossible because it’s an open source tool) and delete any data we collected (which was also impossible because the information was stored on individual users’ computers; we weren’t collecting it centrally).

We argued that we weren’t seeking access to users’ accounts or collecting any information from them; we had just given users a tool to log into their own accounts on their own behalf, to collect information they wanted collected, which was then stored on their own computers. Facebook disagreed and escalated the conversation to their head of policy for Facebook’s Platform, who said they didn’t want users entering their Facebook credentials anywhere that wasn’t an official Facebook site—because anything else is bad security hygiene and could open users up to phishing attacks. She said we needed to take our tool off Github within a week.

Source: Facebook Wanted Us to Kill This Investigative Tool

It’s either legal to port-scan someone without consent or it’s not, fumes researcher: Halifax bank port scans you when you visit the page

Halifax Bank scans the machines of surfers that land on its login page whether or not they are customers, it has emerged.

Security researcher Paul Moore has made his objection to this practice – in which the British bank is not alone – clear, even though it is done for good reasons. The researcher claimed that performing port scans on visitors without permission is a violation of the UK’s Computer Misuse Act (CMA).

Halifax has disputed this, arguing that the port scans help it pick up evidence of malware infections on customers’ systems. The scans are legal, Halifax told Moore in response to a complaint he made on the topic last month.

When you visit the Halifax login page, even before you’ve logged in, JavaScript on the site, running in the browser, attempts to scan for open ports on your local computer to see if remote desktop or VNC services are running, and looks for some general remote access trojans (RATs) – backdoors, in other words. Crooks are known to abuse these remote services to snoop on victims’ banking sessions.
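Halifax runs its checks from in-page JavaScript, but the underlying probe is trivial. As a concept demo, here is the same localhost check in a few lines of Python; the port list is my guess at typical remote-access services, not the bank's actual target list:

```python
# Concept demo: probe localhost for common remote-access ports, as the
# bank's in-page script reportedly does. The port list is illustrative.
import socket

REMOTE_ACCESS_PORTS = {3389: "RDP", 5900: "VNC", 5938: "TeamViewer"}

for port, name in REMOTE_ACCESS_PORTS.items():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(0.3)
        is_open = sock.connect_ex(("127.0.0.1", port)) == 0
    print(f"{name:<10} port {port}: {'OPEN' if is_open else 'closed'}")
```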

Moore said he wouldn’t have an issue if Halifax carried out the security checks on people’s computers after they had logged on. It’s the lack of consent and the scanning of any visitor that bothers him. “If they ran the script after you’ve logged in… they’d end up with the same end result, but they wouldn’t be scanning visitors, only customers,” Moore said.

Halifax told Moore: “We have to port scan your machine for security reasons.”

Having failed to either persuade Halifax Bank to change its practices or Action Fraud to act (thus far), Moore last week launched a fundraising effort to privately prosecute Halifax Bank for allegedly breaching the Computer Misuse Act. This crowdfunding effort on GoFundMe aims to gather £15,000 (so far just £50 has been raised).

Halifax Bank’s “unauthorised” port scans are a clear violation of the CMA – and amount to the kind of action that security researchers are frequently criticised and/or convicted for, Moore argued. The CISO and part-time security researcher hopes his efforts in this matter might result in a clarification of the law.

“Ultimately, we can’t have it both ways,” Moore told El Reg. “It’s either legal to port scan someone without consent, or with consent but no malicious intent, or it’s illegal and Halifax need to change their deployment to only check customers, not visitors.”

The whole effort might smack of tilting at windmills, but Moore said he was acting on a point of principle.

“If security researchers operate in a similar fashion, we almost always run into the CMA, even if their intent isn’t malicious. The CMA should be applied fairly to both parties.”

Source: Bank on it: It’s either legal to port-scan someone without consent or it’s not, fumes researcher • The Register

Critical OpenEMR Flaws Left Medical Records Vulnerable

Security researchers have found more than 20 bugs in the world’s most popular open source software for managing medical records. Many of the vulnerabilities were classified as severe, leaving the personal information of an estimated 90 million patients exposed to bad actors.

OpenEMR is open source software that’s used by medical offices around the world to store records, handle schedules, and bill patients. According to researchers at Project Insecurity, it was also a bit of a security nightmare before a recent audit recommended a range of vital fixes.

The firm reached out to OpenEMR in July to discuss concerns it had about the software’s code. On Tuesday a report was released detailing the issues that included: “a portal authentication bypass, multiple instances of SQL injection, multiple instances of remote code execution, unauthenticated information disclosure, unrestricted file upload, CSRFs including a CSRF to RCE proof of concept, and unauthenticated administrative actions.”
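To illustrate the most common of those classes: SQL injection happens when user input is spliced directly into a query string. A generic demo follows (OpenEMR itself is PHP; this is not its code), along with the standard parameterized fix:

```python
# Generic SQL-injection demo, not OpenEMR's code. The first query splices
# attacker input into SQL; the second binds it as a parameter.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patients (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO patients VALUES (?, ?)",
                 [(1, "Alice"), (2, "Bob")])

user_input = "'' OR 1=1"  # attacker-controlled "name"

# Vulnerable: the WHERE clause becomes always-true, leaking every row.
leaked = conn.execute(
    f"SELECT * FROM patients WHERE name = {user_input}").fetchall()
print(leaked)  # [(1, 'Alice'), (2, 'Bob')]

# Fixed: the input is bound as a literal string and matches nothing.
safe = conn.execute(
    "SELECT * FROM patients WHERE name = ?", (user_input,)).fetchall()
print(safe)    # []
```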

Eighteen of the bugs were designated as having a “high” severity and could’ve been exploited by hackers with low-level access to systems running the software. Patches have been released to users and cloud customers.

OpenEMR’s project administrator Brady Miller told the BBC, “The OpenEMR community takes security seriously and considered this vulnerability report high priority since one of the reported vulnerabilities did not require authentication.”

Source: Critical OpenEMR Flaws Left Medical Records Vulnerable
