Azure, Office 365 go super-secure: Multi-factor auth borked in Europe, Asia, USA – >6-hour outage from MS – yay!

Happy Monday, everyone! Azure Multi-Factor Authentication is struggling, meaning that some users with the functionality enabled are now super secure. And, er, locked out.

Microsoft confirmed that there were problems from 04:39 UTC with a subset of customers in Europe, the Americas, and Asia-Pacific experiencing “difficulties signing into Azure resources” such as the, er, little used Azure Active Directory, when Multi-Factor Authentication (MFA) is enabled.

Six hours later, and the problems are continuing.

The Office 365 health status page reported: “Affected users may be unable to sign in using MFA,” and Azure’s own status page confirmed that there are “issues connecting to Azure resources” thanks to the borked MFA.

Source: Azure, Office 365 go super-secure: Multi-factor auth borked in Europe, Asia, USA • The Register

Cloud!

Dutch Gov sees Office 365 spying on you, sending your texts to US servers without recourse or knowledge

The Dutch government’s report shows that the telemetry function of all Office 365 and Office ProPlus applications forwards, among other things, email subject lines and words/sentences written using the spelling checker or translation function to systems in the United States.

This goes so far that, if a user presses the backspace key several times in a row, the telemetry function collects and forwards both the sentence before the correction and the one after it. Users are not informed of this, and have no way to stop this data collection or to inspect the data that has been collected.

The Dutch government carried out this investigation together with Privacy Company. “Microsoft may not store this temporary, functional data unless retention is strictly necessary, for example for security purposes,” writes Sjoera Nas of Privacy Company in a blog post.

Source: Je wordt bespied door Office 365-applicaties – Webwereld

LastPass: Five-hour outage drives netizens bonkers

LastPass’s cloud service suffered a five-hour outage today that left some people unable to use the password manager to log into their internet accounts.

Its makers said offline mode wasn’t affected – and that only its cloud-based password storage fell offline – although some Twitter folks disagreed. One claimed to be unable to log into any accounts whether in “local or remote” mode of the password manager, while another couldn’t access their local vault.

The solution, apparently, was to disconnect from the network. That forced LastPass to use account passwords cached on the local machine, rather than pull down credentials from its cloud-hosted password vaults. Folks store login details remotely using LastPass so they can be used and synchronized across multiple devices, backed up in the cloud, shared securely with colleagues, and so on.
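That cloud-first, cache-fallback lookup pattern can be sketched as follows. This is a hypothetical illustration of the behaviour described above, not LastPass’s actual code or API – the class and method names are invented:

```python
# Hypothetical sketch of a cloud-first password lookup with a local-cache
# fallback -- not LastPass's real implementation.

class VaultClient:
    def __init__(self, cloud_fetch, local_cache):
        self.cloud_fetch = cloud_fetch    # callable: site -> password (may raise)
        self.local_cache = local_cache    # offline copy, refreshed on success

    def get_password(self, site):
        try:
            secret = self.cloud_fetch(site)   # normal path: pull from the cloud vault
            self.local_cache[site] = secret   # keep the offline copy fresh
            return secret
        except ConnectionError:
            # Cloud unreachable (as in the outage): fall back to the cached copy.
            if site in self.local_cache:
                return self.local_cache[site]
            raise
```

Disconnecting from the network effectively forces every lookup down the `except` branch, which is why pulling the cable worked as a stopgap for some users.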

The problems first emerged at 1408 UTC on November 20, with netizens reporting an “intermittent connectivity issue” when trying to use LastPass to fill in their passwords to log into their internet accounts. Unlucky punters were, therefore, unable to get into their accounts because LastPass couldn’t cough up the necessary passwords from its cloud.

The software’s net admins worked fast, according to the organisation’s status page. Within seven minutes of trouble, the outfit posted: “The Network Operations Center have identified the issue and are working to resolve the issue.”

The biz also reassured users that there was no security vulnerability, exploit, or hack attack involved.

Connectivity is a recurrent theme in LastPass outages: in May, LogMeIn, the developers behind LastPass, suffered a DNS error in the UK that locked Blighty out of the service.

The service returned at around 2000 UTC today, when the status team posted: “We have confirmed that internal tests are working fine and LastPass is operational. We are continuing to monitor the situation to ensure there are no further issues.”

Source: LastPass? More like lost pass. Or where the fsck has it gone pass. Five-hour outage drives netizens bonkers • The Register

Cloud!

Human images from world’s first total-body scanner unveiled

EXPLORER, the world’s first medical imaging scanner that can capture a 3-D picture of the whole human body at once, has produced its first scans.

The brainchild of UC Davis scientists Simon Cherry and Ramsey Badawi, EXPLORER is a combined positron emission tomography (PET) and X-ray computed tomography (CT) scanner that can image the entire body at the same time. Because the machine captures radiation far more efficiently than other scanners, EXPLORER can produce an image in as little as one second and, over time, produce movies that can track specially tagged drugs as they move around the entire body.

The developers expect the technology will have countless applications, from improving diagnostics to tracking disease progression to researching new drug therapies.

The first images from scans of humans using the new device will be shown at the upcoming Radiological Society of North America meeting, which starts on Nov. 24th in Chicago. The scanner has been developed in partnership with Shanghai-based United Imaging Healthcare (UIH), which built the system based on its latest technology platform and will eventually manufacture the devices for the broader healthcare market.

“While I had imagined what the images would look like for years, nothing prepared me for the incredible detail we could see on that first scan,” said Cherry, distinguished professor in the UC Davis Department of Biomedical Engineering. “While there is still a lot of careful analysis to do, I think we already know that EXPLORER is delivering roughly what we had promised.”

EXPLORER image showing glucose metabolism throughout the entire human body. This is the first time a medical imaging scanner has been able to capture a 3D image of the entire human body simultaneously. Credit: UC Davis and Zhongshan Hospital, Shanghai

Badawi, chief of Nuclear Medicine at UC Davis Health and vice-chair for research in the Department of Radiology, said he was dumbfounded when he saw the first images, which were acquired in collaboration with UIH and the Department of Nuclear Medicine at the Zhongshan Hospital in Shanghai.

“The level of detail was astonishing, especially once we got the reconstruction method a bit more optimized,” he said. “We could see features that you just don’t see on regular PET scans. And the dynamic sequence showing the radiotracer moving around the body in three dimensions over time was, frankly, mind-blowing. There is no other device that can obtain data like this in humans, so this is truly novel.”

Source: Human images from world’s first total-body scanner unveiled

Talk about a cache flow problem: This JavaScript can snoop on other browser tabs to work out what you’re visiting

Computer science boffins have demonstrated a side-channel attack technique that bypasses recently-introduced privacy defenses, and makes even the Tor browser subject to tracking. The result: it is possible for malicious JavaScript in one web browser tab to spy on other open tabs, and work out which websites you’re visiting.

This information can be used to target adverts at you based on your interests, or otherwise work out the kind of stuff you’re into, and keep it on file for future reference.

Researchers Anatoly Shusterman, Lachlan Kang, Yarden Haskal, Yosef Meltser, Prateek Mittal, Yossi Oren, and Yuval Yarom – from Ben-Gurion University of the Negev in Israel, the University of Adelaide in Australia, and Princeton University in the US – have devised a processor cache-based website fingerprinting attack that uses JavaScript for gathering data to identify visited websites.

The technique is described in a paper recently distributed through arXiv, titled “Robust Website Fingerprinting Through the Cache Occupancy Channel.”

“The attack we demonstrated compromises ‘human secrets’: by finding out which websites a user accesses, it can teach the attacker things like a user’s sexual orientation, religious beliefs, political opinions, health conditions, etc.,” said Yossi Oren (Ben-Gurion University) and Yuval Yarom (University of Adelaide) in an email to The Register this week.

It’s thus not as serious as a remote attack technique that allows the execution of arbitrary code or exposes kernel memory, but Oren and Yarom speculate that there may be ways their browser fingerprinting method could be adapted to compromise computing secrets like encryption keys or vulnerable installed software.
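The core of the cache-occupancy measurement can be sketched in a few lines – here in Python rather than the researchers’ JavaScript, with illustrative buffer sizes and a coarse timer, purely to show the shape of the technique:

```python
# Toy sketch of the cache-occupancy idea (illustrative parameters): a spy
# repeatedly sweeps its own large buffer and times each sweep. Activity in
# another tab evicts the spy's data from the shared last-level cache, slowing
# the sweeps, so the time series forms a trace that can be matched against
# fingerprints of known websites.
import time

def sweep_time(buffer, stride=64):
    """Time one pass over the buffer, touching one byte per cache line."""
    start = time.perf_counter_ns()
    total = 0
    for i in range(0, len(buffer), stride):
        total += buffer[i]        # the memory access is what we're timing
    return time.perf_counter_ns() - start

def occupancy_trace(n_samples=100, size=1 << 20):
    """Collect a timing trace; in the attack this is the classifier's input."""
    buffer = bytearray(size)      # stand-in for the spy's eviction buffer
    return [sweep_time(buffer) for _ in range(n_samples)]
```

In the actual attack, traces like this are fed to a classifier trained on the timing signatures that popular websites leave behind while loading.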

Source: Talk about a cache flow problem: This JavaScript can snoop on other browser tabs to work out what you’re visiting • The Register

Facebook files patent to find out more about you by looking at the background items in your pictures and pictures you are tagged in

An online system predicts household features of a user, e.g., household size and demographic composition, based on image data of the user, e.g., profile photos, photos posted by the user and photos posted by other users socially connected with the user, and textual data in the user’s profile that suggests relationships among individuals shown in the image data of the user. The online system applies one or more models trained using deep learning techniques to generate the predictions. For example, a trained image analysis model identifies each individual depicted in the photos of the user; a trained text analysis model derives household member relationship information from the user’s profile data and tags associated with the photos. The online system uses the predictions to build more information about the user and his/her household in the online system, and provide improved and targeted content delivery to the user and the user’s household.

Source: United States Patent Application: 0180332140

Most ATMs can be hacked in under 20 minutes

An extensive testing session carried out by bank security experts at Positive Technologies has revealed that most ATMs can be hacked in under 20 minutes – and even less in certain types of attacks.

Experts tested ATMs from NCR, Diebold Nixdorf, and GRGBanking, and detailed their findings in a 22-page report published this week.

The attacks they tried are the typical types of exploits and tricks used by cyber-criminals seeking to obtain money from the ATM safe or to copy the details of users’ bank cards (also known as skimming).

atm-network-attack.png
Image: Positive Technologies

Experts said that 85 percent of the ATMs they tested allowed an attacker access to the network. The research team did this by either unplugging and tapping into Ethernet cables, or by spoofing wireless connections or devices to which the ATM usually connected.

Researchers said that 27 percent of the tested ATMs were vulnerable to having their processing center communications spoofed, while 58 percent of tested ATMs had vulnerabilities in their network components or services that could be exploited to control the ATM remotely.

Furthermore, 23 percent of the tested ATMs could be attacked and exploited by targeting other network devices connected to the ATM, such as GSM modems or routers.

“Consequences include disabling security mechanisms and controlling output of banknotes from the dispenser,” researchers said in their report.

PT experts said that the typical “network attack” took under 15 minutes to execute, based on their tests.

atm-black-box-attack.png
Image: Positive Technologies

But in case ATM hackers were looking for a faster way in, researchers also found that Black Box attacks were the fastest, usually taking under 10 minutes to pull off.

A Black Box attack is when a hacker either opens the ATM case or drills a hole in it to reach the cable connecting the ATM’s computer to the ATM’s cash box (or safe). Attackers then connect a custom-made tool, called a Black Box, that tricks the ATM into dispensing cash on demand.

PT says that 69 percent of the ATMs they tested were vulnerable to such attacks and that on 19 percent of ATMs, there were no protections against Black Box attacks at all.

atm-exit-kiosk-mode-attack.png
Image: Positive Technologies

Another way through which researchers attacked the tested ATMs was by trying to exit kiosk mode – the OS mode in which the ATM interface runs.

Researchers found that by plugging a device into one of the ATM’s USB or PS/2 interfaces, they could pluck the ATM from kiosk mode and run commands on the underlying OS to cash out money from the ATM safe.

The PT team says this attack usually takes under 15 minutes, and that 76 percent of the tested ATMs were vulnerable.

atm-hard-drive-attack.png
Image: Positive Technologies

Another attack, and the one that took the longest to pull off but yielded the best results, was one during which researchers bypassed the ATM’s internal hard drive and booted from an external one.

PT experts said that 92 percent of the ATMs they tested were vulnerable. This happened because the ATMs either didn’t have a BIOS password, used one that was easy to guess, or didn’t use disk data encryption.

Researchers said that during their tests, which normally didn’t take more than 20 minutes, they changed the boot order in the BIOS, booted the ATM from their own hard drive, and made changes to the ATM’s normal OS on the legitimate hard drive, changes which could permit cash outs or ATM skimming operations.

atm-boot-mode-attack.png
Image: Positive Technologies

In another test, PT researchers also found that attackers with physical access to the ATM could restart the device and force it to boot into a safe/debug mode.

This, in turn, would allow the attackers access to various debug utilities or COM ports through which they could infect the ATM with malware.

The attack took under 15 minutes to execute, and researchers found that 42 percent of the ATMs they tested were vulnerable.

atm-card-data-transfer-attack.png
Image: Positive Technologies

Last but not least, the most depressing results came from the tests of how ATMs transmit card data internally, or to the bank.

PT researchers said they were able to intercept card data sent between the tested ATMs and a bank processing center in 58 percent of the cases, but they were 100 percent successful in intercepting card data while it was processed internally inside the ATM, such as when it was transmitted from the card reader to the ATM’s OS.

This attack also took under 15 minutes to pull off. Taking into account that most real-world ATM attacks happen during the night and target ATMs in isolated locations, 20 minutes is more than enough for most criminal operations.

“More often than not, security mechanisms are a mere nuisance for attackers: our testers found ways to bypass protection in almost every case,” the PT team said. “Since banks tend to use the same configuration on large numbers of ATMs, a successful attack on a single ATM can be easily replicated at greater scale.”

The following ATMs were tested.

atms-tested.jpg

Source: Most ATMs can be hacked in under 20 minutes | ZDNet

Microsoft slips ads into Windows 10 Mail client – then U-turns so hard, it warps fabric of reality – Windows is an OS, not a service!

Microsoft was, and maybe still is, considering injecting targeted adverts into the Windows 10 Mail app.

The ads would appear at the top of inboxes of folks using the client without a paid-for Office 365 subscription, and the advertising would be tailored to their interests. Revenues from the banners were hoped to help keep Microsoft afloat, which banked just $16bn in profit in its latest financial year.

According to Aggiornamenti Lumia on Friday, folks using Windows Insider fast-track builds of Mail and Calendar, specifically version 11605.11029.20059.0, may have seen the ads in among their messages, depending on their location. Users in Brazil, Canada, Australia, and India were chosen as guinea pigs for this experiment.

A now-deleted FAQ on the Office.com website about the “feature” explained the advertising space would be sold off to help Microsoft “provide, support, and improve some of our products,” just like Gmail and Yahoo! Mail display ads.

Also, the advertising is targeted, by monitoring what you get up to with apps and web browsing, and using demographic information you disclose:

Windows generates a unique advertising ID for each user on a device. When the advertising ID is enabled, both Microsoft apps and third-party apps can access and use the advertising ID in much the same way that websites can access and use a unique identifier stored in a cookie. Mail uses this ID to provide more relevant advertising to you.

You have full control of Windows and Mail having access to this information and can turn off interest-based advertising at any time. If you turn off interest-based advertising, you will still see ads but they will no longer be as relevant to your interests.

Microsoft does not use your personal information, like the content of your email, calendar, or contacts, to target you for ads. We do not use the content in your mailbox or in the Mail app.

You can also close an ad banner by clicking on its trash can icon, or get rid of them completely by coughing up cash:

You can permanently remove ads by buying an Office 365 Home or Office 365 Personal subscription.

Here’s where reality is thrown into a spin. Microsoft PR supremo Frank Shaw said a few hours ago, after the ads were spotted:

This was an experimental feature that was never intended to be tested broadly and it is being turned off.

Never intended to be tested broadly, and shut down immediately – yet until it was clocked, it had an official FAQ on Office.com, which was also hastily nuked from orbit, and was rolled out in highly populated nations. Talk about being caught with a hand in the cookie jar.

Source: Microsoft slips ads into Windows 10 Mail client – then U-turns so hard, it warps fabric of reality • The Register

A 100,000-router botnet is feeding on a 5-year-old UPnP bug in Broadcom chips (lots of different routers have this chip!)

A recently discovered botnet has taken control of an eye-popping 100,000 home and small-office routers made by a range of manufacturers, mainly by exploiting a critical vulnerability that has remained unaddressed on infected devices more than five years after it came to light.

Researchers from Netlab 360, who reported the mass infection late last week, have dubbed the botnet BCMUPnP_Hunter. The name is a reference to a buggy implementation of the Universal Plug and Play protocol built into Broadcom chipsets used in vulnerable devices. An advisory released in January 2013 warned that the critical flaw affected routers from a raft of manufacturers, including Broadcom, Asus, Cisco, TP-Link, Zyxel, D-Link, Netgear, and US Robotics. The finding from Netlab 360 suggests that many vulnerable devices were allowed to run without ever being patched or locked down through other means.

Last week’s report documents 116 different types of devices that make up the botnet from a diverse group of manufacturers. Once under the attackers’ control, the routers connect to a variety of well-known email services. This is a strong indication that the infected devices are being used to send spam or other types of malicious mail.

Universal Plug and Play

UPnP is designed to make it easy for computers, printers, phones, and other devices to connect to local networks using code that lets them automatically discover each other. The protocol often eliminates the hassle of figuring out how to configure devices the first time they’re connected. But UPnP, as researchers have warned for years, often opens up serious holes inside the networks that use it. In some cases, UPnP bugs cause devices to respond to discovery requests sent from outside the network. Hackers can exploit the weakness in a way that allows them to take control of the devices. UPnP weaknesses can also allow hackers to bypass firewall protections.
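For illustration, this is the SSDP discovery probe that UPnP devices listen for. A correctly deployed router only answers it on the LAN side; the 2013 advisory concerned stacks that also responded when such probes arrived from the internet. A minimal sketch – it only builds the message, nothing is sent:

```python
# SSDP M-SEARCH discovery request, per the UPnP Device Architecture spec.
# Builder only -- actually probing third-party devices would be a scan.

SSDP_ADDR, SSDP_PORT = "239.255.255.250", 1900   # well-known SSDP multicast endpoint

def build_msearch(search_target="ssdp:all", mx=2):
    """Return an M-SEARCH discovery request as raw bytes."""
    return (
        "M-SEARCH * HTTP/1.1\r\n"
        f"HOST: {SSDP_ADDR}:{SSDP_PORT}\r\n"
        'MAN: "ssdp:discover"\r\n'
        f"MX: {mx}\r\n"                  # seconds devices may wait before replying
        f"ST: {search_target}\r\n"       # search target: ssdp:all = everything
        "\r\n"
    ).encode("ascii")
```

Any device that replies to this from across the internet is, by definition, exposing its UPnP stack to the world – which is exactly the population the botnet feeds on.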

Source: A 100,000-router botnet is feeding on a 5-year-old UPnP bug in Broadcom chips | Ars Technica

The Art Institute of Chicago has scanned thousands of artworks in high resolution, many of them in the public domain

Discover art by Van Gogh, Picasso, Warhol & more in the Art Institute’s collection spanning 5,000 years of creativity.

Source: Discover Art & Artists | The Art Institute of Chicago

Can AI Create True Art?

Just last month, AI-generated art arrived on the world auction stage under the auspices of Christie’s, proving that artificial intelligence can not only be creative but also produce world class works of art—another profound AI milestone blurring the line between human and machine.

Naturally, the news sparked debates about whether the work produced by Paris-based art collective Obvious could really be called art at all. Popular opinion among creatives is that art is a process by which human beings express some idea or emotion, filter it through personal experience and set it against a broader cultural context—suggesting then that what AI generates at the behest of computer scientists is definitely not art, nor in any way creative.

By artist #2 (see bottom of story for key). Credit: Artwork Commissioned by GumGum

The story raised additional questions about ownership. In this circumstance, who can really be named as author? The algorithm itself or the team behind it? Given that AI is taught and programmed by humans, has the human creative process really been identically replicated or are we still the ultimate masters?

AI VERSUS HUMAN

At GumGum, an AI company that focuses on computer vision, we wanted to explore the intersection of AI and art by devising a Turing Test of our own in association with Rutgers University’s Art and Artificial Intelligence Lab and Cloudpainter, an artificially intelligent painting robot. We were keen to see whether AI can, in fact, replicate the intent and imagination of traditional artists, and we wanted to explore the potential impact of AI on the creative sector.

By artist #3 (see bottom of story for key). Credit: Artwork Commissioned by GumGum

To do this, we enlisted a broad collection of diverse artists from “traditional” paint-on-canvas artists to 3-D rendering and modeling artists alongside Pindar Van Arman—a classically trained artist who has been coding art robots for 15 years. Van Arman was tasked with using his Cloudpainter machine to create pieces of art based on the same data set as the more traditional artists. This data set was a collection of art by 20th century American Abstract Expressionists. Then, we asked them to document the process, showing us their preferred tools and telling us how they came to their final work.

By artist #4 (see bottom of story for key). Credit: Artwork Commissioned by GumGum

Intriguingly, while at face value the AI artwork was indistinguishable from that of the more traditional artists, the test highlighted that the creative spark and ultimate agency behind creating a work of art is still very much human. Even though the Cloudpainter machine has evolved over time to become a highly intelligent system capable of making creative decisions of its own accord, the final piece of work could only be described as a collaboration between human and machine. Van Arman served as more of an “art director” for the painting. Although Cloudpainter made all of the aesthetic decisions independently, the machine was given parameters to meet and was programmed to refine its results in order to deliver the desired outcome. This was not too dissimilar to the process used by Obvious and their GAN AI tool.

By artist #5 (see bottom of story for key). Credit: Artwork Commissioned by GumGum

Moreover, until AI can be programmed to absorb inspiration, crave communication and want to express something in a creative way, the work it creates on its own simply cannot be considered art without the intention of its human masters. Creatives working with AI find the process to be more about negotiation than experimentation. It’s clear that even in the creative field, sophisticated technologies can be used to enhance our capabilities—but crucially they still require human intelligence to define the overarching rules and steer the way.

THERE’S AN ACTIVE ROLE BETWEEN ART AND VIEWER

How traditional art purveyors react to AI art on the world stage is yet to be seen, but in the words of Leandro Castelao—one of the artists we enlisted for the study—“there’s an active role between the piece of art and the viewer. In the end, the viewer is the co-creator, transforming, re-creating and changing.” This is a crucial point; when it’s difficult to tell AI art apart from human art, the old adage that beauty is in the eye of the beholder rings particularly true.

Source: Can AI Create True Art? – Scientific American Blog Network

AIs Are Getting Better At Playing Video Games…By Cheating

Earlier this year, researchers tried teaching an AI to play the original Sonic the Hedgehog as part of the OpenAI Retro Contest. The AI was told to prioritize increasing its score, which in Sonic means doing stuff like defeating enemies and collecting rings while also trying to beat a level as fast as possible. This dogged pursuit of one particular definition of success led to strange results: In one case, the AI began glitching through walls in the game’s water zones in order to finish more quickly.

It was a creative solution to the problem laid out in front of the AI, which ended up discovering accidental shortcuts while trying to move right. But it wasn’t quite what the researchers had intended. One of the researchers’ goals with machine-learning AIs in gaming is to try to emulate player behavior by feeding them large amounts of player-generated data. In effect, the AI watches humans conduct an activity, like playing through a Sonic level, and then tries to do the same, while being able to incorporate its own attempts into its learning. In a lot of instances, machine learning AIs end up taking their directions literally. Instead of completing a variety of objectives, a machine-learning AI might try to take shortcuts that completely upend human beings’ understanding of how a game should be played.

GIF: OpenAI (Sonic)

Victoria Krakovna, a researcher on Google’s DeepMind AI project, has spent the last several months collecting examples like the Sonic one. Her growing collection has recently drawn new attention after being shared on Twitter by Jim Crawford, developer of the puzzle series Frog Fractions, among other developers and journalists. Each example includes what she calls “reinforcement learning agents hacking the reward function,” which results in part from unclear directions on the part of the programmers.

“While ‘specification gaming’ is a somewhat vague category, it is particularly referring to behaviors that are clearly hacks, not just suboptimal solutions,” she wrote in her initial blog post on the subject. “A classic example is OpenAI’s demo of a reinforcement learning agent in a boat racing game going in circles and repeatedly hitting the same reward targets instead of actually playing the game.”
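The boat-race example above can be reduced to a toy illustration: score an agent purely on accumulated reward, add a respawning reward source, and a reward-maximizing policy will loop over it instead of finishing. Everything here – the track, the reward values, both policies – is invented for illustration:

```python
# Invented toy example of an agent "hacking the reward function". Track
# positions run 0..5 with the finish at 5; a reward target at position 2
# respawns every step. A policy maps position -> move (+1 or -1).

def run_agent(policy, steps=40):
    pos, score = 0, 0
    for _ in range(steps):
        pos = max(0, min(5, pos + policy(pos)))
        if pos == 2:
            score += 1            # respawning target: the reward loophole
        if pos == 5:
            score += 10           # finishing bonus, paid once
            break
    return score

finisher = lambda pos: 1                      # heads straight for the finish
exploiter = lambda pos: 1 if pos < 2 else -1  # circles the respawning target
```

Scored purely on reward, the circling policy wins (20 vs 11 over 40 steps) even though it never finishes – the same shape as the boat going in circles over the same reward targets instead of racing.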

Source: AIs Are Getting Better At Playing Video Games…By Cheating

Couple Who Ran retro ROM Site (with games you can’t buy any more) to Pay Nintendo $12 Million

Nintendo has won a lawsuit seeking to take two large retro-game ROM sites offline, on charges of copyright infringement. The judgement, made public today, ruled in Nintendo’s favour, stating that the owners of the sites LoveROMS.com and LoveRETRO.co will have to pay a total settlement of $12 million to Nintendo. The complaint was originally filed by the company in an Arizona federal court in July, and has since led to a swift wave of self-censorship by popular retro and emulator ROM sites, which fear they may be sued by Nintendo as well.

LoveROMS.com and LoveRETRO.co were the joint property of the couple Jacob and Cristian Mathias, before Nintendo sued them for what it called “brazen and mass-scale infringement of Nintendo’s intellectual property rights.” The suit never went to court; instead, the couple sought to settle after accepting the charge of direct and indirect copyright infringement. TorrentFreak reports that a permanent injunction, prohibiting them from using, sharing, or distributing Nintendo ROMs or other materials again in the future, has been included in the settlement. Additionally, all games, game files, and emulators previously on the site and in their custody must be handed over to the Japanese game developer, along with the $12.23 million settlement figure. It is unlikely, as TorrentFreak has reported, that the couple will be obligated to pay the full figure; a smaller settlement has likely been negotiated in private.

Instead, the purpose of the enormous settlement amount is to act as a warning or deterrent to other ROM and emulator sites surviving on the internet. And it’s working.

Motherboard previously reported on the way in which Nintendo’s legal crusade against retro ROM and emulator sites is swiftly eroding a large chunk of retro gaming. The impact of this campaign on video games as a whole is potentially catastrophic. Not all games have been preserved adequately by game publishers and developers. Some are locked down to specific regions and haven’t ever been widely accessible.

The accessibility of video games and the gaming industry has always been defined and limited by economic boundaries. There are a multitude of reasons why retro games can’t be easily or reliably accessed by prospective players, and by wiping out ROM sites Nintendo is erasing huge chunks of gaming history. Limiting the accessibility of old retro titles to this extent will undoubtedly affect the future of video games, with classic titles that shaped modern games and gaming development being kept under lock and key by the monolithic hand of powerful game developers.

Since the filing of the suit in July, EmuParadise, a haven for retro games and emulator titles, has shut down. Many other sites have followed suit.

Source: Couple Who Ran ROM Site to Pay Nintendo $12 Million – Motherboard

Wow, that’s a surefire way to piss off your fans, Nintendo!

Rocket Lab’s Modest Launch Is Giant Leap for Small Rocket Business: BTW it didn’t blow up, Elon!

A small rocket from a little-known company lifted off Sunday from the east coast of New Zealand, carrying a clutch of tiny satellites. That modest event — the first commercial launch by a U.S.-New Zealand company known as Rocket Lab — could mark the beginning of a new era in the space business, where countless small rockets pop off from spaceports around the world. This miniaturization of rockets and spacecraft places outer space within reach of a broader swath of the economy.

The rocket, called the Electron, is a mere sliver compared to the giant rockets that Elon Musk, of SpaceX, and Jeffrey P. Bezos, of Blue Origin, envisage using to send people into the solar system. It is just 56 feet tall and can carry only 500 pounds into space.

But Rocket Lab is aiming for markets closer to home.

“We’re FedEx,” said Peter Beck, the New Zealand-born founder and chief executive of Rocket Lab. “We’re a little man that delivers a parcel to your door.”

Behind Rocket Lab, a host of start-up companies are also jockeying to provide transportation to space for a growing number of small satellites. The payloads include constellations of telecommunications satellites that would provide the world with ubiquitous internet access.

The payload of this mission, which Rocket Lab whimsically named “It’s Business Time,” offered a glimpse of this future: two ship-tracking satellites for Spire Global; a small climate- and environment-monitoring satellite for GeoOptics; a small probe built by high school students in Irvine, Calif.; and a demonstration version of a drag sail that would pull defunct satellites out of orbit.

Space Angels, a space-business investment firm, is tracking 150 small launch companies. Chad Anderson, Space Angels’ chief executive, said that although the vast majority of these companies will fail, a small group possesses the financing and engineering wherewithal to get off the ground.

Each company on Mr. Anderson’s list proffers its own twist in business plan or capability:

  • Vector Launch Inc. aims for mass production;

  • Virgin Orbit, a piece of Richard Branson’s business empire, will drop its rockets from the bottom of a 747 at 35,000 feet up;

  • Relativity Space plans to 3-D print almost all pieces of its rockets;

  • Firefly Aerospace will offer a slightly larger rocket in a bet that the small satellites will grow a bit in size and weight;

  • Gilmour Space Technologies is a rare Australian aerospace company;

  • And Astra Space Inc., operating in stealth mode like a Silicon Valley start-up, is saying nothing about what it is doing.

[Image: Daniel Bryce, a manufacturing operations manager, working on a satellite at Spire Global in Glasgow. Credit: Andy Buchanan/Agence France-Presse via Getty Images]

Rockets are shrinking, because satellites are shrinking.

In the past, hulking telecommunications satellites hovered 22,000 miles above the Equator in what is known as a geosynchronous orbit, where a satellite continuously remains over the same spot on Earth. Because sending a satellite there was so expensive, it made sense to pack as much as possible into each one.
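
That 22,000-mile figure falls out of the physics: a geostationary orbit is simply the altitude where the orbital period equals one sidereal day. A quick back-of-the-envelope check using Kepler's third law (an illustrative script, not from the article):

```python
import math

# Kepler's third law: r^3 = GM * T^2 / (4 * pi^2)
GM_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
SIDEREAL_DAY = 86164.1      # seconds: one rotation relative to the stars
EQ_RADIUS_M = 6.378137e6    # Earth's equatorial radius, m
METERS_PER_MILE = 1609.344

def geostationary_altitude_miles():
    """Altitude above the Equator where orbital period equals one sidereal day."""
    r = (GM_EARTH * SIDEREAL_DAY**2 / (4 * math.pi**2)) ** (1 / 3)
    return (r - EQ_RADIUS_M) / METERS_PER_MILE

print(round(geostationary_altitude_miles()))  # ~22,236 miles
```

The exact value is closer to 22,236 miles; the article's "22,000 miles" is the usual rounding.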

Advances in technology and computer chips have enabled smaller satellites to perform the same tasks as their predecessors. And constellations of hundreds or thousands of small satellites, orbiting at lower altitudes that are easier to reach, can mimic the capabilities once possible only from a fixed geosynchronous position.

“It’s really a shift in the market,” Mr. Beck said. “What once took the size of a car is now the size of a microwave oven, and with exactly the same kind of capabilities.”

Some companies already have launched swarms of satellites to make observations of Earth. Next up are the promised space-based internet systems such as OneWeb and SpaceX’s Starlink.

Until now, such small spacecraft typically hitched a rocket ride alongside a larger satellite. That trip is cheaper but inconvenient, because the schedule is set by the main customer. If the big satellite is delayed, the smaller ones stay on the ground, too. “You just can’t go to business like that,” Mr. Beck said.

The Electron, Mr. Beck said, is capable of lifting more than 60 percent of the spacecraft that headed to orbit last year. By contrast, space analysts wonder how much of a market exists for a behemoth like SpaceX’s Falcon Heavy, which had its first spectacular launch in February.

A Falcon Heavy can lift a payload 300 times heavier than a Rocket Lab Electron, but it costs $90 million compared to the Electron’s $5 million. Whereas SpaceX’s standard Falcon 9 rocket has no shortage of customers, the Heavy has only announced a half-dozen customers for the years to come.
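
Taking the article's round numbers at face value (500 pounds and $5 million for the Electron; 300 times the payload and $90 million for the Falcon Heavy), the per-pound economics still favor the big rocket. A quick sanity check:

```python
# Rough cost-per-pound comparison using the article's round figures.
electron_payload_lb = 500
electron_price = 5_000_000                      # dollars
heavy_payload_lb = 300 * electron_payload_lb    # "300 times heavier"
heavy_price = 90_000_000

electron_per_lb = electron_price / electron_payload_lb  # $10,000/lb
heavy_per_lb = heavy_price / heavy_payload_lb           # $600/lb

print(f"Electron: ${electron_per_lb:,.0f}/lb, Falcon Heavy: ${heavy_per_lb:,.0f}/lb")
```

What small-launch customers are buying, in other words, is not cheap mass but schedule control: a dedicated ride on their own timetable.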

The United States military — a primary customer for large launch vehicles — is also rethinking its spy satellites. The system would be more resilient, some analysts think, if its capabilities were spread among many, smaller satellites. Smaller satellites would be easier and quicker to replace, and an enemy would have a harder time destroying all of them.

[Image: The Rocket Lab production facility. Its rockets cost $5 million. Credit: Rocket Lab]

[Image: A Rocket Lab Rutherford engine test. Credit: Rocket Lab]

[Image: An Electron “Still Testing” rocket with Shaun D’Mello, Rocket Lab’s vice president of launch. Credit: Rocket Lab]

SpaceX could have cornered this market a decade ago.

Its first rocket, the Falcon 1, was designed to lift about 1,500 pounds. But after just two successful launches, SpaceX abandoned it, focusing on the much larger Falcon 9 to serve NASA’s needs to carry cargo and, eventually, astronauts to the International Space Station.

Jim Cantrell, one of the first employees of SpaceX, did not understand that decision and left the company. In 2015, he started Vector Launch, Inc., with headquarters in Tucson. Its goal is to make the Model T of rockets — small, cheap, mass-produced.

Vector claims that it can send its rockets into orbit from almost any place it can set up its mobile launch platform, which is basically a heavily modified trailer. That trailer was inspired by Mr. Cantrell’s hobby, auto racing, and many of the company’s employees come from the racing world, too.

The company is still aiming to meet its goal of getting the first of its Vector-R rockets to orbit this year, but Mr. Cantrell admitted that the schedule might slip again, into early 2019. The flight termination system — the piece of hardware that disables the rocket if anything goes wrong — is late in arriving.

“There are a lot of little things,” Mr. Cantrell said. “It drives you crazy.”

A prototype was planned for suborbital launch from Mojave, Calif., in September, but it encountered a glitch and the test was called off. The crew put the rocket in a racecar trailer and drove it to Vector’s testing site at Pinal Airpark, a small airport a half-hour outside of Tucson that is surrounded by 350 acres of shrubby desert.

Vector built test stands for firings of individual engines as well as completed rocket stages. During a recent visit to the site, engineers were troubleshooting the launch problems of both the prototype rocket and a developmental version of its upper-stage engine.

Soon the team will head to the Pacific Spaceport Complex, on Alaska’s Kodiak Island, for its first orbital launch. Next year, Mr. Cantrell said, the company hopes to put a dozen rockets into space.

Within a few years, he added, it could be launching 100 times a year, not just from Kodiak but also from Vandenberg Air Force Base in California and Wallops Island in Virginia, where Rocket Lab agreed in October to build its second launch complex. Vector is also looking for additional launch sites, including one by the Sea of Cortez in Mexico.

[Image: Space analysts wonder how much of a market exists for a behemoth like SpaceX’s Falcon Heavy, which first launched in February. Though it can lift far heavier payloads than the Electron, the Heavy has only a half-dozen announced customers. Credit: Thom Baur/Reuters]

Tom Markusic, another veteran of SpaceX’s early days, also sees an opportunity to help smaller satellites get to space.

“I didn’t feel there was a properly sized launch company to address that market,” he said.

Mr. Markusic said that the need for stronger antennas and cameras would ultimately prompt the construction of slightly bigger small satellites, and that it would be beneficial to be able to launch several at a time.

He started Firefly in 2014, aiming to build Alpha, a rocket that would lift a 900-pound payload to orbit.

The company grew to 150 employees and won a NASA contract. But in the uncertainty surrounding Britain’s exit from the European Union, a European investor backed out. An American investor also became skittish, Mr. Markusic said, after a SpaceX rocket exploded on the launchpad in 2016. Firefly shut down, and the employees lost their jobs.

At an auction, a Ukraine-born entrepreneur, Max Polyakov, one of Firefly’s investors, resurrected the company. Mr. Markusic took the opportunity to rethink the Alpha rocket, which is now able to launch more than 2,000 pounds.

“Alpha is basically Falcon 1 with some better technology,” he said.

Mr. Markusic said his competition was not the smaller rockets of Rocket Lab, Vector or Virgin Orbit but foreign competitors such as a government-subsidized rocket from India and commercial endeavors in China. But he complimented Rocket Lab.

“They’re ahead of everyone else,” he said. “I think they deserve a lot of credit.”

Firefly plans to launch its first Alpha rocket in December of 2019.

[Image: A LauncherOne rocket under the wing of a Virgin Orbit Boeing 747, which releases the rocket mid-air at 35,000 feet. Credit: Greg Robinson/Virgin Orbit, via Associated Press]

Not everyone is convinced that the market for small satellites will be as robust as predicted.

“That equation has weaknesses at every step,” said Carissa Christensen, founder and chief executive of Bryce Space and Technology, an aerospace consulting firm.

Three-quarters of venture capital-financed companies fail, she said, and the same will likely happen to the companies aiming to put up the small satellites. She also is skeptical that space-based internet will win against ground-based alternatives.

“Publicly, there’s no compelling business plans,” she said.

That means that the market for small rockets could implode for lack of business. She said a key to survival would be to tap into the needs of the United States government, especially the military. Virgin Orbit, Vector and Rocket Lab were the current front-runners, she said.

The small rocket companies also have to compete with Spaceflight Industries, a Seattle company that resells empty space on larger rockets that is not taken up by the main payload. In addition, Spaceflight is looking at purchasing entire rockets launched by other companies, including Rocket Lab, and selling the payload space to a range of companies heading to a similar orbit.

The first such flight, using a SpaceX Falcon 9, is to launch from Vandenberg Air Force Base this month carrying 70 satellites, in what the company compares to a bus ride into orbit.

Curt Blake, president of Spaceflight, said that both approaches can work. Buses are cheaper but less convenient, and sometimes the timely lift from a taxi is worth the added cost.

Mr. Anderson of Space Angels was also optimistic. “The difference today is how robust the sector is,” he said. “The sector today can handle failures.”

While the sector is getting off the ground, Rocket Lab doesn’t intend to waste any more time: it is hoping to quickly follow “It’s Business Time” with a second commercial launch next month, and then a third the month after that.

“We’re very focused on the next 100 rockets, not the next one rocket,” Mr. Beck said. “It’s one thing to go to orbit. It’s a whole other thing to go to orbit on a regular basis.”

Source: Rocket Lab’s Modest Launch Is Giant Leap for Small Rocket Business – The New York Times

Study opens route to ultra-low-power microchips

A new approach to controlling magnetism in a microchip could open the doors to memory, computing, and sensing devices that consume drastically less power than existing versions. The approach could also overcome some of the inherent physical limitations that have been slowing progress in this area until now.

Researchers at MIT and at Brookhaven National Laboratory have demonstrated that they can control the magnetic properties of a thin-film material simply by applying a small voltage. Changes in magnetic orientation made in this way remain in their new state without the need for any ongoing power, unlike today’s standard memory chips, the team has found.

The new finding is being reported today in the journal Nature Materials, in a paper by Geoffrey Beach, a professor of materials science and engineering and co-director of the MIT Materials Research Laboratory; graduate student Aik Jun Tan; and eight others at MIT and Brookhaven.

Source: Study opens route to ultra-low-power microchips | MIT News

HTTP-over-QUIC to be renamed HTTP/3

The HTTP-over-QUIC experimental protocol will be renamed to HTTP/3 and is expected to become the third official version of the HTTP protocol, officials at the Internet Engineering Task Force (IETF) have revealed.

This will become the second Google-developed experimental technology to become an official HTTP protocol upgrade after Google’s SPDY technology became the base of HTTP/2.

HTTP-over-QUIC is a rewrite of the HTTP protocol that uses Google’s QUIC instead of TCP (Transmission Control Protocol) as its base technology.

QUIC stands for “Quick UDP Internet Connections” and is, itself, Google’s attempt at rewriting the TCP protocol as an improved technology that combines HTTP/2, TCP, UDP, and TLS (for encryption), among many other things.

Google wants QUIC to slowly replace both TCP and UDP as the new protocol of choice for moving binary data across the Internet, and for good reason: tests have shown that QUIC is both faster and more secure thanks to its encrypted-by-default design (the current HTTP-over-QUIC protocol draft uses the newly released TLS 1.3 protocol).

QUIC was proposed as a draft standard at the IETF in 2015, and HTTP-over-QUIC, a re-write of HTTP on top of QUIC instead of TCP, was proposed a year later, in July 2016.

Since then, HTTP-over-QUIC support has been added to Chrome 29 and Opera 16, as well as to LiteSpeed web servers. While initially only Google’s servers supported HTTP-over-QUIC connections, this year Facebook also started adopting the technology.

In a mailing list discussion last month, Mark Nottingham, Chair of the IETF HTTP and QUIC Working Group, made the official request to rename HTTP-over-QUIC to HTTP/3 and to pass its development from the QUIC Working Group to the HTTP Working Group.

In subsequent discussions stretching over several days, Nottingham’s proposal was accepted by fellow IETF members, who gave their official seal of approval for HTTP-over-QUIC to become HTTP/3, the next major iteration of the HTTP protocol, the technology that underpins today’s World Wide Web.

According to web statistics portal W3Techs, as of November 2018, 31.2 percent of the top 10 million websites support HTTP/2, while only 1.2 percent support QUIC.
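
Servers advertise QUIC/HTTP-3 support to browsers through the Alt-Svc response header. A minimal parser for that header value (a sketch only; the full Alt-Svc grammar in RFC 7838 has more edge cases than this handles):

```python
def parse_alt_svc(header: str) -> dict:
    """Parse an Alt-Svc header value into {protocol-id: authority}.

    Example input, of the kind Google's servers have sent:
        'quic=":443"; ma=2592000, h2=":443"'
    Parameters such as ma (max-age) are ignored in this sketch.
    """
    services = {}
    for entry in header.split(","):
        first = entry.split(";")[0].strip()   # drop parameters like ma=...
        if "=" not in first:
            continue
        proto, _, authority = first.partition("=")
        services[proto.strip()] = authority.strip().strip('"')
    return services

print(parse_alt_svc('quic=":443"; ma=2592000, h2=":443"'))
# {'quic': ':443', 'h2': ':443'}
```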

Source: HTTP-over-QUIC to be renamed HTTP/3 | ZDNet

Google traffic routed to Russian and Chinese servers in BGP attack

People’s connections in the US to Google – including its cloud, YouTube, and other websites – were suddenly rerouted through Russia and into China in a textbook Border Gateway Protocol (BGP) hijacking attack.

That means folks in Texas, California, Ohio, and so on, firing up their browsers and software and connecting to Google and its services were instead talking to systems in Russia and China, and not servers belonging to the Silicon Valley giant. Netizens outside of America may also have been affected.

The Chocolate Factory confirmed that for a period on Monday afternoon, from 1312 to 1435 Pacific Time, connections to Google Cloud, its APIs, and websites were being diverted through IP addresses belonging to overseas ISPs. Sites and apps built on Google Cloud, such as Spotify, Nest, and Snapchat, were also brought down by the interception.

Specifically, network connectivity to Google was instead routed through TransTelekom in Russia (mskn17ra-lo1.transtelecom.net), and into a China Telecom gateway (ChinaTelecom-gw.transtelecom.net) that black-holed the packets. Both nodes have since stopped resolving to IP addresses.

The black-hole effect meant Google and YouTube, and apps and sites that relied on Google Cloud, appeared to be offline to netizens. It is possible that information not securely encrypted could have been intercepted by the aforementioned rogue nodes. However, our understanding is that, due to the black-hole effect, most if not all connections likely weren’t: TCP connections would fail to establish, and no information would be transferred. That’s the best-case scenario, at least.
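
Hijacks of this kind typically work by announcing a more-specific prefix than the legitimate one, which routers prefer under longest-prefix match. A simplified monitoring check using Python's ipaddress module (the prefixes here are illustrative placeholders, not a statement about the routes involved in this incident):

```python
import ipaddress

# Legitimate prefixes we expect to see announced (hypothetical examples).
EXPECTED = [ipaddress.ip_network("216.58.192.0/19")]

def looks_like_hijack(announced: str) -> bool:
    """Flag announcements that are more-specific subnets of an expected prefix.

    Longest-prefix match means a bogus /24 inside a legitimate /19 wins,
    which is exactly how traffic gets diverted in this class of incident.
    """
    net = ipaddress.ip_network(announced)
    return any(
        net.subnet_of(exp) and net.prefixlen > exp.prefixlen
        for exp in EXPECTED
    )

print(looks_like_hijack("216.58.192.0/24"))  # True: suspicious more-specific route
print(looks_like_hijack("216.58.192.0/19"))  # False: matches the expected prefix
```

Real route-monitoring services compare announcements against expected origin ASNs as well, not just prefix lengths.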

Source: OK Google, why was your web traffic hijacked and routed through China, Russia today? • The Register

UPDATE: Nigerian firm Main One Cable Co takes blame for routing Google traffic through China

How to Quit Google Completely

Despite all the convenience and quality of Google’s sprawling ecosystem, some users are fed up with the fishy privacy policies the company has recently implemented in Gmail, Chrome, and other services. To its credit, Google has made good changes in response to user feedback, but that doesn’t diminish the company’s looming shadow over the internet at large. If you’re ready to ditch Google, or even just reduce its presence in your digital life, this guide is here to help.

Since Google owns some of the best and most-used apps, websites, and internet services, making a clean break is difficult—but not impossible. We’re going to take a look at how to leave the most popular Google services behind, and how to keep Google from tracking your data. We’ve also spent serious time researching and testing great alternatives to Google’s offerings, so you can leave the Big G without having to buy new devices or swear fealty to another major corporation.

Source: How to Quit Google Completely

Windows 10 Pro goes Home as Microsoft fires up downgrade server

Microsoft’s activation servers appear to be on the blink this morning – some Windows 10 users woke up to find their Pro systems have, er, gone Home.

Twitter user Matt Wadley was one of the first out of the gate, complaining that following an update to the freshly released Insider build of next year’s Windows, his machine suddenly thought it had a Windows 10 Home licence.

While Insider build 18277, which appeared yesterday, contains lots of goodies, including improvements to Focus Assist to stop notifications bothering customers using apps in fullscreen mode, improvements to High-DPI, and an intriguing setting to allow users to manage camera and mic settings in Application Guard for Edge, it did not mention anything about borking the machine’s licence.

To the relief of the bruised Insider team, but no one else, it soon became apparent that the issue was not isolated to those brave souls trying out preview code, but affected many versions of Windows 10.

The vast majority of issues reported so far appear to be from users who upgraded from a previous version.

However, some users are also reporting issues with fresh installs.

According to those able to get hold of Microsoft’s call centres, the advice is to wait a while and the problem should fix itself, indicating something has gone awry on the licensing servers, and engineers are currently scrambling to fix it. Your machine should be usable in the meantime.

The Register contacted Microsoft to learn more, and we will update if there is a response.

Luckily, the problem does not look to be too widespread, which will be small comfort to affected users who, er, might want to join a domain, set up Hyper-V or all of the other goodies found in Windows 10 Pro. ®

Updated to add at 1220 UTC

While there remains no official statement from Microsoft on the problem, users have reported that the hardworking support operatives of the Windows giant have warned that there is indeed a “temporary issue” with its activation servers related to the Pro edition. Affected customers are advised to sit tight and wait for a fix. An estimate for when that might be? Anywhere from one to two days. Oh dear.

Updated to add at 2150 UTC

Folks are reporting the licensing issues are fixed. Click on Troubleshoot in the activation error window, and it should resolve itself. You may have to reboot and run Windows Update to nudge it along, we’re told.

“We’re working to restore product activations for the limited number of affected Windows 10 Pro customers,” Microsoft senior director Jeff Jones told us earlier this evening.

Final update at 0100 UTC

If you can’t get rid of the activation error, don’t worry, it should clear by itself, a Microsoft spokesman said – now that Redmond’s techies have sufficiently bashed their machines with spanners:

A limited number of customers experienced an activation issue that our engineers have now addressed. Affected customers will see resolution over the next 24 hours as the solution is applied automatically. In the meantime, they can continue to use Windows 10 Pro as usual.

Source: Windows 10 Pro goes Home as Microsoft fires up downgrade server

Having your OS depend on an external activation server is not a good idea…

Google is using AI to help The New York Times digitize 5 million historical photos

The New York Times doesn’t keep bodies in its “morgue” — it keeps pictures. In a basement under its Times Square office, stuffed into cabinets and drawers, the Times stores between 5 million and 7 million images, along with information about when they were published and why. Now, the paper is working with Google to digitize its huge collection.

The morgue (as the basement storage area is known) contains pictures going back to the 19th century, many of which exist nowhere else in the world. “[It’s] a treasure trove of perishable documents,” says the NYT’s chief technology officer Nick Rockwell. “A priceless chronicle of not just The Times’s history, but of nearly more than a century of global events that have shaped our modern world.”

That’s why the company has hired Google, which will use its machine vision smarts to not only scan the hand- and type-written notes attached to each image, but categorize the semantic information they contain (linking data like locations and dates). Google says the Times will also be able to use its object recognition tools to extract even more information from the photos, making them easier to catalog and resurface for future use.
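
The kind of semantic linking described, such as pulling a publication date out of a typed caption, can be sketched with a simple pattern match. This is a toy illustration on a hypothetical caption format; Google's actual pipeline uses its machine-vision and language services rather than regexes:

```python
import re
from datetime import datetime

def extract_publication_date(caption: str):
    """Pull a 'Month DD, YYYY' date out of OCR'd caption text, if present."""
    match = re.search(r"([A-Z][a-z]+ \d{1,2}, \d{4})", caption)
    if not match:
        return None
    try:
        return datetime.strptime(match.group(1), "%B %d, %Y").date()
    except ValueError:  # matched something date-like but not a real date
        return None

print(extract_publication_date("Penn Station, published June 25, 1942."))
# 1942-06-25
```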

Source: Google is using AI to help The New York Times digitize 5 million historical photos – The Verge

The US Military Just Publicly Dumped Russian Government Malware Online

This week, US Cyber Command (CYBERCOM), a part of the military tasked with hacking and cybersecurity focused missions, started publicly releasing unclassified samples of adversaries’ malware it has discovered.

CYBERCOM says the move is to improve information sharing among the cybersecurity community, but in some ways it could be seen as a signal to those who hack US systems: we may release your tools to the wider world.

“This is intended to be an enduring and ongoing information sharing effort, and it is not focused on any particular adversary,” Joseph R. Holstead, acting director of public affairs at CYBERCOM told Motherboard in an email.

On Friday, CYBERCOM uploaded multiple files to VirusTotal, a Google-owned search engine and repository for malware. Once uploaded, VirusTotal users can download the malware, see which anti-virus or cybersecurity products likely detect it, and see links to other pieces of malicious code.

One of the two samples CYBERCOM distributed on Friday is marked as coming from APT28, a Russian government-linked hacking group, by several different cybersecurity firms, according to VirusTotal. Those include Kaspersky Lab, Symantec, and Crowdstrike, among others. APT28 is also known as Sofacy and Fancy Bear.

Adam Meyers, vice president of intelligence at CrowdStrike, said that the sample did appear new, but the company’s tools detected it as malicious upon first contact. Kurt Baumgartner, principal security researcher at Kaspersky Lab, told Motherboard in an email that the sample “was known to Kaspersky Lab in late 2017,” and was used in attacks in Central Asia and Southeastern Europe at the time.
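
VirusTotal indexes samples by cryptographic hash, so checking whether a local file matches a published sample doesn't require uploading it; you can compare digests. A standard streamed-hashing sketch with Python's hashlib (the digest in the sanity check is the well-known SHA-256 of empty input):

```python
import hashlib

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Hex SHA-256 digest, streamed so large samples aren't loaded into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Sanity check against the well-known digest of empty input:
assert hashlib.sha256(b"").hexdigest() == (
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
)
```

The resulting hex string can be pasted straight into VirusTotal's search box to see existing detections without sharing the file itself.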

Source: The US Military Just Publicly Dumped Russian Government Malware Online – Motherboard

OpenAI releases learning site for Reinforcement Learning: Spinning Up in Deep RL!

Welcome to Spinning Up in Deep RL! This is an educational resource produced by OpenAI that makes it easier to learn about deep reinforcement learning (deep RL).

For the unfamiliar: reinforcement learning (RL) is a machine learning approach for teaching agents how to solve tasks by trial and error. Deep RL refers to the combination of RL with deep learning.

This module contains a variety of helpful resources, including:

  • a short introduction to RL terminology, kinds of algorithms, and basic theory,
  • an essay about how to grow into an RL research role,
  • a curated list of important papers organized by topic,
  • a well-documented code repo of short, standalone implementations of key algorithms,
  • and a few exercises to serve as warm-ups.

Why We Built This

One of the single most common questions that we hear is

If I want to contribute to AI safety, how do I get started?

Source: Welcome to Spinning Up in Deep RL! — Spinning Up documentation

Artificial intelligence predicts Alzheimer’s years before diagnosis

Timely diagnosis of Alzheimer’s disease is extremely important, as treatments and interventions are more effective early in the course of the disease. However, early diagnosis has proven to be challenging. Research has linked the disease process to changes in metabolism, as shown by glucose uptake in certain regions of the brain, but these changes can be difficult to recognize.

[…]

The researchers trained the deep learning algorithm on a special imaging technology known as 18-F-fluorodeoxyglucose positron emission tomography (FDG-PET). In an FDG-PET scan, FDG, a radioactive glucose compound, is injected into the blood. PET scans can then measure the uptake of FDG in brain cells, an indicator of metabolic activity.

The researchers had access to data from the Alzheimer’s Disease Neuroimaging Initiative (ADNI), a major multi-site study focused on clinical trials to improve prevention and treatment of this disease. The ADNI dataset included more than 2,100 FDG-PET brain images from 1,002 patients. Researchers trained the deep learning algorithm on 90 percent of the dataset and then tested it on the remaining 10 percent of the dataset. Through deep learning, the algorithm was able to teach itself metabolic patterns that corresponded to Alzheimer’s disease.

Finally, the researchers tested the algorithm on an independent set of 40 imaging exams from 40 patients that it had never studied. The algorithm achieved 100 percent sensitivity at detecting the disease an average of more than six years prior to the final diagnosis.
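
"100 percent sensitivity" means no false negatives among the 40 held-out exams. The metric itself is simple to compute; this is illustrative code, not the study's pipeline:

```python
def sensitivity(y_true, y_pred):
    """True-positive rate: of the truly positive cases, how many were caught.

    y_true / y_pred are sequences of 0/1 labels (1 = Alzheimer's disease).
    """
    positives = [(t, p) for t, p in zip(y_true, y_pred) if t == 1]
    if not positives:
        raise ValueError("sensitivity is undefined without positive cases")
    caught = sum(1 for _, p in positives if p == 1)
    return caught / len(positives)

# A perfect screen catches every true case, even if it raises false alarms:
print(sensitivity([1, 1, 0, 1, 0], [1, 1, 1, 1, 0]))  # 1.0
```

Note that sensitivity alone says nothing about false alarms: a test that flags everyone also scores 100 percent sensitivity, which is why specificity is usually reported alongside it.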

“We were very pleased with the algorithm’s performance,” Dr. Sohn said. “It was able to predict every single case that advanced to Alzheimer’s disease.”

Source: Artificial intelligence predicts Alzheimer’s years before diagnosis — ScienceDaily

Apple Blocks Linux From Booting and makes Windows hard to boot On New Hardware With T2 Security Chip

Apple’s new-generation Macs come with a new so-called Apple T2 security chip that’s supposed to provide a secure enclave co-processor responsible for powering a series of security features, including Touch ID. At the same time, this security chip enables the secure boot feature on Apple’s computers, and by the looks of things, it’s also responsible for a series of new restrictions that Linux users aren’t going to like.

The issue seems to be that Apple has included security certificates for its own and Microsoft’s operating systems (to allow running Windows via Boot Camp), but not the Microsoft certificate used to sign bootloaders for systems such as Linux. Disabling Secure Boot can overcome this, but doing so also disables access to the machine’s internal storage, making installation of Linux impossible.

Source: Apple Blocks Linux From Booting On New Hardware With T2 Security Chip – Slashdot

Which seems strange, considering much of Apple’s computer growth seems to come from Linux and Windows folks wanting to run on outdated Apple hardware.

Virtualbox 0-day posted because Oracle won’t update, allows you to execute on the underlying server

I like VirtualBox and it has nothing to do with why I publish a 0day vulnerability. The reason is my disagreement with contemporary state of infosec, especially of security research and bug bounty:

  1. Waiting half a year for a vulnerability to be patched is considered fine.
  2. In the bug bounty field, these are considered fine:
    1. Waiting more than a month for a submitted vulnerability to be verified and a decision made on whether to buy it.
    2. Changing the decision on the fly. Today you find out the bug bounty program will buy bugs in a piece of software; a week later you come with bugs and exploits and receive “not interested”.
    3. Having no precise list of the software a bug bounty program is interested in buying bugs for. Handy for bug bounty programs, awkward for researchers.
    4. Having no precise lower and upper bounds on vulnerability prices. Many things influence a price, but researchers need to know what is worth working on and what is not.
  3. Delusions of grandeur and marketing bullshit: naming vulnerabilities and creating websites for them; holding a thousand conferences a year; exaggerating the importance of your own job as a security researcher; considering yourself “a world saviour”. Come down, Your Highness.

I’m exhausted by the first two, therefore my move is full disclosure. Infosec, please move forward.

How to protect yourself

Until the patched VirtualBox build is out you can change the network card of your virtual machines to PCnet (either of two) or to Paravirtualized Network. If you can’t, change the mode from NAT to another one. The former way is more secure.
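
Switching the NIC type can be scripted with VBoxManage. A hypothetical helper that builds (but does not run) the command; `Am79C970A` and `Am79C973` are VirtualBox's two PCnet variants and `virtio` its paravirtualized adapter, but check `VBoxManage modifyvm --help` on your install before relying on these names:

```python
# Builds a VBoxManage command to swap a VM's first NIC away from the
# vulnerable E1000 (82540EM) emulation. The VM must be powered off.
SAFE_NIC_TYPES = {"Am79C970A", "Am79C973", "virtio"}  # PCnet x2, paravirtualized

def nic_swap_command(vm_name: str, nic_type: str = "virtio", adapter: int = 1):
    if nic_type not in SAFE_NIC_TYPES:
        raise ValueError(f"{nic_type!r} is not one of the suggested workarounds")
    return ["VBoxManage", "modifyvm", vm_name, f"--nictype{adapter}", nic_type]

print(nic_swap_command("ubuntu-guest"))
# ['VBoxManage', 'modifyvm', 'ubuntu-guest', '--nictype1', 'virtio']
```

The list form can be handed to `subprocess.run` once you have confirmed the flag names against your VirtualBox version.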

Introduction

A default VirtualBox virtual network device is the Intel PRO/1000 MT Desktop (82540EM), and the default network mode is NAT. We will refer to it as the E1000.

The E1000 has a vulnerability allowing an attacker with root/administrator privileges in a guest to escape to host ring 3. From there, the attacker can use existing techniques to escalate privileges to ring 0 via /dev/vboxdrv.

Exploit

The exploit is a Linux kernel module (LKM) to be loaded in a guest OS. The Windows case would require a driver differing from the LKM only by an initialization wrapper and kernel API calls.

Elevated privileges are required to load a driver in both OSs. This is common and isn’t considered an insurmountable obstacle. Look at the Pwn2Own contest, where researchers use exploit chains: a browser that has opened a malicious website in the guest OS is exploited, a browser sandbox escape is made to gain full ring 3 access, and an operating system vulnerability is exploited to pave the way to ring 0, from where there is everything you need to attack a hypervisor from the guest OS. The most powerful hypervisor vulnerabilities are, for sure, those that can be exploited from guest ring 3. VirtualBox also contains code that is reachable without guest root privileges, and it’s mostly not audited yet.

The exploit is 100% reliable. That means it either works always or never, whether because of mismatched binaries or other, more subtle reasons I didn’t account for. It works at least on Ubuntu 16.04 and 18.04 x86_64 guests with the default configuration.

 