Machine learning-detected signal predicts time to earthquake

Machine-learning research published in two related papers today in Nature Geoscience reports the detection of seismic signals that accurately predict the Cascadia fault’s slow slippage, a type of failure observed to precede large earthquakes in other subduction zones.

Los Alamos National Laboratory researchers applied machine learning to analyze Cascadia data and discovered the megathrust broadcasts a constant tremor, a fingerprint of the fault’s displacement. More importantly, they found a direct parallel between the loudness of the fault’s acoustic signal and its physical changes. Cascadia’s groans, previously discounted as meaningless noise, foretold its fragility.

“Cascadia’s behavior was buried in the data. Until machine learning revealed precise patterns, we all discarded the continuous signal as noise, but it was full of rich information. We discovered a highly predictable sound pattern that indicates slippage and fault failure,” said Los Alamos scientist Paul Johnson. “We also found a precise link between the fragility of the fault and the signal’s strength, which can help us more accurately predict a megaquake.”
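The papers describe a general recipe: compute statistical features (such as amplitude or variance) over sliding windows of the continuous tremor signal and regress them against the fault’s displacement or time to failure. The sketch below is a minimal illustration of that idea, not the Los Alamos team’s code; the arrays, features, and random-forest model are all placeholder assumptions.

```python
# Minimal sketch (not the Los Alamos code): regress time-to-failure from
# rolling statistics of a continuous tremor signal with a random forest.
# `signal_windows` and `time_to_failure` are placeholder stand-in arrays.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_windows, window_len = 2000, 1024
signal_windows = rng.normal(size=(n_windows, window_len))  # stand-in seismic windows
time_to_failure = rng.uniform(0, 10, size=n_windows)       # stand-in target (arbitrary units)

# Simple per-window features capturing the signal's "loudness" and shape.
features = np.column_stack([
    signal_windows.std(axis=1),                         # amplitude / loudness
    np.abs(signal_windows).mean(axis=1),                 # mean absolute amplitude
    np.percentile(np.abs(signal_windows), 95, axis=1),   # strong-tremor level
])

X_train, X_test, y_train, y_test = train_test_split(
    features, time_to_failure, test_size=0.25, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("Held-out R^2:", model.score(X_test, y_test))
```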

Read more at: https://phys.org/news/2018-12-machine-learning-detected-earthquake.html#jCp

Source: Machine learning-detected signal predicts time to earthquake

Google isn’t the company that we should have handed the Web over to: why MS switching to Chromium is a bad idea

With Microsoft’s decision to end development of its own Web rendering engine and switch to Chromium, control over the Web has functionally been ceded to Google. That’s a worrying turn of events, given the company’s past behavior.

[…]

Google is already a company that exercises considerable influence over the direction of the Web’s development. By owning both the most popular browser, Chrome, and some of the most-visited sites on the Web (in particular the namesake search engine, YouTube, and Gmail), Google has on a number of occasions used its might to deploy proprietary tech and put the rest of the industry in the position of having to catch up.

[…]

This is a company that, time and again, has tried to push the Web into a Google-controlled proprietary direction to improve the performance of Google’s online services when used in conjunction with Google’s browser, consolidating Google’s market positioning and putting everyone else at a disadvantage. Each time, pushback has come from the wider community, and so far, at least, the result has been industry standards that wrest control from Google’s hands. This action might already provoke doubts about the wisdom of handing effective control of the Web’s direction to Google, but at least a case could be made that, in the end, the right thing was done.

But other situations have had less satisfactory resolutions. YouTube has been a particular source of problems. Google controls a large fraction of the Web’s streaming video, and the company has, on a number of occasions, made changes to YouTube that make it worse in Edge and/or Firefox. Sometimes these changes have improved the site experience in Chrome, but even that isn’t always the case.

A person claiming to be a former Edge developer has today described one such action. For no obvious reason, Google changed YouTube to add a hidden, empty HTML element that overlaid each video. This element disabled Edge’s fastest, most efficient hardware-accelerated video decoding. It hurt Edge’s battery-life performance and took it below Chrome’s. The change didn’t improve Chrome’s performance and didn’t appear to serve any real purpose; it just hurt Edge, allowing Google to claim that Chrome’s battery life was actually superior to Edge’s. Microsoft asked Google if the company could remove the element, to no avail.

The latest version of Edge addresses the YouTube issue and restores Edge’s performance. But when the company talks of having to do extra work to ensure EdgeHTML is compatible with the Web, this is the kind of thing that Microsoft has been forced to do.

[…]

Microsoft’s decision both gives Google an ever-larger slice of the pie and weakens Microsoft’s position as an opposing voice. Even with Edge and Internet Explorer having a diminished share of the market, Microsoft has retained some sway; its IIS Web server commands a significant Web presence, and there’s still value in having new protocols built in to Windows, as it increases their accessibility to software developers.

But now, Microsoft is committed to shipping and supporting whatever proprietary tech Google wants to develop, whether Microsoft likes it or not. Microsoft has been very explicit that its adoption of Chromium is to ensure maximal Chrome compatibility, and the company says that it is developing new engineering processes to ensure that it can rapidly integrate, test, and distribute any changes from upstream—it doesn’t ever want to be in the position of substantially lagging behind Google’s browser.

[…]

Web developers have historically only bothered with such trivia as standards compliance and testing their pages in multiple browsers when the market landscape has forced them to. This is what made Firefox’s early years so painful: most developers tested in Internet Explorer and nothing else, leaving Firefox compatibility to chance. As Firefox, and later Chrome, rose to challenge Internet Explorer’s dominance, cross-browser testing became essential, and standards adherence became more valuable.

With Chrome, Firefox, and Edge all as going concerns, a fair amount of discipline is imposed on Web developers. But with Edge removed and Chrome taking a large majority of the market, making the effort to support Firefox becomes more expensive.

Mozilla CEO Chris Beard fears that this consolidation could make things harder for Mozilla—an organization that exists to ensure that the Web remains a competitive landscape that offers meaningful options and isn’t subject to any one company’s control. Mozilla’s position is already tricky, dependent as it is on Google’s funding.

[…]

By relegating Firefox to being the sole secondary browser, Microsoft has just made it that much harder to justify making sites work in Firefox. The company has made designing for Chrome and ignoring everything else a bit more palatable, and Mozilla’s continued existence is now that bit more marginal. Microsoft’s move puts Google in charge of the direction of the Web’s development. Google’s track record shows it shouldn’t be trusted with such a position.

Source: Google isn’t the company that we should have handed the Web over to | Ars Technica

Google’s Feature for Predicting Flight Delays

Google is adding its flight delay predictions feature to the Google Assistant.

That means starting this holiday season, you should be able to ask the Google Assistant if your flight is on time and get a response showing the status of your flight, the length of a delay (if there is one), and even the cause (assuming that info is available).

“Over the next few weeks,” Google says, its flight delay predictor will also start notifying you in cases where its system is 85 percent confident, a figure it arrives at by looking at data from past flight records and combining that with a bit of machine-learning smarts to determine whether your flight might be late. That leaves some room for error, so it’s also important to note that even when Google predicts that your flight is delayed, it may still recommend that you show up to the airport on your normal schedule.
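Google hasn’t published how the predictor works beyond “historical data plus machine learning,” but the behavior described (only surfacing a delay prediction when the system clears an 85 percent confidence bar) can be sketched roughly as below. The classifier, features, and helper function are illustrative assumptions, not Google’s implementation.

```python
# Rough sketch (not Google's system): a classifier trained on historical flight
# records that only reports a delay when its confidence clears the 85% bar.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
# Hypothetical features: hour of day, route congestion, weather score, carrier delay rate.
X = rng.random((5000, 4))
y = (X[:, 1] + X[:, 2] + rng.normal(0, 0.3, size=5000) > 1.2).astype(int)  # 1 = delayed

clf = GradientBoostingClassifier().fit(X, y)

CONFIDENCE_THRESHOLD = 0.85  # the 85 percent figure mentioned in the article

def delay_message(flight_features):
    """Return a delay warning only when the model is sufficiently confident."""
    p_delay = clf.predict_proba([flight_features])[0, 1]
    if p_delay >= CONFIDENCE_THRESHOLD:
        return f"Likely delayed (confidence {p_delay:.0%})"
    return "No confident prediction -- plan to arrive on your normal schedule"

print(delay_message(X[0]))
```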

Still, in the space of a year, Google seems to have upped its confidence threshold for predicted delays from 80 to 85 percent.

Source: Google’s Feature for Predicting Flight Delays Actually Sounds Useful Now

‘Farout,’ the most-distant solar system object discovered yet

For the first time, an object in our solar system has been found more than 100 times farther than Earth is from the sun.

The International Astronomical Union’s Minor Planet Center announced the discovery Monday, calling the object 2018 VG18. But the researchers who found it are calling it “Farout.”

They believe the spherical object is a dwarf planet more than 310 miles in diameter, with a pinkish hue. That color has been associated with objects that are rich in ice, and given its distance from the sun, that isn’t hard to believe. Its slow orbit probably takes more than 1,000 years to make one trip around the sun, the researchers said.

The distance between the Earth and the sun is an AU, or astronomical unit — the equivalent of about 93 million miles. Farout is 120 AU from the sun. Eris, the next most distant object known, is 96 AU from the sun. For reference, Pluto is 34 AU away.

The object was found by the Carnegie Institution for Science’s Scott S. Sheppard, the University of Hawaii’s David Tholen and Northern Arizona University’s Chad Trujillo — and it’s not their first discovery.

The team has been searching for a super-Earth-size planet on the edge of our solar system, known as Planet Nine or Planet X, since 2014. They first suggested the existence of this possible planet in 2014 after finding “Biden” at 84 AU. Along the way, they have discovered more distant solar system objects suggesting that the gravity of something massive is influencing their orbits.
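To put those numbers in more familiar units, here is a short conversion using the article’s rounded figure of 93 million miles per AU (the object names and distances are taken from the text above):

```python
# Convert the distances quoted above from astronomical units to miles,
# using the article's rounded figure of 93 million miles per AU.
AU_MILES = 93_000_000

for name, au in [("Pluto", 34), ("Biden", 84), ("Eris", 96), ("Farout", 120)]:
    print(f"{name}: {au} AU ≈ {au * AU_MILES / 1e9:.1f} billion miles")
```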

Source: ‘Farout,’ the most-distant solar system object discovered – CNN

Researchers demonstrate teleportation using on-demand photons from quantum dots

A team of researchers from Austria, Italy and Sweden has successfully demonstrated teleportation using on-demand photons from quantum dots. In their paper published in the journal Science Advances, the group explains how they accomplished this feat and how it applies to future quantum communications networks.

Scientists and many others are very interested in developing truly secure quantum communications networks—it is believed that such networks will be safe from hacking or eavesdropping due to their very nature. But, as the researchers with this new effort point out, there are still some problems standing in the way. One of these is the difficulty in amplifying signals. One way to get around this problem, they note, is to generate photons on demand as part of a quantum repeater—this helps to effectively handle the high clock rates. In this new effort, they have done just that, using semiconductor quantum dots.

Prior work surrounding the possibility of using quantum dots has shown that they are a feasible way to demonstrate teleportation, but only under certain conditions, none of which allowed for on-demand applications. Because of that, they have not been considered a push-button technology. In this new effort, the researchers overcame this problem by creating quantum dots that were highly symmetrical, using an etching method to create the hole pairs in which the quantum dots develop. The process they used was called an XX (biexciton)–X (exciton) cascade. They then employed a dual-pulsed excitation scheme to populate the desired XX state (after two pairs shed photons, they retained their entanglement). Doing so allowed for the production of on-demand single photons suitable for use in teleportation. The dual-pulsed excitation scheme was critical to the process, the team notes, because it minimized re-excitation.
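For readers unfamiliar with what “teleportation” means operationally, the experiment implements the textbook single-qubit protocol: a Bell-state measurement on the input photon and one half of an entangled pair, followed by a classically signalled correction on the other half. The numerical sketch below illustrates that protocol under ideal, noiseless assumptions; it is a generic illustration, not the authors’ setup or analysis.

```python
# Generic illustration of single-qubit quantum teleportation (the textbook
# protocol), assuming ideal, noiseless entanglement; not the authors' code.
import numpy as np

# Arbitrary input state |psi> = a|0> + b|1> on qubit 0.
a, b = 0.6, 0.8j
psi = np.array([a, b])

# Entangled resource: Bell pair (|00> + |11>)/sqrt(2) on qubits 1 and 2.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)

# Full three-qubit state (qubit 0 is the most significant index).
state = np.kron(psi, bell)

# Single-qubit gates and a CNOT with the first qubit as control.
I = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# Bell-state measurement circuit on qubits 0 and 1: CNOT(0 -> 1), then H on qubit 0.
state = np.kron(CNOT, I) @ state
state = np.kron(np.kron(H, I), I) @ state

# Measure qubits 0 and 1: sample an outcome and project onto it.
rng = np.random.default_rng(42)
amplitudes = state.reshape(2, 2, 2)
p_outcome = (np.abs(amplitudes) ** 2).sum(axis=2).ravel()  # P(m0, m1)
outcome = rng.choice(4, p=p_outcome)
m0, m1 = divmod(outcome, 2)
remaining = amplitudes[m0, m1].copy()
remaining /= np.linalg.norm(remaining)

# Classically signalled correction on qubit 2: X if m1 == 1, then Z if m0 == 1.
if m1:
    remaining = X @ remaining
if m0:
    remaining = Z @ remaining

print("Teleported state matches the input:", np.allclose(remaining, psi))
```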

The researchers tested their process first on subjective inputs and then on different quantum dots, proving that it could work across a broad range of applications. They followed that up by creating a framework that other researchers could use as a guide in replicating their efforts. But they also acknowledged that there is still more work to be done (mostly in raising the clock rates) before the technique could be used in real-world applications. They expect it will be just a few more years.

Read more at: https://phys.org/news/2018-12-teleportation-on-demand-photons-quantum-dots.html#jCp

Source: Researchers demonstrate teleportation using on-demand photons from quantum dots