Fur, Feathers, Hair, and Scales May Have the Same Ancient Origin

New research shows that the processes involved in hair, fur, and feather growth are remarkably similar to the way scales grow on fish—a finding that points to a single, ancient origin of these protective coverings.

When our very early ancestors transitioned from sea to land some 385 million years ago, they brought their armor-like scales along with them. But instead of wasting away like worthless vestigial organs, these scales retained their utility at the genetic level, providing a springboard for adaptive skin-borne characteristics. Over time, scales turned into feathers, fur, and hair.

We know this from the fossil record, but as new research published this week in the science journal eLife shows, we also know it because the molecular processes required to grow hair, fur, and feathers are remarkably similar to the ones involved in the development of fish scales.

Source: Fur, Feathers, Hair, and Scales May Have the Same Ancient Origin

AI can untangle the jumble of neurons packed in brain scans

AI can help neurologists automatically map the connections between different neurons in brain scans, a tedious task that can take hundreds of thousands of hours.

In a paper published in Nature Methods, AI researchers from Google collaborated with scientists from the Max Planck Institute of Neurobiology to inspect the brain of a Zebra Finch, a small Australian bird renowned for its singing.

Although the contents of their craniums are small, Zebra Finches aren’t birdbrains: their connectome is densely packed with neurons. To study the connections, scientists examine a slice of the brain under an electron microscope. It requires high resolution to make out all the different neurites, the thin projections extending from neurons.

The neural circuits then have to be reconstructed by tracing out the cells. There are several methods that help neurologists flesh these out, but the error rates are high and it still requires human expertise to look over the maps. It’s a painstaking chore: a cubic millimetre of brain tissue can generate over 1,000 terabytes of data.

“A recent estimate put the amount of human labor needed to reconstruct a 100³-µm³ volume at more than 100,000 h, even with an optimized pipeline,” according to the paper.

Now, AI researchers have developed a new method using a recurrent convolutional neural network known as a “flood-filling network”. It’s essentially an algorithm that finds the edges of a neuron path and fleshes out the space in between to build up a map of the different connections.


“The algorithm is seeded at a specific pixel location and then iteratively ‘fills’ a region using a recurrent convolutional neural network that predicts which pixels are part of the same object as the seed,” said Viren Jain and Michal Januszewski, co-authors of the paper and AI researchers at Google.
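That seeded, iterative fill can be sketched as a toy flood fill, with a plain callable standing in where the actual system uses its recurrent CNN (every name here is illustrative, not the authors’ code):

```python
import numpy as np

def flood_fill_segment(image, seed, same_object_prob, threshold=0.9):
    """Toy flood fill: grow a mask from `seed`, adding neighbouring
    pixels that a predictor deems part of the same object.

    In the paper the predictor is a recurrent CNN that re-examines
    the image together with the current mask; here it is any
    callable (image, pixel) -> probability supplied by the caller.
    """
    mask = np.zeros(image.shape, dtype=bool)
    frontier = [seed]
    mask[seed] = True
    while frontier:
        y, x = frontier.pop()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < image.shape[0] and 0 <= nx < image.shape[1]
                    and not mask[ny, nx]
                    and same_object_prob(image, (ny, nx)) >= threshold):
                mask[ny, nx] = True
                frontier.append((ny, nx))
    return mask

# usage: segment the bright 3x3 blob around seed (2, 2)
img = np.zeros((5, 5))
img[1:4, 1:4] = 1.0
mask = flood_fill_segment(img, (2, 2), lambda im, p: im[p])
print(mask.sum())  # 9 pixels filled
```

The learned predictor is what separates this from classic flood fill: because it sees the evolving mask, it can follow a thin neurite through noise that a fixed intensity threshold would break on.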

The flood-filling network was trained using supervised learning on a small, fully annotated region of a Zebra Finch brain. It’s difficult to measure the accuracy of the network directly, so instead the researchers use an “expected run length” (ERL) metric, which measures how far the network can trace a neuron before making a mistake.
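In simplified form (this captures the general idea, not the paper’s exact formula), ERL is a length-weighted average: the error-free run length you’d expect at a point chosen uniformly along the traced skeletons:

```python
def expected_run_length(run_lengths):
    """Length-weighted mean of error-free run lengths (in, say, µm).

    A point picked uniformly along the skeleton is more likely to
    land on a long run, so each run is weighted by its own length.
    """
    total = float(sum(run_lengths))
    return sum(r * r for r in run_lengths) / total

# three error-free runs of 10, 20 and 70 µm:
print(expected_run_length([10, 20, 70]))  # 54.0
```

Note how the single 70-µm run dominates: ERL rewards long uninterrupted traces far more than a plain average of run lengths (which here would be 33.3) would.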

Flood-filling networks have a longer ERL than other deep learning methods that have been tested on the same dataset. The algorithms were better than humans at identifying dendritic spines, tiny threads jutting off dendrites that help transmit electrical signals to cells. But their recall, a measure of the completeness of the map, was much lower than that of maps traced by a professional neurologist.

Another significant disadvantage of this approach is the high computational cost. “For example, a single pass of the fully convolutional FFN over a full volume is an order of magnitude more computationally expensive than the more traditional 3D convolution-pooling architecture in the baseline approach we used for comparison,” the researchers said.

Source: AI can untangle the jumble of neurons packed in brain scans • The Register

The SIM Hijackers: how hackers take your phone number and then all of your accounts

In the buzzing underground market for stolen social media and gaming handles, a short, unique username can go for between $500 and $5,000, according to people involved in the trade and a review of listings on a popular marketplace. Several hackers involved in the market claimed that the Instagram account @t, for example, recently sold for around $40,000 worth of Bitcoin.

By hijacking Rachel’s phone number, the hackers were able to seize not only Rachel’s Instagram, but her Amazon, eBay, PayPal, Netflix, and Hulu accounts too. None of the security measures Rachel took to secure some of those accounts, including two-factor authentication, mattered once the hackers took control of her phone number.

In February, T-Mobile sent a mass text warning customers of an “industry-wide” threat. Criminals, the company said, are increasingly utilizing a technique called the “port-out scam” to target and steal people’s phone numbers. The scam, also known as SIM swapping or SIM hijacking, is simple but tremendously effective.

First, criminals call a cell phone carrier’s tech support number pretending to be their target. They explain to the company’s employee that they “lost” their SIM card, requesting their phone number be transferred, or ported, to a new SIM card that the hackers themselves already own. With a bit of social engineering—perhaps by providing the victim’s Social Security Number or home address (which is often available from one of the many data breaches that have happened in the last few years)—the criminals convince the employee that they really are who they claim to be, at which point the employee ports the phone number to the new SIM card.

Game over.

“With someone’s phone number,” a hacker who does SIM swapping told me, “you can get into every account they own within minutes and they can’t do anything about it.”

Source: The SIM Hijackers – Motherboard

Top Voting Machine Vendor Admits It Installed Remote-Access Software on Systems Sold to States

Remote-access software and modems on election equipment ‘is the worst decision for security short of leaving ballot boxes on a Moscow street corner.’

The nation’s top voting machine maker has admitted in a letter to a federal lawmaker that the company installed remote-access software on election-management systems it sold over a period of six years, raising questions about the security of those systems and the integrity of elections that were conducted with them.

In a letter sent to Sen. Ron Wyden (D-OR) in April and obtained recently by Motherboard, Election Systems and Software acknowledged that it had “provided pcAnywhere remote connection software … to a small number of customers between 2000 and 2006,” which was installed on the election-management system ES&S sold them.

The statement contradicts what the company told me and fact checkers for a story I wrote for the New York Times in February. At that time, a spokesperson said ES&S had never installed pcAnywhere on any election system it sold. “None of the employees, … including long-tenured employees, has any knowledge that our voting systems have ever been sold with remote-access software,” the spokesperson said.

ES&S did not respond on Monday to questions from Motherboard, and it’s not clear why the company changed its response between February and April. Lawmakers, however, have subpoena powers that can compel a company to hand over documents or provide sworn testimony on a matter lawmakers are investigating, and a statement made to lawmakers that is later proven false can have greater consequence for a company than one made to reporters.

Source: Top Voting Machine Vendor Admits It Installed Remote-Access Software on Systems Sold to States – Motherboard

That is incredibly poor, especially with all the talk of hackable voting machines.

Blue Origin pushed its rocket ‘to its limits’ with another successful high-altitude emergency abort test

Update July 18th, 11:35AM ET: Blue Origin pulled off another successful test launch today, landing both the New Shepard rocket and capsule after flight. The company ignited the capsule’s emergency motor after it had separated from the rocket, pushing the spacecraft up to a top altitude of around 74 miles — a new record for Blue Origin. The firing also caused the capsule to sustain up to 10 Gs during the test, but Blue Origin host Ariane Cornell said “that is well within what humans can take, especially for such a short spurt of time.”

[…]

Blue Origin will be igniting the escape motor on the crew capsule. It’s a small engine located on the bottom of the capsule that can quickly propel the spacecraft up and away from the rocket booster in case there is an emergency during the flight. Blue Origin tested out this motor once before during a test launch in October 2016, fully expecting the motor to destroy the booster. When the motor ignites, it slams the booster with 70,000 pounds of thrust and forceful exhaust. And yet, the booster survived the test, managing to land intact on the Texas desert floor.

This time around, Blue Origin plans to ignite the motor at a higher altitude than last time, “pushing the rocket to its limits,” according to the company. It’s unclear how high the ignition will occur, though, and if the booster will survive the test again.

No passengers will be flying on this trip, except for Blue Origin’s test dummy, which the company has named Mannequin Skywalker. Mannequin will be riding inside the crew capsule along with numerous science experiments from NASA, commercial companies, and universities. Santa Fe company Solstar, which flew with Blue Origin during its last launch, is going to test out its Wi-Fi access again during the flight. NASA will have a payload designed to take measurements of the conditions inside the capsule throughout the trip, such as temperature, pressure, and acoustics. There’s even a bunch of payloads made by Blue Origin’s employees as part of the company’s own “Fly My Stuff” program.

Source: Blue Origin pushed its rocket ‘to its limits’ with high-altitude emergency abort test – The Verge

Isn’t it refreshing to see a private space programme that not only doesn’t crash and explode all the time (*cough* Elon) but actually works better than expected!

Robocall Firm Exposes Hundreds of Thousands of US Voters’ Records

Personal details and political affiliations exposed

The server that drew Diachenko’s attention this time contained 2,584 files, which the researcher later connected to RoboCent.

The type of user data exposed via RoboCent’s bucket included:

⬖  Full Name, suffix, prefix
⬖  Phone numbers (cell and landlines)
⬖  Address with house, street, city, state, zip, precinct
⬖  Political affiliation provided by state, or inferred based on voting history
⬖  Age and birth year
⬖  Gender
⬖  Jurisdiction breakdown based on district, zip code, precinct, county, state
⬖  Demographics based on ethnicity, language, education
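Taken together, those fields amount to a fairly complete voter profile. A sketch of one such record as a data class (the field names are my own guesses, not RoboCent’s actual schema):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class VoterRecord:
    """Hypothetical shape of one exposed record; field names are
    illustrative, not RoboCent's actual schema."""
    full_name: str
    prefix: Optional[str]
    suffix: Optional[str]
    cell_phone: Optional[str]
    landline: Optional[str]
    address: str                 # house, street, city, state, zip, precinct
    political_affiliation: str   # state-provided, or inferred from voting history
    affiliation_inferred: bool
    birth_year: int
    age: int
    gender: str
    district: str
    county: str
    ethnicity: Optional[str]
    language: Optional[str]
    education: Optional[str]
```

The point of spelling it out: a single record links a real name and phone number to inferred political views and demographics, which is exactly the combination that makes a leak like this damaging.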

Other data found on the servers, but not necessarily personal data, included audio files with prerecorded political messages used for robocalls.

According to RoboCent’s website, the company was not only providing robo-calling services for political surveys and inquiries but was also selling this data in raw format.

“Clients can now purchase voter data directly from their RoboCall provider,” the company’s website reads. “We provide voter files for every need, whether it be for a new RoboCall or simply to update records for door knocking.”

The company sells voter records for a price of 3¢/record. Leaving the core of its business available online on an AWS bucket without authentication is… self-defeating.

Source: Robocall Firm Exposes Hundreds of Thousands of US Voters’ Records

AI plus a chemistry robot finds all the reactions that will work

Lee Cronin, the researcher who organized the work, was kind enough to send along an image of the setup, which looks nothing like our typical conception of a robot (the researchers refer to it as “bespoke”). Most of its parts are dispersed through a fume hood, which ensures safe ventilation of any products that somehow escape the system. The upper right is a collection of tanks containing starting materials and pumps that send them into one of six reaction chambers, which can be operated in parallel.

The robot in question. MS = Mass Spectrometer; IR = Infrared Spectrometer. (Image: Lee Cronin)

The outcomes of these reactions can then be sent on for analysis. Pumps can feed samples into an IR spectrometer, a mass spectrometer, and a compact NMR machine—the latter being the only bit of equipment that didn’t fit in the fume hood. Collectively, these can create a fingerprint of the molecules that occupy a reaction chamber. By comparing this to the fingerprint of the starting materials, it’s possible to determine whether a chemical reaction took place and infer some things about its products.

All of that is a substitute for a chemist’s hands, but it doesn’t replace the brains that evaluate potential reactions. That’s where a machine-learning algorithm comes in. The system was given a set of 72 reactions with known products and used those to generate predictions of the outcomes of further reactions. From there, it started choosing reactions at random from the remaining list of options and determining whether they, too, produced products. By the time the algorithm had sampled 10 percent of the total possible reactions, it was able to predict the outcome of untested reactions with more than 80-percent accuracy.

And, since the earlier reactions it tested were chosen at random, the system wasn’t biased by human expectations of what reactions would or wouldn’t work.

Once it had built a model, the system was set up to evaluate which of the remaining possible reactions was most likely to produce products and prioritize testing those. The system could continue on until it reached a set number of reactions, stop after a certain number of tests no longer produced products, or simply go until it tested every possible reaction.
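The two-phase selection strategy described above can be sketched as a simple loop (the function names and toy model interface are placeholders; the real system drives pumps and spectrometers, not Python callbacks):

```python
import random

def active_exploration(candidates, run_experiment, model_fit, model_predict,
                       seed_size=72, random_frac=0.10):
    """Sketch of the explore-then-exploit strategy: label a seed set,
    test a random ~10% of the space (unbiased by chemist intuition),
    then greedily test whatever the model rates most likely to react."""
    tested, labels = [], []
    pool = list(candidates)
    random.shuffle(pool)

    # Phase 1: seed set plus random exploration
    n_random = seed_size + int(random_frac * len(candidates))
    for rxn in pool[:n_random]:
        tested.append(rxn)
        labels.append(run_experiment(rxn))   # True if products formed
    pool = pool[n_random:]

    # Phase 2: model-guided prioritisation of the remaining space
    while pool:
        model = model_fit(tested, labels)
        pool.sort(key=lambda r: model_predict(model, r), reverse=True)
        rxn = pool.pop(0)
        tested.append(rxn)
        labels.append(run_experiment(rxn))
    return tested, labels
```

The `while pool` condition is where the article’s stopping rules would plug in: a fixed reaction budget, a streak of unproductive tests, or (as here) simply exhausting every possible reaction.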

Neural networking

Not content with this degree of success, the research team went on to add a neural network that was fed data from the research literature on the yields of a class of reactions that link two hydrocarbon chains. After training on nearly 3,500 reactions, the system had an error of only 11 percent when predicting the yields of another 1,700 reactions from the literature.

This system was then integrated with the existing test setup and set loose on reactions that hadn’t been reported in the literature. This allowed the system to prioritize not only by whether the reaction was likely to make a product but also how much of the product would be produced by the reaction.
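As a rough structural analogue of that train-on-3,500/predict-1,700 split (entirely synthetic data, with a linear least-squares fit standing in for the neural network, so the numbers below say nothing about the real 11-percent result):

```python
import numpy as np

# Synthetic stand-in: map 8 made-up reaction descriptors to a
# percentage yield, fit on 3,500 "reactions", predict 1,700 more.
rng = np.random.default_rng(0)
true_w = rng.random(8)
X = rng.random((3500, 8))
y = X @ true_w * 100 / true_w.sum()     # synthetic yields in [0, 100]

w, *_ = np.linalg.lstsq(X, y, rcond=None)   # "training"

X_new = rng.random((1700, 8))               # unseen "literature" reactions
pred = X_new @ w
err = np.abs(pred - X_new @ true_w * 100 / true_w.sum()).mean()
print(f"mean absolute yield error: {err:.2f}%")
```

Because the synthetic relationship is exactly linear, the fit here is essentially perfect; the real task is much harder, which is why a neural network and thousands of literature examples were needed.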

All this, on its own, is pretty impressive. As the authors put it, “by realizing only 10 percent of the total number of reactions, we can predict the outcomes of the remaining 90 percent without needing to carry out the experiments.” But the system also helped them identify a few surprises—cases where the fingerprint of the reaction mix suggested that the product was something more than a simple combination of starting materials. These reactions were explored further by actual human chemists, who identified both ring-breaking and ring-forming reactions this way.

That last aspect really goes a long way toward explaining how this sort of capability will fit into future chemistry labs. People tend to think of robots as replacing humans. But in this context, the robots are simply taking some of the drudgery away from humans. No sane human would ever consider trying every possible combination of reactants to see what they’d do, and humans couldn’t perform the testing 24 hours a day without dangerous levels of caffeine anyway. The robots will also be good at identifying the rare cases where highly trained intuitions turn out to lead us astray about the utility of trying some reactions.

Source: AI plus a chemistry robot finds all the reactions that will work | Ars Technica

Dutch F-16 flies using fryer fat

The aircraft flew for two weeks on kerosene with 5% biofuel. Unfortunately there is not enough biofuel available to allow more than one aircraft to fly, or to fly for longer than two weeks. A chicken-and-egg dilemma.

An F-16 from Leeuwarden Air Base emitted less CO2 in flight over the past two weeks. The aircraft took to the skies on kerosene with 5% biofuel. The trial is now stopping, because at the moment there is insufficient biofuel available to fly with more than one aircraft or for longer than two weeks.

Source: F-16 vliegt prima op frituurvet | Nieuwsbericht | Defensie.nl

China’s latest quantum radar could help detect stealth planes, missiles

On June 22, China Electronics Technology Group Corporation (CETC), China’s foremost military electronics company, announced that its groundbreaking quantum radar has achieved new gains, which could allow it to detect stealth planes.

CETC claims its system is now capable of tracking high-altitude objects, likely by increasing the coherence time of its entangled photons. CETC envisions that its quantum radar will be used in the stratosphere to track objects in “the upper atmosphere and beyond” (including space).

While conventional radars just measure the reflection of radio waves, a quantum radar uses entangled photons, which result when a microwave signal beam is entangled with an optical idler beam. The microwave beam’s entangled photons bounce off of the target object and back to the quantum radar. The system compares them with the entangled photons of the optical idler beam. As a result, it can identify the position, radar cross section, speed, direction and other properties of detected objects. Importantly, attempts to spoof the quantum radar would be easily noticed since any attempt to alter or duplicate the entangled photons would be detected by the radar.


The quantum radar could ‘observe’ the composition of the target, since in the state of entanglement, the entangled photons remaining in the radar would show the same changes that the transmitted photons undergo when interacting with the target (known as quantum correlation).


This shift is important to the back-and-forth of detection that has long been the story of radars vs. stealth planes (which are a crucial feature of US air power). Because stealth aircraft are optimized to elude the radio waves used by conventional radars, they would be much more susceptible to detection through their interaction with entangled photons. Additionally, the quantum radar could ‘observe’ the composition of the target. Such a capability is important not just for detecting aircraft, but would also be very valuable in missile defense, where one could differentiate an actual nuclear warhead from inflatable decoys.


This concept art shows China’s 18,000-cubic-meter Yuanmeng airship 20 km above the ground (and, for some reason, off the U.S. Mid-Atlantic coast). One of the highest-flying airships, the Yuanmeng can provide wide-area surveillance and communications capability.


For its near-space platform, the quantum radar will be installed on either a high altitude blimp or a very high altitude UAV. In this role, quantum radar would be a strategic warning system against enemy ballistic missiles and detection system against high-speed aircraft like the SR-72. For space surveillance missions, it could provide high-fidelity details on classified systems such as spy satellites and space planes like the X-37B—possibly including payload details.

Source: China’s latest quantum radar could help detect stealth planes, missiles | Popular Science