Some of Africa’s oldest and biggest baobab trees have abruptly died, wholly or in part, in the past decade, according to researchers.
The trees, aged between 1,100 and 2,500 years and in some cases as wide as a bus is long, may have fallen victim to climate change, the team speculated.
“We report that nine of the 13 oldest … individuals have died, or at least their oldest parts/stems have collapsed and died, over the past 12 years,” they wrote in the scientific journal Nature Plants, describing “an event of an unprecedented magnitude”.
“It is definitely shocking and dramatic to experience during our lifetime the demise of so many trees with millennial ages,” said the study’s co-author Adrian Patrut of the Babeș-Bolyai University in Romania.
Among the nine were four of the largest African baobabs. While the cause of the die-off remains unclear, the researchers “suspect that the demise of monumental baobabs may be associated at least in part with significant modifications of climate conditions that affect southern Africa in particular”.
Further research is needed, said the team from Romania, South Africa and the United States, “to support or refute this supposition”.
Between 2005 and 2017, the researchers probed and dated “practically all known very large and potentially old” African baobabs – more than 60 individuals in all. Collating data on girth, height, wood volume and age, they noted the “unexpected and intriguing fact” that most of the very oldest and biggest trees died during the study period. All were in southern Africa – Zimbabwe, Namibia, South Africa, Botswana, and Zambia.
The baobab is the biggest and longest-living flowering tree, according to the research team. It is found naturally in Africa’s savannah region and outside the continent in tropical areas to which it was introduced. It is a strange-looking plant, with branches resembling gnarled roots reaching for the sky, giving it an upside-down look.
A new neural network has been trained to use wifi signals, which pass through walls but bounce off living tissue, to monitor the movements, breathing and heartbeats of people on the other side of those walls. The researchers say the technology's promise lies in areas like remote healthcare, particularly elder care, but it is hard to ignore its more dystopian applications.
[…]
“We actually are tracking 14 different joints on the body … the head, the neck, the shoulders, the elbows, the wrists, the hips, the knees, and the feet,” Katabi said. “So you can get the full stick-figure that is dynamically moving with the individuals that are obstructed from you — and that’s something new that was not possible before.”
An animation created by the RF-Pose software as it translates a wifi signal into a visual of human motion behind a wall.
The technology works a little bit like radar, but to teach their neural network how to interpret these granular bits of human activity, the team at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) had to create two separate A.I.s: a student and a teacher.
[…]
The team developed one A.I. program that monitored human movements with a camera on one side of a wall, and fed that information to their wifi X-ray A.I., called RF-Pose, as it struggled to make sense of the radio waves passing through the wall from the other side.
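The cross-modal teacher-student idea can be sketched in miniature. The toy below is an assumption for illustration only, not CSAIL's actual RF-Pose pipeline: a "teacher" that observes one modality (here, a directly visible position, standing in for the camera) produces labels that supervise a "student" that only sees another modality (a noisy derived signal, standing in for radio-frequency measurements).

```python
# Toy sketch of teacher-student (cross-modal) supervision. All names and the
# linear model are assumptions for illustration, not RF-Pose's architecture.
import random

random.seed(0)

def teacher_pose(scene):
    # Stand-in for the camera-based pose estimator: it "sees" the
    # ground-truth joint position directly.
    return scene["joint_x"]

def rf_features(scene):
    # Stand-in for radio-frequency measurements: a noisy linear
    # function of the true joint position.
    return 2.0 * scene["joint_x"] + random.gauss(0.0, 0.01)

# Student: a one-parameter linear model mapping RF features to joint
# position, trained by gradient descent on the teacher's labels.
w = 0.0
for step in range(2000):
    scene = {"joint_x": random.uniform(-1.0, 1.0)}
    x = rf_features(scene)
    label = teacher_pose(scene)      # supervision comes from the teacher
    pred = w * x
    grad = 2.0 * (pred - label) * x  # d/dw of squared error
    w -= 0.05 * grad

print(round(w, 2))  # converges toward 0.5, since features are ~2x position
```

The point of the design is that once training is done, the student no longer needs the teacher: at deployment the camera can be removed, and the RF-only model keeps working.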
In these scenarios, a deep-learning machine is given the rules of the game and then plays against itself. Crucially, it is rewarded at each step according to how it performs. This reward process is hugely important because it helps the machine to distinguish good play from bad play. In other words, it helps the machine learn.
But this doesn’t work in many real-world situations, because rewards are often rare or hard to determine.
For example, random turns of a Rubik’s Cube cannot easily be rewarded, since it is hard to judge whether the new configuration is any closer to a solution. And a sequence of random turns can go on for a long time without reaching a solution, so the end-state reward can only be offered rarely.
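The sparsity problem is easy to demonstrate on a toy puzzle (the three-face "cube" below is an invented stand-in, not a real Rubik's Cube): if reward is granted only in the exact solved state, random sequences of moves almost never produce any learning signal.

```python
# Illustration of reward sparsity on a toy puzzle: reward is 1 only in
# the exact solved state, with no partial credit for "closer" states.
import random

random.seed(1)

SOLVED = (0, 0, 0)  # toy "cube": three faces, each a rotation count mod 4

def apply_move(state, move):
    face, direction = move
    s = list(state)
    s[face] = (s[face] + direction) % 4
    return tuple(s)

def reward(state):
    return 1 if state == SOLVED else 0  # sparse: all-or-nothing

MOVES = [(f, d) for f in range(3) for d in (1, -1)]
hits = 0
for trial in range(1000):
    state = SOLVED
    for _ in range(20):                 # 20 random turns
        state = apply_move(state, random.choice(MOVES))
    hits += reward(state)

print(hits, "of 1000 random scrambles ended solved")
```

Even on this tiny puzzle, only a small fraction of random trajectories ever see a nonzero reward; on a real cube, with billions of billions of states, the fraction is effectively zero.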
In chess, by contrast, there is a relatively large search space but each move can be evaluated and rewarded accordingly. That just isn’t the case for the Rubik’s Cube.
Enter Stephen McAleer and colleagues from the University of California, Irvine. These guys have pioneered a new kind of deep-learning technique, called “autodidactic iteration,” that can teach itself to solve a Rubik’s Cube with no human assistance. The trick that McAleer and co have mastered is to find a way for the machine to create its own system of rewards.
Here’s how it works. Given an unsolved cube, the machine must decide whether a specific move is an improvement on the existing configuration. To do this, it must be able to evaluate the move.
Autodidactic iteration does this by starting with the finished cube and working backwards to find a configuration similar to the one the proposed move would produce. This process is not perfect, but deep learning helps the system figure out which moves are generally better than others.
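The backwards-from-solved idea can be sketched as follows. This is a simplified illustration under assumed names, not the paper's code: by scrambling outward from the solved state, every training configuration comes with a known scramble depth, which serves as a built-in (if noisy) proxy for its distance to the solution and thus as a learning target, no external reward required.

```python
# Simplified sketch of generating self-labelled training data by
# scrambling backwards from the solved state (names are assumptions).
import random

random.seed(2)

SOLVED = (0, 0, 0, 0)  # toy puzzle: four faces, rotation counts mod 4

def apply_move(state, move):
    face, direction = move
    s = list(state)
    s[face] = (s[face] + direction) % 4
    return tuple(s)

MOVES = [(f, d) for f in range(4) for d in (1, -1)]

def generate_training_pairs(num_scrambles, depth):
    """Return (state, k) pairs, where k is the scramble depth of state."""
    pairs = []
    for _ in range(num_scrambles):
        state = SOLVED
        for k in range(1, depth + 1):
            state = apply_move(state, random.choice(MOVES))
            pairs.append((state, k))  # depth k: an upper bound on distance
    return pairs

data = generate_training_pairs(100, 10)
print(len(data))  # 100 scrambles x 10 depths = 1000 labelled pairs
```

A network trained to predict these depth labels learns, in effect, which configurations are near the solution and which are far from it, which is exactly the evaluation signal the forward search needs.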
Having been trained, the network then uses a standard search tree to hunt for suggested moves for each configuration.
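The search step can be illustrated with a toy best-first search guided by a value estimate. This is a stand-in sketch on an invented three-face puzzle; the heuristic below is hand-written where the real system uses the trained network, and the paper's search is Monte Carlo tree search rather than the simple priority-queue search shown here.

```python
# Toy best-first search guided by an estimated moves-to-solve value
# (a sketch; the actual system pairs a trained network with tree search).
import heapq

SOLVED = (0, 0, 0)

def apply_move(state, move):
    face, direction = move
    s = list(state)
    s[face] = (s[face] + direction) % 4
    return tuple(s)

MOVES = [(f, d) for f in range(3) for d in (1, -1)]

def value(state):
    # Stand-in for the trained network: estimated moves to solve.
    # On this toy puzzle, min turns per face is the exact distance.
    return sum(min(x, 4 - x) for x in state)

def solve(start, max_expansions=10000):
    frontier = [(value(start), start, [])]
    seen = {start}
    while frontier:
        est, state, path = heapq.heappop(frontier)
        if state == SOLVED:
            return path
        if len(seen) > max_expansions:
            break
        for move in MOVES:
            nxt = apply_move(state, move)
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(
                    frontier, (len(path) + 1 + value(nxt), nxt, path + [move])
                )
    return None

path = solve((3, 1, 2))
print(len(path))  # → 4: one turn each for the first two faces, two for the third
```

The better the value estimate, the fewer configurations the search has to expand before reaching the solved state, which is why learning a good evaluation function is the heart of the method.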
The result is an algorithm that performs remarkably well. “Our algorithm is able to solve 100% of randomly scrambled cubes while achieving a median solve length of 30 moves—less than or equal to solvers that employ human domain knowledge,” say McAleer and co.
That’s interesting because it has implications for a variety of other tasks that deep learning has struggled with, including puzzles like Sokoban, games like Montezuma’s Revenge, and problems like prime number factorization.
Indeed, McAleer and co have other goals in their sights: “We are working on extending this method to find approximate solutions to other combinatorial optimization problems such as prediction of protein tertiary structure.”