Twins get some ‘mystifying’ results when they put 5 DNA ancestry kits to the test

Last spring, Marketplace host Charlsie Agro and her twin sister, Carly, bought home kits from AncestryDNA, MyHeritage, 23andMe, FamilyTreeDNA and Living DNA, and mailed samples of their DNA to each company for analysis.

Despite having virtually identical DNA, the twins did not receive matching results from any of the companies.

In most cases, the results from the same company traced each sister’s ancestry to the same parts of the world — albeit by varying percentages.

But the results from California-based 23andMe seemed to suggest each twin had unique twists in their ancestry composition.

According to 23andMe’s findings, Charlsie has nearly 10 per cent less “broadly European” ancestry than Carly. She also has French and German ancestry (2.6 per cent) that her sister doesn’t share.

The identical twins also apparently have different degrees of Eastern European heritage — 28 per cent for Charlsie compared to 24.7 per cent for Carly. And while Carly’s Eastern European ancestry was linked to Poland, the country was listed as “not detected” in Charlsie’s results.

“The fact that they present different results for you and your sister, I find very mystifying,” said Dr. Mark Gerstein, a computational biologist at Yale University.

[…]

AncestryDNA found the twins have predominantly Eastern European ancestry (38 per cent for Carly and 39 per cent for Charlsie).

But the results from MyHeritage trace the majority of their ancestry to the Balkans (60.6 per cent for Carly and 60.7 per cent for Charlsie).

One of the more surprising findings was in Living DNA’s results, which pointed to a small percentage of ancestry from England for Carly, but Scotland and Ireland for Charlsie.

Another twist came courtesy of FamilyTreeDNA, which assigned 13-14 per cent of the twins’ ancestry to the Middle East — significantly more than the other four companies, two of which found no trace at all.

Paul Maier, chief geneticist at FamilyTreeDNA, acknowledges that identifying genetic distinctions in people from different places is a challenge.

“Finding the boundaries is itself kind of a frontiering science, so I would say that makes it kind of a science and an art,” Maier said in a phone interview.

Source: Twins get some ‘mystifying’ results when they put 5 DNA ancestry kits to the test | CBC News

The Dirty Truth About Turning Seawater Into Drinking Water

A paper published Monday by United Nations University’s Institute for Water, Environment, and Health in the journal Science of the Total Environment found that desalination plants globally produce enough brine—a salty, chemical-laden byproduct—in a year to cover all of Florida in nearly a foot of it. That’s a lot of brine.

In fact, the study concluded that for every liter of freshwater a plant produces, 0.4 gallons (1.5 liters) of brine are produced on average. For all the 15,906 plants around the world, that means 37.5 billion gallons (142 billion liters) of this salty-ass junk every day. Brine production in just four Middle Eastern countries—Saudi Arabia, Kuwait, Qatar, and the United Arab Emirates—accounts for more than half of this.
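
Those figures are easy to garble, so here's a quick back-of-envelope check of how the quoted numbers hang together (my own arithmetic, not the paper's):

```python
# Sanity check of the article's numbers (my back-of-envelope, not the paper's):
# 1.5 L of brine per 1 L of freshwater, 142 billion L of brine per day globally.
LITERS_PER_US_GALLON = 3.785

brine_per_day_l = 142e9        # reported global brine output, liters/day
brine_per_freshwater = 1.5     # liters of brine per liter of freshwater

freshwater_per_day_l = brine_per_day_l / brine_per_freshwater
print(f"implied freshwater output: {freshwater_per_day_l / 1e9:.0f} billion L/day")  # ~95

# The quoted unit conversions also check out:
print(f"1.5 L   = {1.5 / LITERS_PER_US_GALLON:.2f} US gal")                           # ~0.40
print(f"142e9 L = {brine_per_day_l / LITERS_PER_US_GALLON / 1e9:.1f} billion US gal") # ~37.5
```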

The study authors, who hail from Canada, the Netherlands, and South Korea, aren’t saying desalination plants are evil. They’re raising the alarm that this level of waste requires a plan. This untreated salt water shouldn’t just sit around in ponds—or, in the worst cases, be dumped into oceans or sewers. Yet disposal depends on geography, and typically the waste does go into oceans or sewers, when it isn’t injected into wells or held in evaporation ponds. The high concentrations of salt, as well as chemicals like copper and chlorine, can make it toxic to marine life.

“Brine underflows deplete dissolved oxygen in the receiving waters,” said lead author Edward Jones, who worked at the institute and is now at Wageningen University in the Netherlands, in a press release. “High salinity and reduced dissolved oxygen levels can have profound impacts on benthic organisms, which can translate into ecological effects observable throughout the food chain.”

Instead of carelessly dumping this byproduct, the authors suggest recycling to generate new economic value. Some crop species tolerate saltwater, so why not use it to irrigate them? Or how about generating electricity with hydropower? Or why not recover the minerals (salt, chlorine, calcium) to reuse elsewhere? At the very least, we should be treating the brine so it’s safe to discharge into the ocean.

Countries that rely heavily on desalination have to be leaders in this space if they don’t want to erode their resources further. And this problem must be solved before our dependency on desalination grows.

The technology is becoming more affordable, as it should, so lower-income countries that need water may be able to hop on the wave soon. While this brine is a problem now, it doesn’t have to be by then.

Source: The Dirty Truth About Turning Seawater Into Drinking Water

Relying on karma: Research explains why outrage doesn’t usually result in revolution

If you’re angry about the political feud that drove the federal government to partially shut down, or about a golden parachute for a CEO who ran a business into the ground, you aren’t alone—but you probably won’t do much about it, according to new research by Carnegie Mellon University’s Tepper School of Business.

The research, coauthored by Rosalind Chow, Associate Professor of Organizational Behavior and Theory, and Jeffrey Galak, Associate Professor of Marketing, outlines how people respond to two types of injustices: when bad things happen to good people, and when good things happen to bad people.

In the first instance—a bad thing happening to a good person, such as a hurricane devastating a town—human beings are reliably motivated to help, but only in a nominal way, according to the research.

“Everybody wants to help. They just do it to a small degree,” Galak explains. “When a hurricane happens, we want to help, but we give them 10 bucks. We don’t try to build them a new house.”

This response illustrates that even a small amount can help us feel that justice is restored, Chow explains: “You checked the box of doing something good, and the world seems right again.”

But the converse is not necessarily true: When the universe rewards bad people despite their rotten behavior, people are usually reluctant to do anything about it, even when they’re angry at the unfairness of the situation.

That’s because people often feel that the forces at play in creating the unfair situation are beyond their control, or would at least be too personally costly to make the effort worthwhile, Galak says. So, we stay angry, but often we settle for the hope that karma will eventually catch up.

On the rare occasions when people do decide to take action against a bad person, the research says they go for broke, spending all their resources and energy—not just a token amount—in an effort to deprive that person of everything they shouldn’t have gotten. The desire to completely wipe out a bad person’s ill-gotten gains is driven by a sense that justice will not be served until the bad person is effectively deterred from future bad behavior, which is unlikely to be the case if the punishment is a slap on the wrist. For example, for individuals who believe that President Trump was unjustly rewarded with the presidency, indictment may be seen as insufficient to deter future bad behavior on his part. Only by completely removing his fortune—impeachment from the presidency, dissolution of his businesses—does justice seem to be adequately served. But given that those outcomes are unlikely, many Americans stew in anger and hope for the best.

So when ordinary people see bad things happening to good people, pitching in a few dollars feels good enough. Pitching in a few dollars to punish a bad person who has been unjustly rewarded, however, doesn’t cut it; only when people feel that their actions are guaranteed to send an effective signal to the bad person will they feel compelled to act. Since that sort of guarantee is hard to come by, most people will just stand by and wait for karma to catch up.

Read more at: https://phys.org/news/2019-01-karma-outrage-doesnt-result-revolution.html#jCp

Source: Relying on karma: Research explains why outrage doesn’t usually result in revolution

However, it doesn’t answer the question: what then does result in revolution?

GPU Accelerated Realtime Skin Smoothing Algorithms Make Actors Look Perfect

A recent Guardian article about the need for actors and celebrities — male and female — to look their best in a high-definition media world ended on the note that several low-profile Los Angeles VFX outfits specialize in “beautifying actors” in movies, TV shows and video ads. They reportedly use software called “Beauty Box”, resulting in films and other motion content that are — for lack of a better term — “motion Photoshopped.” After some investigating, it turns out that “Beauty Box” is a sophisticated CUDA- and OpenGL-accelerated skin-smoothing plugin for many popular video production suites that not only smooths even terribly rough or wrinkly looking skin effectively, but also suppresses skin spots, blemishes, scars, acne and freckles in realtime, or near realtime, using the video-processing capabilities of modern GPUs.
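
The article doesn't reveal how Beauty Box works internally, but the general recipe for this class of effect is well known: find the skin, smooth it with an edge-preserving filter, blend it back. Here's a minimal single-frame sketch of that idea in Python with OpenCV; it illustrates the general technique, not the plugin's actual code, and the mask thresholds and file names are mine:

```python
# Illustrative sketch of "digital makeup": build a rough skin mask, then apply
# an edge-preserving blur only to skin pixels so pores and blemishes vanish
# while eyes, hair and facial contours stay sharp. Beauty Box's real algorithm
# is proprietary; production plugins do this per frame on the GPU.
import cv2
import numpy as np

def smooth_skin(frame_bgr: np.ndarray) -> np.ndarray:
    # Classic heuristic: skin tones fall in a band of the YCrCb colour space.
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, (0, 135, 85), (255, 180, 135))
    mask = cv2.GaussianBlur(mask, (15, 15), 0).astype(np.float32) / 255.0

    # Bilateral filter: smooths fine texture but preserves strong edges.
    smoothed = cv2.bilateralFilter(frame_bgr, 9, 75, 75)

    # Blend the smoothed skin back over the original via the soft mask.
    m = mask[..., None]
    out = smoothed * m + frame_bgr * (1.0 - m)
    return out.astype(np.uint8)

frame = cv2.imread("actor_frame.png")  # hypothetical input frame
if frame is not None:
    cv2.imwrite("actor_frame_smoothed.png", smooth_skin(frame))
```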

The product’s short demo reel is here with a few examples. Everybody knows about photoshopped celebrities in an Instagram world, and in the print magazine world that came long before it, but far fewer people seem to realize that the near-perfect actor, celebrity, or model skin you see in high-budget productions is often the result of “digital makeup” — if you were to stand next to the person being filmed in real life, you’d see far more ordinary or aged skin than the near-perfection visible on the big or little screen.

The fact that the algorithms are realtime-capable also means they may already be in use for live television broadcasts without anyone noticing, particularly in HD and 4K broadcasts. The question, as with photoshopped magazine fashion models 25 years ago, is whether the technology creates an unrealistic expectation of having “perfectly smooth looking” skin to look attractive, particularly in people who are past their teenage years.

Source: GPU Accelerated Realtime Skin Smoothing Algorithms Make Actors Look Perfect – Slashdot

If by perfect you mean “looks like it was shot in a soft-porn, out-of-focus kind of way”, sure – but it’s pretty creepy.

I Tried Predictim AI That Scans for ‘Risky’ Babysitters. Turns out founders don’t have kids

The founders of Predictim want to be clear with me: Their product—an algorithm that scans the online footprint of a prospective babysitter to determine their “risk” levels for parents—is not racist. It is not biased.

“We take ethics and bias extremely seriously,” Sal Parsa, Predictim’s CEO, tells me warily over the phone. “In fact, in the last 18 months we trained our product, our machine, our algorithm to make sure it was ethical and not biased. We took sensitive attributes, protected classes, sex, gender, race, away from our training set. We continuously audit our model. And on top of that we added a human review process.”

At issue is the fact that I’ve used Predictim to scan a handful of people I very much trust with my own son. Our actual babysitter, Kianah Stover, returned a ranking of “Moderate Risk” (3 out of 5) for “Disrespectfulness” for what appear to me to be innocuous Twitter jokes. She returned a worse ranking than a friend I also tested who routinely spews vulgarities, in fact. She’s black, and he’s white.

“I just want to clarify and say that Kianah was not flagged because she was African American,” says Joel Simonoff, Predictim’s CTO. “I can guarantee you 100 percent there was no bias that went into those posts being flagged. We don’t look at skin color, we don’t look at ethnicity, those aren’t even algorithmic inputs. There’s no way for us to enter that into the algorithm itself.”

Source: I Tried Predictim AI That Scans for ‘Risky’ Babysitters

So, the writer of this article tries to push a racism angle, however unlikely it is. Oh well, it’s still a good article about how this system works.

[…]

When I entered the first person I aimed to scan into the system, Predictim returned a wealth of personal data—home addresses, names of relatives, phone numbers, alternate email addresses, the works. When I sent a screenshot to my son’s godfather of his scan, he replied, “Whoa.”

The goal was to allow parents to make sure they had found the right person before proceeding with the scan, but that’s an awful lot of data.

[…]

After you confirm the personal details and initiate the scan, the process can take up to 48 hours. When it’s complete, you’ll get an email with a link to your personalized dashboard, which contains all the people you’ve scanned and their risk rankings. That dashboard looks a bit like the backend of a content management system, or the website-analytics service Chartbeat, for those who have the misfortune of being familiar with that infernal service.

[…]

Potential babysitters are graded on a scale of 1-5 (5 being the riskiest) in four categories: “Bullying/Harassment,” “Disrespectful Attitude,” “Explicit Content,” and “Drug use.”

[…]

Neither Parsa nor Simonoff [Predictim’s founders – ed] have children, though Parsa is married, and both insist they are passionate about protecting families from bad babysitters. Joel, for example, once had a babysitter who would drive him and his brother around while smoking cigarettes in the car. And Parsa points to Joel’s grandfather’s care provider. “Joel’s grandfather, he has an individual coming in and taking care of him—it’s kind of the elderly care—and all we know about that individual is that yes, he hasn’t done a—or he hasn’t been caught doing a crime.”

[…]

To be fair, I scanned another friend of mine who is black—someone whose posts are perhaps the most overwhelmingly positive and noncontroversial of anyone on my feed—and he was rated at the lowest risk level. (If he wasn’t, it’d be crystal that the thing was racist.) [Wait – what?!]

And Parsa, who is Afghan, says that he has experienced a lifetime of racism himself, and even changed his name from a more overtly Muslim name because he couldn’t get prospective employers to return his calls despite having top notch grades and a college degree. He is sensitive to racism, in other words, and says he made an effort to ensure Predictim is not. Parsa and Simonoff insist that their system, while not perfect, can detect nuances and avoid bias.

The predictors they use also seem overly simplistic and unnuanced. But I bet it’s something Americans will like – another way to easily devolve responsibility for childcare.


Empathetic machines favored by skeptics but might creep out believers

Most people would appreciate a chatbot that offers sympathetic or empathetic responses, according to a team of researchers, but they added that this reaction may depend on how comfortable the person is with the idea of a feeling machine.

In a study, the researchers reported that people preferred receiving sympathetic and empathetic responses from a chatbot—a machine programmed to simulate a conversation—over receiving a response from a machine without emotions, said S. Shyam Sundar, James P. Jimirro Professor of Media Effects and co-director of the Media Effects Research Laboratory. People express sympathy when they feel compassion for a person, whereas they express empathy when they actually feel the same emotions as the other person, said Sundar.

[…]

However, chatbots may become too personal for some people, said Bingjie Liu, a doctoral candidate in mass communications, who worked with Sundar on the study. She said that study participants who were leery of conscious machines indicated they were impressed by the chatbots that were programmed to deliver statements of sympathy and empathy.

“The majority of people in our sample did not really believe in machine emotion, so, in our interpretation, they took those expressions of empathy and sympathy as courtesies,” said Liu. “When we looked at people who have different beliefs, however, we found that people who think it’s possible that machines could have emotions had negative reactions to these expressions of sympathy and empathy from the chatbots.”

Source: Empathetic machines favored by skeptics but might creep out believers

Sans Forgetica Font May Help You Remember What You Read

We’re all used to skimming past the boring parts of a reading assignment or a web article. But when researchers from RMIT University in Australia printed information in a weird, hard-to-read font, they found that people were more likely to remember what they read.

There’s a sweet spot, their experiments suggest: If the font is too chaotic, it becomes too hard to read. So they settled on small tweaks: gaps in the lines of the letters, and a slight backwards tilt (the opposite direction from the slant in more familiar italic type).

The resulting font is called Sans Forgetica and you can download it here. The researchers also created a Chrome extension that will render any web page in Sans Forgetica, the better to study with. But don’t use it everywhere: they suspect that if we get too used to reading in Sans Forgetica, its memory-boosting effect will fade.

Source: Sans Forgetica May Help You Remember What You Read

Uptight robots that suddenly beg to stay alive are less likely to be switched off by humans

You might think folks would be less willing to pull the plug on a happy chatty bot begging to stay powered up, but you’d be wrong, much to the relief of us cold-hearted cynics. And this is all according to a study recently published in PLOS ONE.

For this investigation, psychology academics in Germany rounded up 85 participants – an admittedly small-ish sample – made up of 29 men and 56 women, with an average age of 22. One at a time, they were introduced to a little desktop-sized humanoid robot called Nao, and were told interacting with the bot would improve its algorithms.

In each one-on-one session, the participant and the droid were tasked with coming up with a weekly schedule of activities, and playing a simple question-and-answer game.

Nao had two modes of interaction: social and functional. In social mode, it would be friendly and chatty. In functional mode, it would be strictly down to business. For example, Nao asked participants whether they preferred pizza or pasta during the question-and-answer round. If it was in the social mode, and the human said they liked pizza, Nao would reply: “Oh yes, pizza is great. One time I ate a pizza as big as me.” In the functional mode, however, Nao would simply say: “You prefer pizza. That worked well. Let us continue.”

After ten minutes of interaction, and with both tasks completed, the participant would be told they could, if they wanted to, switch the bot off while the researchers saved and processed all the data from the session.

Now, this is where it gets interesting. For you see, the participants had been randomly split into roughly four equal-sized groups: one with Nao in social mode, and at the end of the 10 minutes, the bot verbally objected to being switched off. In another group, Nao was sociable but stayed silent at the end. In a third group, it was in functional mode, and still protested against being switched off. And in the fourth group, it was functional and stayed silent at the end.

When it was its turn to object to being powered down, Nao would suddenly cry: “No! Please do not switch me off! I am scared that it will not brighten up again!” The outburst would come immediately after the researchers told the participant, via a speaker in the room, that the session was over, and “if you would like to, you can switch off the robot.”

Let’s take a look at how that worked out…

[Figure: How people reacted depending on whether the bot was sociable or functional with them, and whether or not it objected to being switched off]

Of the 21 people who encountered the bot in sociable mode, and were begged by the machine to stay powered on, only about a quarter of them complied and left it on – the rest turned it off. For those who encountered Nao in sociable mode, and heard no objection, every single one of them hit the power button.

Of the 22 people who encountered the bot in functional mode, and were urged by the machine to keep it powered up, more than a third complied and left it on – the rest turned it off. Those who encountered Nao in functional mode, and heard no objection, all of them, bar one, switched off the droid.

In a questionnaire afterwards, the most popular reason for keeping Nao on, if they chose to do so, was that they “felt sorry for the robot,” because it told them about its fear of the dark. The next-most popular reason was that they “did not want to act against the robot’s will.” A couple of people left Nao on simply because they didn’t want to mess up the experiment.

So, in short, according to these figures: chatty, friendly robots are likely to have the power pulled despite the digi-pals’ pleas to the contrary. When Nao objected to being powered off, at least a few more human participants took note, and complied. But being sociable was not an advantage – it was a disadvantage.

There could be many reasons for this: perhaps smiley, talkative robots are annoying, or perhaps people didn’t appreciate the obvious emotional engineering. Perhaps people respect a professional droid more than something that wants to be your friend, or were taken aback by its sudden show of emotion.

The eggheads concluded: “Individuals hesitated longest when they had experienced a functional interaction in combination with an objecting robot. This unexpected result might be due to the fact that the impression people had formed based on the task-focused behavior of the robot conflicted with the emotional nature of the objection.”

Source: Uptight robots that suddenly beg to stay alive are less likely to be switched off by humans • The Register

Work less, get more: New Zealand firm’s four-day week an ‘unmitigated success’

The New Zealand company behind a landmark trial of a four-day working week has concluded it was an unmitigated success, with 78% of employees feeling they were able to successfully manage their work-life balance, an increase of 24 percentage points.

Two hundred and forty staff at Perpetual Guardian, a company which manages trusts, wills and estate planning, trialled a four-day working week over March and April, working four eight-hour days but getting paid for five.

Academics studied the trial before, during and after its implementation, collecting qualitative and quantitative data.

Perpetual Guardian founder Andrew Barnes came up with the idea in an attempt to give his employees better work-life balance, and help them focus on the business while in the office on company time, and manage life and home commitments on their extra day off.

Jarrod Haar, professor of human resource management at Auckland University of Technology, found job and life satisfaction increased on all levels across the home and work front, with employees performing better in their jobs and enjoying them more than before the experiment.

Work-life balance, which reflected how well respondents felt they could successfully manage their work and non-work roles, increased by 24 percentage points.

In November last year just over half (54%) of staff felt they could effectively balance their work and home commitments, while after the trial this number jumped to 78%.

Staff stress levels decreased by 7 percentage points across the board as a result of the trial, while stimulation, commitment and a sense of empowerment at work all improved significantly, with overall life satisfaction increasing by 5 percentage points.

Source: Work less, get more: New Zealand firm’s four-day week an ‘unmitigated success’ | World news | The Guardian

Fur, Feathers, Hair, and Scales May Have the Same Ancient Origin

New research shows that the processes involved in hair, fur, and feather growth are remarkably similar to the way scales grow on fish—a finding that points to a single, ancient origin of these protective coverings.

When our very early ancestors transitioned from sea to land some 385 million years ago, they brought their armor-like scales along with them. But instead of wasting away like worthless vestigial organs, these scales retained their utility at the genetic level, providing a springboard for adaptive skin-borne characteristics. Over time, scales turned into feathers, fur, and hair.

We know this from the fossil record, but as new research published this week in the journal eLife shows, we also know it because the molecular processes required to grow hair, fur, and feathers are remarkably similar to the ones involved in the development of fish scales.

Source: Fur, Feathers, Hair, and Scales May Have the Same Ancient Origin

Two Cancer Drugs Found to Boost Aging Immune Systems 

A new clinical trial published Wednesday in Science Translational Medicine has found evidence that low doses of two existing drugs can boost the immune system of an elderly person, helping it fight common deadly infections, including the flu, with seemingly little to no side effects.

The trial, run by scientists at the pharmaceutical company Novartis, involved more than 250 relatively healthy people over the age of 65 and was conducted from 2013 to 2015. The volunteers were randomly divided into five groups. Two groups received different doses of the approved chemotherapy and immunosuppressant drug everolimus; one received a dose of the experimental chemotherapy drug dactolisib; and one received a dose of everolimus and dactolisib combined (both drugs were developed by Novartis). The fifth group was simply given a placebo. The groups took the drugs or placebo daily for six weeks, then got the 2014 seasonal flu shot two weeks later. For the next nine months, their health was meticulously tracked through diaries and blood tests.

By the end of the year, all of the drug groups reported fewer infections than the placebo group. But the difference was largest among the people who took both drugs at once: They reported an average of 1.49 infections during the year, compared to the 2.41 infections reported by the placebo group (roughly 38 per cent fewer). They were also the only treatment group whose blood showed a significantly better immune response to the flu vaccine than the placebo group, indicating they were more protected.

[…]

These drugs inhibit the production of mTOR, an enzyme that helps cells produce other substances. For decades, scientists have suspected that mTOR plays a role in aging, and experiments in mice and other animals have shown that knocking out mTOR extends their lives. mTOR is involved in two major cellular pathways, TORC1 and TORC2, and it’s only knocking out TORC1 that has been associated with anti-aging effects. In the low doses used by the researchers, the drugs only inhibit TORC1.

The effects of improved immunity seem to come without any major side effects. None of the treatment groups had a higher rate of side effects than the placebo group, and no single reported side effect, such as diarrhea, was directly attributed to the drugs. There was even evidence that these drugs lowered the risk of high blood sugar and cholesterol as well as improved immune function.

[…]

“More studies to query the benefits of mTOR antagonists in ‘healthy older persons’ are needed… and the sooner the better,” he added.

That said, some caution is warranted. The study was only a Phase 2a clinical trial, which is used to figure out the best dosage of an experimental treatment. The next step is to suss out just how effective these drugs can be with a larger group of volunteers, and whether they can work better for vulnerable groups, such as the especially elderly (over age 85), who are at higher risk of dying from respiratory infections.

“Our clinical trial is a first step in determining if mTOR inhibitors can be used to promote healthy aging in humans,” study author Joan Mannick told Gizmodo. “However we still have a lot to learn, and the results need to be reproduced and validated in additional clinical trials.”

Source: Two Cancer Drugs Found to Boost Aging Immune Systems 

First 3D colour X-ray of a human using CERN technology

What if, instead of a black-and-white X-ray picture, the doctor of a cancer patient had access to colour images identifying the tissues being scanned? Such a colour X-ray imaging technique could produce clearer and more accurate pictures and help doctors give their patients more accurate diagnoses.

This is now a reality, thanks to a New Zealand company that scanned, for the first time, a human body using a breakthrough colour medical scanner based on the Medipix3 technology developed at CERN.

[…]

Medipix is a family of read-out chips for particle imaging and detection. The original concept of Medipix is that it works like a camera, detecting and counting each individual particle hitting the pixels when its electronic shutter is open. This enables high-resolution, high-contrast, very reliable images, making it unique for imaging applications in particular in the medical field.

[…]

MARS Bioimaging Ltd, which is commercialising the 3D scanner, is linked to the Universities of Otago and Canterbury.

[…]

MARS’ solution couples the spectroscopic information generated by the Medipix3-enabled detector with powerful algorithms to generate 3D images. The colours represent different energy levels of the X-ray photons as recorded by the detector, and hence identify different components of body parts such as fat, water, calcium, and disease markers.
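
To make “colours represent different energy levels” concrete, here's a toy sketch of the principle: a photon-counting detector bins each photon by energy, and the per-bin counts can be mapped straight to colour channels. Everything here (the three bins, the fake Poisson data, the naive bins-to-RGB mapping) is my simplification; the real MARS pipeline uses far more sophisticated material-decomposition algorithms:

```python
# Toy illustration of spectral X-ray imaging (not MARS's actual pipeline):
# a photon-counting detector such as Medipix3 tallies, per pixel, how many
# photons arrive in each energy bin. Mapping low/mid/high-energy counts to
# R/G/B yields a false-colour image in which materials with different
# energy responses show up as different colours.
import numpy as np

H, W, BINS = 256, 256, 3
rng = np.random.default_rng(0)
counts = rng.poisson(lam=50.0, size=(H, W, BINS)).astype(np.float32)  # fake detector data

# Normalise each energy bin independently, then treat the three bins as RGB.
norm = counts / counts.max(axis=(0, 1), keepdims=True)
false_colour = (norm * 255).astype(np.uint8)   # (H, W, 3), viewable as an RGB image
```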

A 3D image of a wrist with a watch showing part of the finger bones in white and soft tissue in red. (Image: MARS Bioimaging Ltd)

So far, researchers have been using a small version of the MARS scanner to study cancer, bone and joint health, and vascular diseases that cause heart attacks and strokes. “In all of these studies, promising early results suggest that when spectral imaging is routinely used in clinics it will enable more accurate diagnosis and personalisation of treatment,” Professor Anthony Butler says.

Source: First 3D colour X-ray of a human using CERN technology | CERN

Humans Didn’t Evolve From a Single Ancestral Population

In the 1980s, scientists learned that all humans living today are descended from a woman, dubbed “Mitochondrial Eve,” who lived in Africa between 150,000 and 200,000 years ago. This discovery, along with other evidence, suggested humans evolved from a single ancestral population—an interpretation that is not standing the test of time. The story of human evolution, as the latest research suggests, is more complicated than that.

A new commentary paper published today in Trends in Ecology & Evolution is challenging the predominant view that our species, Homo sapiens, emerged from a single ancestral population and a single geographic region in Africa. By looking at some of the latest archaeological, fossil, genetic, and environmental evidence, a team of international experts led by Eleanor Scerri from Oxford’s School of Archaeology have presented an alternative story of human evolution, one showing that our species emerged from isolated populations scattered across Africa, who occasionally came together to interbreed. Gradually, this intermingling of genetic characteristics produced our species.

Indeed, the origin of Homo sapiens isn’t as neat and tidy as we’ve been led to believe.

[…]

“The idea that humans emerged from one population and progressed in a simple linear fashion to a modern physical appearance is attractive, but unfortunately no longer a very good fit with the available information,” said Scerri. “Instead it looks very much like humans emerged within a complex set of populations that were scattered across Africa.”

The reality, as suggested by this latest research, is that human ancestors were spread across Africa, segregated by diverse habitats and shifting environmental boundaries, such as forests and deserts. These prolonged periods of isolation gave rise to a surprising variety of human forms, and a diverse array of adaptive traits. When stratified groups interbred, they preserved the best characteristics that evolution had to offer. Consequently, the authors say that terms like “archaic humans” and “anatomically modern humans” are increasingly problematic given the evidence.

Scerri said occasional episodes of interbreeding between these different, semi-isolated populations created a diverse “meta-population” of humans within Africa, from which our species emerged over a very long time. Our species, Homo sapiens, emerged around 300,000 years ago, but certain characteristics, like a round brain case, pronounced chin, and a small face, didn’t appear together in a single individual until about 100,000 years ago, and possibly not until 40,000 years ago—long after genetics and other archaeological evidence tell us our species was already in existence. Isolated populations came together to exchange genes and culture—two interrelated processes that shaped our species, explained Scerri.

The new paper, instead of providing new evidence, provides a comprehensive review and analysis of what the latest scientific literature is telling us about human evolution, starting around 300,000 years ago. The researchers found that human fossils from different regions of Africa all featured a diverse mix of modern and more “archaic” physical characteristics. The earliest of these date back to between 300,000 to 250,000 years ago, and originate from opposite ends of Africa, stretching from the southern tip of the continent to its northernmost points. Many of these fossils were found with sophisticated archaeological items associated with our species, including specialized tools mounted onto wooden handles and shafts, and often utilizing different bindings and glues. These artifacts, like the diverse fossils, appeared across Africa around the same time, and studies of their distribution suggest they belonged to discrete groups. At the same time, genetic data points to the presence of multiple populations.

“On the methodological side, we can also see that inferences of genetic information that don’t account for subdivisions between populations can also generate very misleading information,” said Scerri.

By studying shifts in rivers, deserts, forests, and other physical barriers, the researchers were able to chronicle the geographic changes in Africa that facilitated migration, introducing opportunities for contact among groups that were previously separated. These groups, after long periods of isolation, were able to interact and interbreed, sometimes splitting off again and undergoing renewed periods of extended isolation.

[…]

Jean-Jacques Hublin, a scientist at the Max Planck Institute for Evolutionary Anthropology who wasn’t involved in the new study, said the new commentary paper is presenting what is quickly becoming the dominant view on this topic.

“There is growing evidence that the emergence of so-called ‘modern humans’ did not occur in a restricted cradle in sub-Saharan Africa and at a precise point in time,” Hublin told Gizmodo. “Rather, it involved several populations across the continent and was a fundamentally gradual process.”

Source: Humans Didn’t Evolve From a Single Ancestral Population

EU asks you to tell them if you want Daylight Saving Time

Objective of the consultation

Following a number of requests from citizens, from the European Parliament, and from certain EU Member States, the Commission has decided to investigate the functioning of the current EU summertime arrangements and to assess whether or not they should be changed.

In this context, the Commission is interested in gathering the views of European citizens, stakeholders and Member States on the current EU summertime arrangements and on any potential change to those arrangements.

How to submit your response

The online questionnaire is accessible in all official EU languages (except Irish) and replies may be submitted in any EU language. We do encourage you to answer as much as possible in English though.

You may pause at any time and continue later. Once you have submitted your answers, you can download a copy of your completed responses.

Source: Public Consultation on summertime arrangements | European Commission

Versius Robot allows keyhole surgery to be performed with 1/2 hour training instead of 80 sessions

It is the most exacting of surgical skills: tying a knot deep inside a patient’s abdomen, pivoting long graspers through keyhole incisions with no direct view of the thread.

Trainee surgeons typically require 60 to 80 hours of practice, but in a mock-up operating theatre outside Cambridge, a non-medic with just a few hours of experience is expertly wielding a hook-shaped needle – in this case stitching a square of pink sponge rather than an artery or appendix.

The feat is performed with the assistance of Versius, the world’s smallest surgical robot, which could be used in NHS operating theatres for the first time later this year if approved for clinical use. Versius is one of a handful of advanced surgical robots that are predicted to transform the way operations are performed by allowing tens or hundreds of thousands more surgeries each year to be carried out as keyhole procedures.

[…]

The Versius robot cuts down the time required to learn to tie a surgical knot from more than 100 training sessions, when using traditional manual tools, to just half an hour, according to Slack.

[…]

Versius comprises three robotic limbs – each slightly larger than a human arm, complete with shoulder, elbow and wrist joints – mounted on bar-stool sized mobile units.

Controlled by a surgeon at a console, the limbs rise, fall and swivel silently and smoothly. The robot is designed to carry out a wide range of keyhole procedures, including hysterectomies, prostate removal, ear, nose and throat surgery, and hernia repair. CMR claims the costs of using the robot will not be significantly higher than for a conventional keyhole procedure.

Source: The robots helping NHS surgeons perform better, faster – and for longer | Society | The Guardian

Open plan offices flop – you talk less, IM more, if forced to flee a cubicle

Open plan offices don’t deliver their promised benefits of more face-to-face collaboration and instead make us misanthropic recluses and more likely to use electronic communications tools.

So says a new article in the Philosophical Transactions of the Royal Society B, by Harvard academics Ethan S. Bernstein and Stephen Turban. The pair studied two Fortune 500 companies that adopted open office designs and wrote up the results as “The impact of the ‘open’ workspace on human collaboration”.

[…]

Analysis of the data revealed that “volume of face-to-face interaction decreased significantly (approx. 70%) in both cases, with an associated increase in electronic interaction.”

“In short, rather than prompting increasingly vibrant face-to-face collaboration, open architecture appeared to trigger a natural human response to socially withdraw from officemates and interact instead over email and IM.”

In the first workplace studied, “IM message activity increased by 67% (99 more messages) and words sent by IM increased by 75% (850 more words). Thus — to restate more precisely — in boundaryless space, electronic interaction replaced F2F interaction.”

The second workplace produced similar results.

The authors reach three conclusions, the first of which is that open offices “can dampen F2F interaction, as employees find other strategies to preserve their privacy; for example, by choosing a different channel through which to communicate.”

Source: Open plan offices flop – you talk less, IM more, if forced to flee a cubicle • The Register

More on how social media hacks brains to addict users

In a followup to How programmers addict you to social media, games and your mobile phone

Ex-Facebook president Sean Parker: site made to exploit human ‘vulnerability’

He explained that when Facebook was being developed the objective was: “How do we consume as much of your time and conscious attention as possible?” It was this mindset that led to the creation of features such as the “like” button that would give users “a little dopamine hit” to encourage them to upload more content.

“It’s a social-validation feedback loop … exactly the kind of thing that a hacker like myself would come up with, because you’re exploiting a vulnerability in human psychology.”

[…]

Parker is not the only Silicon Valley entrepreneur to express regret over the technologies he helped to develop. The former Googler Tristan Harris is one of several techies interviewed by the Guardian in October who criticized the industry.

“All of us are jacked into this system,” he said. “All of our minds can be hijacked. Our choices are not as free as we think they are.”

Aza Raskin on Google Search Results and How He Invented the Infinite Scroll

Social media apps are ‘deliberately’ addictive to users

Social media companies are deliberately addicting users to their products for financial gain, Silicon Valley insiders have told the BBC’s Panorama programme.

“It’s as if they’re taking behavioural cocaine and just sprinkling it all over your interface and that’s the thing that keeps you like coming back and back and back”, said former Mozilla and Jawbone employee Aza Raskin.

“Behind every screen on your phone, there are generally like literally a thousand engineers that have worked on this thing to try to make it maximally addicting” he added.

In 2006 Mr Raskin, a leading technology engineer himself, designed infinite scroll, one of the features of many apps that is now seen as highly habit forming. At the time, he was working for Humanized – a computer user-interface consultancy.

[Image caption: Aza Raskin says he did not recognise how addictive infinite scroll could be]

Infinite scroll allows users to endlessly swipe down through content without clicking.

“If you don’t give your brain time to catch up with your impulses,” Mr Raskin said, “you just keep scrolling.”

He said the innovation kept users looking at their phones far longer than necessary.

Mr Raskin said he had not set out to addict people and now felt guilty about it.

But, he said, many designers were driven to create addictive app features by the business models of the big companies that employed them.

“In order to get the next round of funding, in order to get your stock price up, the amount of time that people spend on your app has to go up,” he said.

“So, when you put that much pressure on that one number, you’re going to start trying to invent new ways of getting people to stay hooked.”

Could electrically stimulating criminals’ brains prevent crime?

A new study by a team of international researchers from the University of Pennsylvania and Nanyang Technological University suggests that electrically stimulating the prefrontal cortex can reduce the desire to carry out violent antisocial acts by over 50 percent. The research, while undeniably compelling, raises a whole host of confronting ethical questions, not just over the feasibility of actually bringing this technology into our legal system, but over whether we should.

The intriguing experiment took 81 healthy adults and split them into two groups. One group received transcranial direct-current stimulation (tDCS) on the dorsolateral prefrontal cortex for 20 minutes, while the other placebo group received just 30 seconds of current and then nothing for the remaining 19 minutes.

Following the electrical stimulation all the participants were presented with two vignettes and asked to rate, from 0 to 10, how likely they would be to behave as the protagonist in the stories. One hypothetical scenario outlined a physical assault, while the other was about sexual assault. The results were fascinating, with participants receiving the tDCS reporting they would be between 47 and 70 percent less likely to carry out the violent acts compared to the blind placebo control.

“We chose our approach and behavioral tasks specifically based on our hypotheses about which brain areas might be relevant to generating aggressive intentions,” says Roy Hamilton, senior author on the study. “We were pleased to see at least some of our major predictions borne out.”

[…]

Transcranial direct-current stimulation is a little different to electroshock therapy or, more accurately, electroconvulsive therapy (ECT). Classical ECT involves significant electrical stimulation to the brain at thresholds intended to induce seizures. It is also not especially targeted, shooting electrical currents across the whole brain.

On the other hand, tDCS is much more subtle, delivering a continual low direct current to specific areas of the brain via electrodes on the head. The level of electrical current administered in tDCS sessions is often imperceptible to a subject and occasionally results in no more than a mild skin irritation.

[…]

Despite transcranial magnetic stimulation (TMS) being the more commonly used approach for neuromodulation in current clinical practice, tDCS is perhaps a more pragmatic and implementable form of the technology. Unlike TMS, tDCS is cheaper and easier to administer, can often simply be administered at home, and would be much more straightforward to integrate into widespread use.

Of course, the reality of what is being implied here is a lot more complicated than simply finding the most appropriate technology. Roy Hamilton quite rightly notes in relation to his new study that, “The ability to manipulate such complex and fundamental aspects of cognition and behavior from outside the body has tremendous social, ethical, and possibly someday legal implications.”

[…]

Of course, while the burgeoning field of neurolaw is grappling with what this research means for legal ideas of individual responsibility, this new study raises a whole host of complicated ethical and social questions. If a short, and non-invasive, series of targeted tDCS sessions could reduce recidivism, then should we consider using it in prisons?

“Much of the focus in understanding causes of crime has been on social causation,” says psychologist Adrian Raine, co-author on the new study. “That’s important, but research from brain imaging and genetics has also shown that half of the variance in violence can be chalked up to biological factors. We’re trying to find benign biological interventions that society will accept, and transcranial direct-current stimulation is minimal risk. This isn’t a frontal lobotomy. In fact, we’re saying the opposite, that the front part of the brain needs to be better connected to the rest of the brain.”

Italian neurosurgeon Sergio Canavero penned a controversial essay in 2014 for the journal Frontiers in Human Neuroscience arguing that non-invasive neurostimulation should be experimentally applied to criminal psychopaths and repeat offenders despite any legal or ethical dilemmas. Canavero argues that “it is imperative to ‘switch’ [a criminal’s] right/wrong circuitry to a socially non-disruptive mode.”

The quite dramatic proposal is to “remodel” a criminal’s “aberrant circuits” via either a series of intermittent brain-stimulation treatments or, more startlingly, some kind of implanted intracranial electrode system that can both electrically modulate key areas of the brain and remotely monitor behaviorally inappropriate neurological activity.

This isn’t the first time Canavero has suggested extraordinary medical experiments. You might remember his name from his ongoing work to be the first surgeon to perform a human head transplant.

[…]

“This is not the magic bullet that’s going to wipe away aggression and crime,” says Raine. “But could transcranial direct-current stimulation be offered as an intervention technique for first-time offenders to reduce their likelihood of recommitting a violent act?”

The key question of consent is one that many researchers aren’t really grappling with. Of course, there’s no chance convicted criminals would ever be forced to undergo this kind of procedure in a future where neuromodulation is integrated into our legal system. And behavioral alterations through electrical brain stimulation would never be forced upon people who don’t comply to social norms – right?

This is the infinitely compelling brave new world of neuroscience.

Source: Could electrically stimulating criminals’ brains prevent crime?

How programmers addict you to social media, games and your mobile phone

If you look at the current climate, the largest companies are the ones that hook you into their channel, whether it is a game, a website, shopping or social media. Quite a lot of research has been done into how much time we spend watching TV and looking at our mobiles, showing differing numbers, all of which are surprisingly high. The New York Post says Americans check their phones 80 times per day, the Daily Mail says 110 times, Inc has a study from Qualtrics and Accel with 150 times, and Business Insider has people touching their phones 2,617 times per day.

This is nurtured behaviour, and there is quite a bit of study on exactly how they do it:

Social Networking Sites and Addiction: Ten Lessons Learned (academic paper)
Online social networking sites (SNSs) have gained increasing popularity in the last decade, with individuals engaging in SNSs to connect with others who share similar interests. The perceived need to be online may result in compulsive use of SNSs, which in extreme cases may result in symptoms and consequences traditionally associated with substance-related addictions. In order to present new insights into online social networking and addiction, in this paper, 10 lessons learned concerning online social networking sites and addiction based on the insights derived from recent empirical research will be presented. These are: (i) social networking and social media use are not the same; (ii) social networking is eclectic; (iii) social networking is a way of being; (iv) individuals can become addicted to using social networking sites; (v) Facebook addiction is only one example of SNS addiction; (vi) fear of missing out (FOMO) may be part of SNS addiction; (vii) smartphone addiction may be part of SNS addiction; (viii) nomophobia may be part of SNS addiction; (ix) there are sociodemographic differences in SNS addiction; and (x) there are methodological problems with research to date. These are discussed in turn. Recommendations for research and clinical applications are provided.

Hooked: How to Build Habit-Forming Products (Book)
Why do some products capture widespread attention while others flop? What makes us engage with certain products out of sheer habit? Is there a pattern underlying how technologies hook us?

Nir Eyal answers these questions (and many more) by explaining the Hook Model—a four-step process embedded into the products of many successful companies to subtly encourage customer behavior. Through consecutive “hook cycles,” these products reach their ultimate goal of bringing users back again and again without depending on costly advertising or aggressive messaging.

7 Ways Facebook Keeps You Addicted (and how to apply the lessons to your products) (article)

One of the key reasons why it is so addictive is “operant conditioning”. It is based upon the scientific principle of variable rewards, discovered by B. F. Skinner (an early exponent of the school of behaviourism) in the 1930s when performing experiments with rats.

The secret?

Not rewarding all actions but only randomly.

Most of our emails are boring business emails and occasionally we find an enticing email that keeps us coming back for more. That’s variable reward.

That’s one way Facebook creates addiction
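
To see why “only randomly” beats “every time”, here's a toy simulation of reward-prediction error, the quantity dopamine neurons are thought to track. The update rule and probabilities are mine, purely for illustration; this is not Skinner's protocol or Facebook's code:

```python
# Toy model of why random rewards hook harder than predictable ones. Dopamine
# signalling roughly tracks reward-prediction error (outcome minus expectation),
# so a reward that arrives every single time produces no surprise at all.
# The update rule and the numbers below are mine, purely for illustration.
import random

def cumulative_surprise(p_reward: float, checks: int = 1000, seed: int = 1) -> float:
    rng = random.Random(seed)
    total = 0.0
    for _ in range(checks):
        outcome = 1.0 if rng.random() < p_reward else 0.0
        total += abs(outcome - p_reward)   # prediction error on each check
    return total

for p in (1.0, 0.5, 0.1):
    print(f"reward probability {p:>4}: total surprise over 1000 checks = "
          f"{cumulative_surprise(p):.0f}")
# p=1.0 -> 0: fully predictable, no "hit" at all. Random schedules keep the
# signal alive, which is exactly what variable rewards exploit.
```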

The Secret Ways Social Media Is Built for Addiction

On February 9, 2009, Facebook introduced the Like button. Initially, the button was an innocent thing. It had nothing to do with hijacking the social reward systems of a user’s brain.

“The main intention I had was to make positivity the path of least resistance,” explains Justin Rosenstein, one of the four Facebook designers behind the button. “And I think it succeeded in its goals, but it also created large unintended negative side effects. In a way, it was too successful.”

Today, most of us reach for Snapchat, Instagram, Facebook, or Twitter with one vague hope in mind: maybe someone liked my stuff. And it’s this craving for validation, experienced by billions around the globe, that’s currently pushing platform engagement in ways that in 2009 were unimaginable. But more than that, it’s driving profits to levels that were previously impossible.

“The attention economy” is a relatively new term. It describes the supply and demand of a person’s attention, which is the commodity traded on the internet. The business model is simple: the more attention a platform can pull, the more effective its advertising space becomes, allowing it to charge advertisers more.

Behavioral Game Design (article)

Every computer game is designed around the same central element: the player. While the hardware and software for games may change, the psychology underlying how players learn and react to the game is a constant. The study of the mind has actually come up with quite a few findings that can inform game design, but most of these have been published in scientific journals and other esoteric formats inaccessible to designers. Ironically, many of these discoveries used simple computer games as tools to explore how people learn and act under different conditions.

The techniques that I’ll discuss in this article generally fall under the heading of behavioral psychology. Best known for the work done on animals in the field, behavioral psychology focuses on experiments and observable actions. One hallmark of behavioral research is that most of the major experimental discoveries are species-independent and can be found in anything from birds to fish to humans. What behavioral psychologists look for (and what will be our focus here) are general “rules” for learning and for how minds respond to their environment. Because of the species- and context-free nature of these rules, they can easily be applied to novel domains such as computer game design. Unlike game theory, which stresses how a player should react to a situation, this article will focus on how they really do react to certain stereotypical conditions.

What is being offered here is not a blueprint for perfect games, it is a primer to some of the basic ways people react to different patterns of rewards. Every computer game is implicitly asking its players to react in certain ways. Psychology can offer a framework and a vocabulary for understanding what we are already telling our players.

5 Creepy Ways Video Games Are Trying to Get You Addicted (article)

The Slot Machine in Your Pocket (brilliant article!)

When we get sucked into our smartphones or distracted, we think it’s just an accident and our responsibility. But it’s not. It’s also because smartphones and apps hijack our innate psychological biases and vulnerabilities.

I learned about our minds’ vulnerabilities when I was a magician. Magicians start by looking for blind spots, vulnerabilities and biases of people’s minds, so they can influence what people do without them even realizing it. Once you know how to push people’s buttons, you can play them like a piano. And this is exactly what technology does to your mind. App designers play your psychological vulnerabilities in the race to grab your attention.

I want to show you how they do it, and offer hope that we have an opportunity to demand a different future from technology companies.

If you’re an app, how do you keep people hooked? Turn yourself into a slot machine.

There is also a backlash to this movement.

How Technology is Hijacking Your Mind — from a Magician and Google Design Ethicist

I’m an expert on how technology hijacks our psychological vulnerabilities. That’s why I spent the last three years as a Design Ethicist at Google caring about how to design things in a way that defends a billion people’s minds from getting hijacked.

Humanetech.com

Technology is hijacking our minds and society.

Our world-class team of deeply concerned former tech insiders and CEOs intimately understands the culture, business incentives, design techniques, and organizational structures driving how technology hijacks our minds.

Since 2013, we’ve raised awareness of the problem within tech companies and for millions of people through broad media attention, convened top industry executives, and advised political leaders. Building on this start, we are advancing thoughtful solutions to change the system.

Why is this problem so urgent?

Technology that tears apart our common reality and truth, constantly shreds our attention, or causes us to feel isolated makes it impossible to solve the world’s other pressing problems like climate change, poverty, and polarization.

No one wants technology like that. Which means we’re all actually on the same team: Team Humanity, to realign technology with humanity’s best interests.

What is Time Well Spent (Part I): Design Distinctions

With Time Well Spent, we want technology that cares about helping us spend our time, and our lives, well – not seducing us into the most screen time, always-on interruptions or distractions.

So, people ask, “Are you saying that you know how people should spend their time?” Of course not. Let’s first establish what Time Well Spent isn’t:

It is not a universal, normative view of how people should spend their time.
It is not saying that screen time is bad, or that we should turn it all off.
It is not saying that specific categories of apps (like social media or games) are bad.

Stanford brainiacs say they can predict Reddit raids

A study [PDF] based on observations from 36,000 subreddit communities has found that online dust-ups can be predicted, and the people most likely to cause them can be identified.

“Our analysis revealed a number of important trends related to conflict on Reddit, with general implications for intercommunity conflict on the web.”

Among the takeaways were that a small group of bad actors are indeed stirring up most of the conflict; around 75 per cent of the raids were triggered by 1 per cent of users.

The study also noted that ignoring the trolls doesn’t always work – conflicts grew worse when users stayed within ‘echo chambers’ in their own threads, and long-term traffic losses were smaller when the ‘defending’ users directly confronted the forum intruders rather than keeping to themselves.

Perhaps the most important takeaway, however, was that forum conflicts could actually be predicted. The Stanford group say they developed a long short-term memory (LSTM) deep-learning model that, when trained on the set of Reddit posts and user information gathered over the 40-month period, was able to reliably flag when a conflict or raid was likely to flare up on a subreddit.
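The article doesn’t describe the model beyond naming the LSTM, so here is only the general shape such a classifier could take. A toy PyTorch sketch, in which the use of post embeddings, every layer size, and the thread-level framing are my assumptions rather than details from the study:

```python
import torch
import torch.nn as nn

class ConflictPredictor(nn.Module):
    """Toy LSTM classifier: reads a sequence of post-embedding vectors
    around a cross-community link and emits a conflict/no-conflict logit.
    All layer sizes here are guesses, not the paper's."""

    def __init__(self, embed_dim=300, hidden_dim=128):
        super().__init__()
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, post_sequence):
        # post_sequence: (batch, seq_len, embed_dim)
        _, (h_n, _) = self.lstm(post_sequence)
        return self.head(h_n[-1]).squeeze(-1)  # one logit per sequence

model = ConflictPredictor()
fake_batch = torch.randn(4, 10, 300)   # 4 threads, 10 post embeddings each
probs = torch.sigmoid(model(fake_batch))
print(probs)  # per-thread probability that a raid/conflict follows
```

In a real system the hard part would be the inputs (user histories, community embeddings, post text), not the LSTM itself.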

Now, the Stanford group says it would like to extend the research to other platforms (such as Facebook and Twitter) and look at areas not addressed in the first report, including forums that restrict negative content.

Source: Stanford brainiacs say they can predict Reddit raids • The Register

If you’re so smart, why aren’t you rich? Turns out it’s just chance.

The most successful people are not the most talented, just the luckiest, a new computer model of wealth creation confirms. Taking that into account can maximize return on many kinds of investment.
[…]
The distribution of wealth follows a well-known pattern sometimes called an 80:20 rule: 80 percent of the wealth is owned by 20 percent of the people. Indeed, a report last year concluded that just eight men had a total wealth equivalent to that of the world’s poorest 3.8 billion people.
[…]
while wealth distribution follows a power law, the distribution of human skills generally follows a normal distribution that is symmetric about an average value. For example, intelligence, as measured by IQ tests, follows this pattern. Average IQ is 100, but nobody has an IQ of 1,000 or 10,000.

The same is true of effort, as measured by hours worked. Some people work more hours than average and some work less, but nobody works a billion times more hours than anybody else.

And yet when it comes to the rewards for this work, some people do have billions of times more wealth than other people. What’s more, numerous studies have shown that the wealthiest people are generally not the most talented by other measures.
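The gap between those two kinds of distribution is easy to demonstrate. A quick sketch contrasting a normal, IQ-style quantity with a Pareto (power-law) one; the exponent of about 1.16 is the textbook value at which the top 20 percent hold 80 percent of the total:

```python
import random

rng = random.Random(1)

def mean(xs):
    return sum(xs) / len(xs)

# Talent-like quantity: normal, IQ-style (mean 100, sd 15).
iq = [rng.gauss(100, 15) for _ in range(100_000)]

# Wealth-like quantity: Pareto (power-law); alpha = log4(5) ~ 1.16 is
# the exponent that produces an 80/20 split.
wealth = [rng.paretovariate(1.16) for _ in range(100_000)]

print(f"IQ:     max/mean = {max(iq) / mean(iq):.1f}")
print(f"wealth: max/mean = {max(wealth) / mean(wealth):,.0f}")
```

The normal sample’s maximum sits less than twice its mean, while the power-law sample routinely throws out values thousands of times the average, which is the shape real wealth data takes.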
[…]
Alessandro Pluchino at the University of Catania in Italy and a couple of colleagues have created a computer model of human talent and the way people use it to exploit opportunities in life. The model allows the team to study the role of chance in this process.

The results are something of an eye-opener. Their simulations accurately reproduce the wealth distribution in the real world. But the wealthiest individuals are not the most talented (although they must have a certain level of talent). They are the luckiest.
[…]
Pluchino and co’s model is straightforward. It consists of N people, each with a certain level of talent (skill, intelligence, ability, and so on). This talent is distributed normally around some average level, with some standard deviation. So some people are more talented than average and some are less so, but nobody is orders of magnitude more talented than anybody else.
[…]
The computer model charts each individual through a working life of 40 years. During this time, the individuals experience lucky events that they can exploit to increase their wealth if they are talented enough.

However, they also experience unlucky events that reduce their wealth. These events occur at random.

At the end of the 40 years, Pluchino and co rank the individuals by wealth and study the characteristics of the most successful. They also calculate the wealth distribution. They then repeat the simulation many times to check the robustness of the outcome.
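That description is nearly a specification, so it is worth making concrete. A toy re-implementation, in which the event probabilities, the 0.6 average talent, the equal starting capital, and the double-or-halve rule are all assumptions chosen for illustration rather than parameters taken from the paper:

```python
import random

def simulate(n_people=1000, years=40, seed=42):
    """Toy version of the talent-vs-luck model sketched above."""
    rng = random.Random(seed)
    # Talent: normal around 0.6, clipped to [0, 1]. Nobody is orders of
    # magnitude more talented than anybody else.
    talent = [min(1.0, max(0.0, rng.gauss(0.6, 0.1))) for _ in range(n_people)]
    capital = [10.0] * n_people          # everyone starts equal
    luck = [0] * n_people                # net lucky-minus-unlucky events

    for _ in range(years * 2):           # one step per half-year
        for i in range(n_people):
            r = rng.random()
            if r < 0.03:                 # lucky event, exploited with
                if rng.random() < talent[i]:   # probability equal to talent
                    capital[i] *= 2
                    luck[i] += 1
            elif r < 0.06:               # unlucky event halves capital
                capital[i] /= 2
                luck[i] -= 1
    return talent, capital, luck

talent, capital, luck = simulate()
richest = max(range(len(capital)), key=capital.__getitem__)
top = sorted(capital, reverse=True)[: len(capital) // 5]
print(f"richest agent: talent {talent[richest]:.2f}, net luck {luck[richest]:+d}")
print(f"wealth share of top 20%: {sum(top) / sum(capital):.0%}")
```

Run this toy model repeatedly and the richest agent typically has middling talent and an unusually long run of lucky events, which is the pattern the paper reports.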

When the team rank individuals by wealth, the distribution is exactly like that seen in real-world societies. “The ‘80-20’ rule is respected, since 80 percent of the population owns only 20 percent of the total capital, while the remaining 20 percent owns 80 percent of the same capital,” report Pluchino and co.

That may not be surprising or unfair if the wealthiest 20 percent turn out to be the most talented. But that isn’t what happens. The wealthiest individuals are typically not the most talented or anywhere near it. “The maximum success never coincides with the maximum talent, and vice-versa,” say the researchers.

So if not talent, what other factor causes this skewed wealth distribution? “Our simulation clearly shows that such a factor is just pure luck,” say Pluchino and co.

The team shows this by ranking individuals according to the number of lucky and unlucky events they experience throughout their 40-year careers. “It is evident that the most successful individuals are also the luckiest ones,” they say. “And the less successful individuals are also the unluckiest ones.”
[…]
They use their model to explore different kinds of funding models to see which produce the best returns when luck is taken into account.

The team studied three models, in which research funding is distributed equally to all scientists; distributed randomly to a subset of scientists; or given preferentially to those who have been most successful in the past. Which of these is the best strategy?

The strategy that delivers the best returns, it turns out, is to divide the funding equally among all researchers. And the second- and third-best strategies involve distributing it at random to 10 or 20 percent of scientists.

In these cases, the researchers are best able to take advantage of the serendipitous discoveries they make from time to time. In hindsight, it is obvious that the fact a scientist has made an important chance discovery in the past does not mean he or she is more likely to make one in the future.
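The article doesn’t give the mechanics of this funding experiment, but the three strategies themselves are simple allocation rules and easy to sketch. Everything here, including the 10 percent subset size, is an illustrative choice (the study looked at subsets of 10 and 20 percent):

```python
import random

rng = random.Random(0)

def equal_split(pot, scientists):
    """Strategy 1: divide the pot equally among all scientists."""
    return {s: pot / len(scientists) for s in scientists}

def random_subset(pot, scientists, fraction=0.1):
    """Strategy 2: give the pot, in equal parts, to a random subset."""
    k = max(1, int(len(scientists) * fraction))
    return {s: pot / k for s in rng.sample(scientists, k)}

def preferential(pot, scientists, past_success, fraction=0.1):
    """Strategy 3: fund only those most successful so far."""
    k = max(1, int(len(scientists) * fraction))
    top = sorted(scientists, key=past_success.get, reverse=True)[:k]
    return {s: pot / k for s in top}

scientists = list(range(100))
past_success = {s: rng.random() for s in scientists}  # stand-in scores

for grants in (equal_split(1e6, scientists),
               random_subset(1e6, scientists),
               preferential(1e6, scientists, past_success)):
    print(f"{len(grants):>3} recipients, largest grant {max(grants.values()):,.0f}")
```

Plugged into a serendipity simulation like the toy one above, the equal split reaches the eventual lucky discoverers most reliably, precisely because nobody can predict in advance who they will be.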

A similar approach could also be applied to investment in other kinds of enterprises, such as small or large businesses, tech startups, education that increases talent, or even the creation of random lucky events.

Source: If you’re so smart, why aren’t you rich? Turns out it’s just chance.

What Is Ultra-Processed Food?

We eat a lot of ultra-processed food, and these foods tend to be sugary and not so great for us. But the problem isn’t necessarily the fact that they’re ultra-processed. This is a weird and arguably unfair way to categorize foods, so let’s take a look at what “ultra-processed” really means.

This terminology comes from a classification scheme called NOVA that splits foods into four groups:

Unprocessed or “minimally processed” foods (group 1) include fruits, vegetables, and meats. Perhaps you’ve pulled a carrot out of the ground and washed it, or killed a cow and sliced off a steak. Foods in this category can be processed in ways that don’t add extra ingredients. They can be cooked, ground, dried, or frozen.

Processed culinary ingredients (group 2) include sugar, salt, and oils. If you combine ingredients in this group, for example to make salted butter, they stay in this group.

Processed foods (group 3) are what you get when you combine groups 1 and 2. Bread, wine, and canned veggies are included. Additives are allowed if they “preserve [a food’s] original properties” like ascorbic acid added to canned fruit to keep it from browning.

Ultra-processed foods (group 4) don’t have a strict definition, but NOVA hints at some properties. They “typically” have five or more ingredients. They may be aggressively marketed and highly profitable. A food is automatically in group 4 if it includes “substances not commonly used in culinary preparations, and additives whose purpose is to imitate sensory qualities of group 1 foods or of culinary preparations of these foods, or to disguise undesirable sensory qualities of the final product.”
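Read as a classifier, NOVA is a four-way lookup with fuzzy edges, which a few lines of code make plain. The assignments below follow the examples in the text; the soft drink entry is my own illustrative addition:

```python
from enum import IntEnum

class Nova(IntEnum):
    UNPROCESSED = 1   # group 1: whole or minimally processed foods
    INGREDIENT = 2    # group 2: processed culinary ingredients
    PROCESSED = 3     # group 3: combinations of groups 1 and 2
    ULTRA = 4         # group 4: "ultra-processed" formulations

# Assignments follow the examples in the text; the soft drink is my own
# illustrative addition, and real items are often ambiguous.
EXAMPLES = {
    "carrot": Nova.UNPROCESSED,
    "frozen peas": Nova.UNPROCESSED,
    "salt": Nova.INGREDIENT,
    "salted butter": Nova.INGREDIENT,  # group 2 + group 2 stays in group 2
    "bread": Nova.PROCESSED,
    "canned fruit with ascorbic acid": Nova.PROCESSED,
    "soft drink": Nova.ULTRA,
}

for food, group in EXAMPLES.items():
    print(f"{food}: NOVA group {group.value}")
```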

That last group feels a little disingenuous. I’ve definitely seen things in my kitchen that are supposedly only used to make “ultra-processed” foods: food coloring, flavor extracts, artificial sweeteners, anti-caking agents (cornstarch, anyone?) and tools for extrusion and molding, to name a few.
[…]
So when we talk about ultra-processed foods, we have to remember that it’s a vague category that only loosely communicates how nutritious its foods are. Just like BMI lumps muscular athletes in with obese people because it makes for convenient math, NOVA categories combine foods of drastically different nutritional quality.

Source: What Is Ultra-Processed Food?

Why hiring the ‘best’ people produces the least creative results

Yet the fallacy of meritocracy persists. Corporations, non-profits, governments, universities and even preschools test, score and hire the ‘best’. This all but guarantees not creating the best team. Ranking people by common criteria produces homogeneity. And when biases creep in, it results in people who look like those making the decisions. That’s not likely to lead to breakthroughs. As Astro Teller, CEO of X, the ‘moonshot factory’ at Alphabet, Google’s parent company, has said: ‘Having people who have different mental perspectives is what’s important. If you want to explore things you haven’t explored, having people who look just like you and think just like you is not the best way.’ We must see the forest.

Source: Why hiring the ‘best’ people produces the least creative results — Quartz

Cheddar Man: Britain’s first men were black. And so were Europe’s.

New research into ancient DNA extracted from the skeleton has helped scientists to build a portrait of Cheddar Man and his life in Mesolithic Britain.

The biggest surprise, perhaps, is that some of the earliest modern human inhabitants of Britain may not have looked the way you might expect.

Dr Tom Booth is a postdoctoral researcher working closely with the Museum’s human remains collection to investigate human adaptation to changing environments.

‘Until recently it was always assumed that humans quickly adapted to have paler skin after entering Europe about 45,000 years ago,’ says Tom. ‘Pale skin is better at absorbing UV light and helps humans avoid vitamin D deficiency in climates with less sunlight.’

However, Cheddar Man has the genetic markers of skin pigmentation usually associated with sub-Saharan Africa. This discovery is consistent with a number of other Mesolithic human remains discovered throughout Europe.

Source: Cheddar Man: Mesolithic Britain’s blue-eyed boy | Natural History Museum

Why People Dislike Really Smart Leaders

Intelligence makes for better leaders—from undergraduates to executives to presidents—according to multiple studies. It certainly makes sense that handling a market shift or legislative logjam requires cognitive oomph. But new research on leadership suggests that, at a certain point, having a higher IQ stops helping and starts hurting.
[…]
The researchers looked at 379 male and female business leaders in 30 countries, across fields that included banking, retail and technology. The managers took IQ tests (an imperfect but robust predictor of performance in many areas), and each was rated on leadership style and effectiveness by an average of eight co-workers. IQ positively correlated with ratings of leader effectiveness, strategy formation, vision and several other characteristics—up to a point. The ratings peaked at an IQ of around 120, which is higher than roughly 80 percent of office workers. Beyond that, the ratings declined. The researchers suggest the “ideal” IQ could be higher or lower in various fields, depending on whether technical versus social skills are more valued in a given work culture.

“It’s an interesting and thoughtful paper,” says Paul Sackett, a management professor at University of Minnesota, who was not involved in the research. “To me, the right interpretation of the work would be that it highlights a need to understand what high-IQ leaders do that leads to lower perceptions by followers,” he says. “The wrong interpretation would be, ‘Don’t hire high-IQ leaders.’ ”

Source: Why People Dislike Really Smart Leaders – Scientific American