Humans Didn’t Evolve From a Single Ancestral Population

In the 1980s, scientists learned that all humans living today are descended from a woman, dubbed “Mitochondrial Eve,” who lived in Africa between 150,000 and 200,000 years ago. This discovery, along with other evidence, suggested humans evolved from a single ancestral population—an interpretation that is not standing the test of time. The story of human evolution, as the latest research suggests, is more complicated than that.

A new commentary paper published today in Trends in Ecology & Evolution is challenging the predominant view that our species, Homo sapiens, emerged from a single ancestral population and a single geographic region in Africa. By looking at some of the latest archaeological, fossil, genetic, and environmental evidence, a team of international experts led by Eleanor Scerri from Oxford’s School of Archaeology has presented an alternative story of human evolution, one showing that our species emerged from isolated populations scattered across Africa, who occasionally came together to interbreed. Gradually, this intermingling of genetic characteristics produced our species.

Indeed, the origin of Homo sapiens isn’t as neat and tidy as we’ve been led to believe.

[…]

“The idea that humans emerged from one population and progressed in a simple linear fashion to a modern physical appearance is attractive, but unfortunately no longer a very good fit with the available information,” said Scerri. “Instead it looks very much like humans emerged within a complex set of populations that were scattered across Africa.”

The reality, as suggested by this latest research, is that human ancestors were spread across Africa, segregated by diverse habitats and shifting environmental boundaries, such as forests and deserts. These prolonged periods of isolation gave rise to a surprising variety of human forms, and a diverse array of adaptive traits. When separated groups interbred, they preserved the best characteristics that evolution had to offer. Consequently, the authors say that terms like “archaic humans” and “anatomically modern humans” are increasingly problematic given the evidence.

Scerri said occasional episodes of interbreeding between these different, semi-isolated populations created a diverse “meta-population” of humans within Africa, from which our species emerged over a very long time. Our species, Homo sapiens, emerged around 300,000 years ago, but certain characteristics, like a round brain case, pronounced chin, and a small face, didn’t appear together in a single individual until about 100,000 years ago, and possibly not until 40,000 years ago—long after genetic and other archaeological evidence tells us our species was already in existence. Isolated populations came together to exchange genes and culture—two interrelated processes that shaped our species, explained Scerri.

The new paper, instead of providing new evidence, provides a comprehensive review and analysis of what the latest scientific literature is telling us about human evolution, starting around 300,000 years ago. The researchers found that human fossils from different regions of Africa all featured a diverse mix of modern and more “archaic” physical characteristics. The earliest of these date to between 300,000 and 250,000 years ago, and originate from opposite ends of Africa, stretching from the southern tip of the continent to its northernmost points. Many of these fossils were found with sophisticated archaeological items associated with our species, including specialized tools mounted onto wooden handles and shafts, and often utilizing different bindings and glues. These artifacts, like the diverse fossils, appeared across Africa around the same time, and studies of their distribution suggest they belonged to discrete groups. At the same time, genetic data points to the presence of multiple populations.

“On the methodological side, we can also see that inferences of genetic information that don’t account for subdivisions between populations can also generate very misleading information,” said Scerri.

By studying shifts in rivers, deserts, forests, and other physical barriers, the researchers were able to chronicle the geographic changes in Africa that facilitated migration, introducing opportunities for contact among groups that were previously separated. These groups, after long periods of isolation, were able to interact and interbreed, sometimes splitting off again and undergoing renewed periods of extended isolation.

[…]

Jean-Jacques Hublin, a scientist at the Max Planck Institute for Evolutionary Anthropology who wasn’t involved in the new study, said the new commentary paper is presenting what is quickly becoming the dominant view on this topic.

“There is growing evidence that the emergence of so-called ‘modern humans’ did not occur in a restricted cradle in sub-Saharan Africa and at a precise point in time,” Hublin told Gizmodo. “Rather, it involved several populations across the continent and was a fundamentally gradual process.”

Source: Humans Didn’t Evolve From a Single Ancestral Population

EU asks you to tell them if you want Daylight Saving Time

Objective of the consultation

Following a number of requests from citizens, from the European Parliament, and from certain EU Member States, the Commission has decided to investigate the functioning of the current EU summertime arrangements and to assess whether or not they should be changed.

In this context, the Commission is interested in gathering the views of European citizens, stakeholders and Member States on the current EU summertime arrangements and on any potential change to those arrangements.

How to submit your response

The online questionnaire is accessible in all official EU languages (except Irish) and replies may be submitted in any EU language. We do encourage you to answer in English where possible, though.

You may pause at any time and continue later. Once you have submitted your answers, you can download a copy of your completed responses.

Source: Public Consultation on summertime arrangements | European Commission

Versius Robot allows keyhole surgery to be performed with 1/2 hour training instead of 80 sessions

It is the most exacting of surgical skills: tying a knot deep inside a patient’s abdomen, pivoting long graspers through keyhole incisions with no direct view of the thread.

Trainee surgeons typically require 60 to 80 hours of practice, but in a mock-up operating theatre outside Cambridge, a non-medic with just a few hours of experience is expertly wielding a hook-shaped needle – in this case stitching a square of pink sponge rather than an artery or appendix.

The feat is performed with the assistance of Versius, the world’s smallest surgical robot, which could be used in NHS operating theatres for the first time later this year if approved for clinical use. Versius is one of a handful of advanced surgical robots that are predicted to transform the way operations are performed by allowing tens or hundreds of thousands more surgeries each year to be carried out as keyhole procedures.

[…]

The Versius robot cuts down the time required to learn to tie a surgical knot from more than 100 training sessions, when using traditional manual tools, to just half an hour, according to CMR co-founder Mark Slack.

[…]

Versius comprises three robotic limbs – each slightly larger than a human arm, complete with shoulder, elbow and wrist joints – mounted on bar-stool-sized mobile units.

Controlled by a surgeon at a console, the limbs rise, fall and swivel silently and smoothly. The robot is designed to carry out a wide range of keyhole procedures, including hysterectomies, prostate removal, ear, nose and throat surgery, and hernia repair. CMR claims the costs of using the robot will not be significantly higher than for a conventional keyhole procedure.

Source: The robots helping NHS surgeons perform better, faster – and for longer | Society | The Guardian

Open plan offices flop – you talk less, IM more, if forced to flee a cubicle

Open plan offices don’t deliver their promised benefits of more face-to-face collaboration and instead make us misanthropic recluses and more likely to use electronic communications tools.

So says a new article in the Philosophical Transactions of the Royal Society B, by Harvard academics Ethan S. Bernstein and Stephen Turban. The pair studied two Fortune 500 companies that adopted open office designs and wrote up the results as “The impact of the ‘open’ workspace on human collaboration”.

[…]

Analysis of the data revealed that “volume of face-to-face interaction decreased significantly (approx. 70%) in both cases, with an associated increase in electronic interaction.”

“In short, rather than prompting increasingly vibrant face-to-face collaboration, open architecture appeared to trigger a natural human response to socially withdraw from officemates and interact instead over email and IM.”

In the first workplace studied, “IM message activity increased by 67% (99 more messages) and words sent by IM increased by 75% (850 more words). Thus — to restate more precisely — in boundaryless space, electronic interaction replaced F2F interaction.”

The second workplace produced similar results.

The authors reach three conclusions, the first of which is that open offices “can dampen F2F interaction, as employees find other strategies to preserve their privacy; for example, by choosing a different channel through which to communicate.”

Source: Open plan offices flop – you talk less, IM more, if forced to flee a cubicle • The Register

More on how social media hacks brains to addict users

In a follow-up to How programmers addict you to social media, games and your mobile phone

Ex-Facebook president Sean Parker: site made to exploit human ‘vulnerability’

He explained that when Facebook was being developed the objective was: “How do we consume as much of your time and conscious attention as possible?” It was this mindset that led to the creation of features such as the “like” button that would give users “a little dopamine hit” to encourage them to upload more content.

“It’s a social-validation feedback loop … exactly the kind of thing that a hacker like myself would come up with, because you’re exploiting a vulnerability in human psychology.”

[…]

Parker is not the only Silicon Valley entrepreneur to express regret over the technologies he helped to develop. The former Googler Tristan Harris was one of several technologists who criticized the industry in interviews with the Guardian in October.

“All of us are jacked into this system,” he said. “All of our minds can be hijacked. Our choices are not as free as we think they are.”

Aza Raskin on Google Search Results and How He Invented the Infinite Scroll

Social media apps are ‘deliberately’ addictive to users

Social media companies are deliberately addicting users to their products for financial gain, Silicon Valley insiders have told the BBC’s Panorama programme.

“It’s as if they’re taking behavioural cocaine and just sprinkling it all over your interface and that’s the thing that keeps you like coming back and back and back”, said former Mozilla and Jawbone employee Aza Raskin.

“Behind every screen on your phone, there are generally like literally a thousand engineers that have worked on this thing to try to make it maximally addicting” he added.

In 2006 Mr Raskin, a leading technology engineer himself, designed infinite scroll, one of the features of many apps that is now seen as highly habit forming. At the time, he was working for Humanized – a computer user-interface consultancy.


Infinite scroll allows users to endlessly swipe down through content without clicking.

“If you don’t give your brain time to catch up with your impulses,” Mr Raskin said, “you just keep scrolling.”

He said the innovation kept users looking at their phones far longer than necessary.

Mr Raskin said he had not set out to addict people and now felt guilty about it.

But, he said, many designers were driven to create addictive app features by the business models of the big companies that employed them.

“In order to get the next round of funding, in order to get your stock price up, the amount of time that people spend on your app has to go up,” he said.

“So, when you put that much pressure on that one number, you’re going to start trying to invent new ways of getting people to stay hooked.”

Could electrically stimulating criminals’ brains prevent crime?

A new study by a team of international researchers from the University of Pennsylvania and Nanyang Technological University suggests that electrically stimulating the prefrontal cortex can reduce the desire to carry out violent antisocial acts by over 50 percent. The research, while undeniably compelling, raises a whole host of confronting ethical questions, not just over the feasibility of actually bringing this technology into our legal system, but also over whether we should.

The intriguing experiment took 81 healthy adults and split them into two groups. One group received transcranial direct-current stimulation (tDCS) on the dorsolateral prefrontal cortex for 20 minutes, while the other placebo group received just 30 seconds of current and then nothing for the remaining 19 minutes.

Following the electrical stimulation all the participants were presented with two vignettes and asked to rate, from 0 to 10, how likely they would be to behave as the protagonist in the stories. One hypothetical scenario outlined a physical assault, while the other was about sexual assault. The results were fascinating, with participants receiving the tDCS reporting they would be between 47 and 70 percent less likely to carry out the violent acts compared to the blind placebo control.

“We chose our approach and behavioral tasks specifically based on our hypotheses about which brain areas might be relevant to generating aggressive intentions,” says Roy Hamilton, senior author on the study. “We were pleased to see at least some of our major predictions borne out.”

[…]

Transcranial direct-current stimulation is a little different to electroshock therapy or, more accurately, electroconvulsive therapy (ECT). Classical ECT involves significant electrical stimulation to the brain at thresholds intended to induce seizures. It is also not especially targeted, shooting electrical currents across the whole brain.

On the other hand, tDCS is much more subtle, delivering a continual low direct current to specific areas of the brain via electrodes on the head. The level of electrical current administered in tDCS sessions is often imperceptible to a subject and at worst results in no more than a mild skin irritation.

[…]

Although transcranial magnetic stimulation (TMS) is the more commonly used approach for neuromodulation in current clinical practice, tDCS is perhaps the more pragmatic and implementable form of the technology. It is cheaper and easier to administer than TMS, can often be used from home, and would be much more straightforward to integrate into widespread use.

Of course, the reality of what is being implied here is a lot more complicated than simply finding the most appropriate technology. Roy Hamilton quite rightly notes in relation to his new study that, “The ability to manipulate such complex and fundamental aspects of cognition and behavior from outside the body has tremendous social, ethical, and possibly someday legal implications.”

[…]

Of course, while the burgeoning field of neurolaw is grappling with what this research means for legal ideas of individual responsibility, this new study raises a whole host of complicated ethical and social questions. If a short, and non-invasive, series of targeted tDCS sessions could reduce recidivism, then should we consider using it in prisons?

“Much of the focus in understanding causes of crime has been on social causation,” says psychologist Adrian Raine, co-author on the new study. “That’s important, but research from brain imaging and genetics has also shown that half of the variance in violence can be chalked up to biological factors. We’re trying to find benign biological interventions that society will accept, and transcranial direct-current stimulation is minimal risk. This isn’t a frontal lobotomy. In fact, we’re saying the opposite, that the front part of the brain needs to be better connected to the rest of the brain.”

Italian neurosurgeon Sergio Canavero penned a controversial essay in 2014 for the journal Frontiers in Human Neuroscience arguing that non-invasive neurostimulation should be experimentally applied to criminal psychopaths and repeat offenders despite any legal or ethical dilemmas. Canavero argues, “it is imperative to “switch” [a criminal’s] right/wrong circuitry to a socially non-disruptive mode.”

The quite dramatic proposal is to “remodel” a criminal’s “aberrant circuits” via either a series of intermittent brain stimulation treatments or, more startlingly, through some kind of implanted intracranial electrode system that can both electrically modulate key areas of the brain and remotely monitor behaviorally inappropriate neurological activity.

This isn’t the first time Canavero has suggested extraordinary medical experiments. You might remember his name from his ongoing work to be the first surgeon to perform a human head transplant.

[…]

“This is not the magic bullet that’s going to wipe away aggression and crime,” says Raine. “But could transcranial direct-current stimulation be offered as an intervention technique for first-time offenders to reduce their likelihood of recommitting a violent act?”

The key question of consent is one that many researchers aren’t really grappling with. Of course, there’s no chance convicted criminals would ever be forced to undergo this kind of procedure in a future where neuromodulation is integrated into our legal system. And behavioral alterations through electrical brain stimulation would never be forced upon people who don’t comply to social norms – right?

This is the infinitely compelling brave new world of neuroscience.

Source: Could electrically stimulating criminals’ brains prevent crime?

How programmers addict you to social media, games and your mobile phone

If you look at the current climate, the largest companies are the ones that hook you into their channel, whether it is a game, a website, shopping or social media. Quite a lot of research has been done into how much time we spend watching TV and looking at our mobiles, showing differing numbers, all of which are surprisingly high. The New York Post says Americans check their phones 80 times per day, The Daily Mail says 110 times, Inc has a study from Qualtrics and Accel with 150 times and Business Insider has people touching their phones 2,617 times per day.

This is nurtured behaviour, and there is quite a bit of research into exactly how they do it:

Social Networking Sites and Addiction: Ten Lessons Learned (academic paper)
Online social networking sites (SNSs) have gained increasing popularity in the last decade, with individuals engaging in SNSs to connect with others who share similar interests. The perceived need to be online may result in compulsive use of SNSs, which in extreme cases may result in symptoms and consequences traditionally associated with substance-related addictions. In order to present new insights into online social networking and addiction, in this paper, 10 lessons learned concerning online social networking sites and addiction based on the insights derived from recent empirical research will be presented. These are: (i) social networking and social media use are not the same; (ii) social networking is eclectic; (iii) social networking is a way of being; (iv) individuals can become addicted to using social networking sites; (v) Facebook addiction is only one example of SNS addiction; (vi) fear of missing out (FOMO) may be part of SNS addiction; (vii) smartphone addiction may be part of SNS addiction; (viii) nomophobia may be part of SNS addiction; (ix) there are sociodemographic differences in SNS addiction; and (x) there are methodological problems with research to date. These are discussed in turn. Recommendations for research and clinical applications are provided.

Hooked: How to Build Habit-Forming Products (Book)
Why do some products capture widespread attention while others flop? What makes us engage with certain products out of sheer habit? Is there a pattern underlying how technologies hook us?

Nir Eyal answers these questions (and many more) by explaining the Hook Model—a four-step process embedded into the products of many successful companies to subtly encourage customer behavior. Through consecutive “hook cycles,” these products reach their ultimate goal of bringing users back again and again without depending on costly advertising or aggressive messaging.

7 Ways Facebook Keeps You Addicted (and how to apply the lessons to your products) (article)

One of the key reasons why it is so addictive is “operant conditioning”. It is based on the scientific principle of variable rewards, discovered by B. F. Skinner (an early exponent of the school of behaviourism) in the 1930s while performing experiments with rats.

The secret?

Not rewarding all actions but only randomly.

Most of our emails are boring business emails and occasionally we find an enticing email that keeps us coming back for more. That’s variable reward.

That’s one way Facebook creates addiction
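Skinner's variable-ratio schedule is simple to sketch. A minimal simulation, with an illustrative ratio and action count that are my assumptions rather than anything from the articles above, shows the key difference from a fixed schedule: the same average payout rate, but unpredictable gaps between rewards.

```python
import random

def reward_gaps(schedule, n_actions=500, ratio=10, seed=42):
    """Gaps (in actions) between successive rewards under a fixed- or
    variable-ratio schedule. Parameters are illustrative assumptions."""
    rng = random.Random(seed)
    gaps, since_last = [], 0
    for _ in range(n_actions):
        since_last += 1
        if schedule == "fixed":
            rewarded = since_last == ratio       # every 10th action pays out
        else:
            rewarded = rng.random() < 1 / ratio  # ~1 in 10, unpredictably
        if rewarded:
            gaps.append(since_last)
            since_last = 0
    return gaps

fixed = reward_gaps("fixed")        # perfectly regular: every gap is 10
variable = reward_gaps("variable")  # irregular: gaps vary widely
```

Under the fixed schedule a subject can predict exactly when the next reward comes; under the variable one the very next action might pay off, which is the uncertainty that keeps people pulling the lever, or refreshing the feed.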

The Secret Ways Social Media Is Built for Addiction

On February 9, 2009, Facebook introduced the Like button. Initially, the button was an innocent thing. It had nothing to do with hijacking the social reward systems of a user’s brain.

“The main intention I had was to make positivity the path of least resistance,” explains Justin Rosenstein, one of the four Facebook designers behind the button. “And I think it succeeded in its goals, but it also created large unintended negative side effects. In a way, it was too successful.”

Today, most of us reach for Snapchat, Instagram, Facebook, or Twitter with one vague hope in mind: maybe someone liked my stuff. And it’s this craving for validation, experienced by billions around the globe, that’s currently pushing platform engagement in ways that in 2009 were unimaginable. But more than that, it’s driving profits to levels that were previously impossible.

“The attention economy” is a relatively new term. It describes the supply and demand of a person’s attention, which is the commodity traded on the internet. The business model is simple: the more attention a platform can pull, the more effective its advertising space becomes, allowing it to charge advertisers more.

Behavioral Game Design (article)

Every computer game is designed around the same central element: the player. While the hardware and software for games may change, the psychology underlying how players learn and react to the game is a constant. The study of the mind has actually come up with quite a few findings that can inform game design, but most of these have been published in scientific journals and other esoteric formats inaccessible to designers. Ironically, many of these discoveries used simple computer games as tools to explore how people learn and act under different conditions.

The techniques that I’ll discuss in this article generally fall under the heading of behavioral psychology. Best known for the work done on animals in the field, behavioral psychology focuses on experiments and observable actions. One hallmark of behavioral research is that most of the major experimental discoveries are species-independent and can be found in anything from birds to fish to humans. What behavioral psychologists look for (and what will be our focus here) are general “rules” for learning and for how minds respond to their environment. Because of the species- and context-free nature of these rules, they can easily be applied to novel domains such as computer game design. Unlike game theory, which stresses how a player should react to a situation, this article will focus on how they really do react to certain stereotypical conditions.

What is being offered here is not a blueprint for perfect games; it is a primer on some of the basic ways people react to different patterns of rewards. Every computer game is implicitly asking its players to react in certain ways. Psychology can offer a framework and a vocabulary for understanding what we are already telling our players.

5 Creepy Ways Video Games Are Trying to Get You Addicted (article)

The Slot Machine in Your Pocket (brilliant article!)

When we get sucked into our smartphones or distracted, we think it’s just an accident and our responsibility. But it’s not. It’s also because smartphones and apps hijack our innate psychological biases and vulnerabilities.

I learned about our minds’ vulnerabilities when I was a magician. Magicians start by looking for blind spots, vulnerabilities and biases of people’s minds, so they can influence what people do without them even realizing it. Once you know how to push people’s buttons, you can play them like a piano. And this is exactly what technology does to your mind. App designers play your psychological vulnerabilities in the race to grab your attention.

I want to show you how they do it, and offer hope that we have an opportunity to demand a different future from technology companies.

If you’re an app, how do you keep people hooked? Turn yourself into a slot machine.

There is also a backlash to this movement.

How Technology is Hijacking Your Mind — from a Magician and Google Design Ethicist

I’m an expert on how technology hijacks our psychological vulnerabilities. That’s why I spent the last three years as a Design Ethicist at Google caring about how to design things in a way that defends a billion people’s minds from getting hijacked.

Humanetech.com

Technology is hijacking our minds and society.

Our world-class team of deeply concerned former tech insiders and CEOs intimately understands the culture, business incentives, design techniques, and organizational structures driving how technology hijacks our minds.

Since 2013, we’ve raised awareness of the problem within tech companies and for millions of people through broad media attention, convened top industry executives, and advised political leaders. Building on this start, we are advancing thoughtful solutions to change the system.

Why is this problem so urgent?

Technology that tears apart our common reality and truth, constantly shreds our attention, or causes us to feel isolated makes it impossible to solve the world’s other pressing problems like climate change, poverty, and polarization.

No one wants technology like that. Which means we’re all actually on the same team: Team Humanity, to realign technology with humanity’s best interests.

What is Time Well Spent (Part I): Design Distinctions

With Time Well Spent, we want technology that cares about helping us spend our time, and our lives, well – not seducing us into the most screen time, always-on interruptions or distractions.

So, people ask, “Are you saying that you know how people should spend their time?” Of course not. Let’s first establish what Time Well Spent isn’t:

It is not a universal, normative view of how people should spend their time.
It is not saying that screen time is bad, or that we should turn it all off.
It is not saying that specific categories of apps (like social media or games) are bad.

Stanford brainiacs say they can predict Reddit raids

A study [PDF] based on observations from 36,000 subreddit communities has found that online dust-ups can be predicted, and the people most likely to cause them can be identified.

“Our analysis revealed a number of important trends related to conflict on Reddit, with general implications for intercommunity conflict on the web.”

Among the takeaways were that a small group of bad actors are indeed stirring up most of the conflict; around 75 per cent of the raids were triggered by 1 per cent of users.

The study also noted that ignoring the trolls doesn’t always work – conflicts grow worse when users stay within ‘echo chambers’ on their own threads, and long-term traffic losses were lessened when the ‘defending’ users directly confronted the forum intruders rather than keep to themselves.

Perhaps the most important takeaway, however, was that forum conflicts could actually be predicted. The Stanford group say they developed a long short-term memory (LSTM) deep-learning model that, when trained on the set of Reddit posts and user information gathered over a 40-month period, was able to reliably flag when a conflict or raid was likely to flare up on a subreddit.

Now, the Stanford group says it would like to extend the research to other platforms (such as Facebook and Twitter) and look at areas not addressed in the first report, including forums that restrict negative content.

Source: Stanford brainiacs say they can predict Reddit raids • The Register

If you’re so smart, why aren’t you rich? Turns out it’s just chance.

The most successful people are not the most talented, just the luckiest, a new computer model of wealth creation confirms. Taking that into account can maximize return on many kinds of investment.
[…]
The distribution of wealth follows a well-known pattern sometimes called an 80:20 rule: 80 percent of the wealth is owned by 20 percent of the people. Indeed, a report last year concluded that just eight men had a total wealth equivalent to that of the world’s poorest 3.8 billion people.
[…]
while wealth distribution follows a power law, the distribution of human skills generally follows a normal distribution that is symmetric about an average value. For example, intelligence, as measured by IQ tests, follows this pattern. Average IQ is 100, but nobody has an IQ of 1,000 or 10,000.

The same is true of effort, as measured by hours worked. Some people work more hours than average and some work less, but nobody works a billion times more hours than anybody else.

And yet when it comes to the rewards for this work, some people do have billions of times more wealth than other people. What’s more, numerous studies have shown that the wealthiest people are generally not the most talented by other measures.
[…]
Alessandro Pluchino at the University of Catania in Italy and a couple of colleagues have created a computer model of human talent and the way people use it to exploit opportunities in life. The model allows the team to study the role of chance in this process.

The results are something of an eye-opener. Their simulations accurately reproduce the wealth distribution in the real world. But the wealthiest individuals are not the most talented (although they must have a certain level of talent). They are the luckiest.
[…]
Pluchino and co’s model is straightforward. It consists of N people, each with a certain level of talent (skill, intelligence, ability, and so on). This talent is distributed normally around some average level, with some standard deviation. So some people are more talented than average and some are less so, but nobody is orders of magnitude more talented than anybody else.
[…]
The computer model charts each individual through a working life of 40 years. During this time, the individuals experience lucky events that they can exploit to increase their wealth if they are talented enough.

However, they also experience unlucky events that reduce their wealth. These events occur at random.

At the end of the 40 years, Pluchino and co rank the individuals by wealth and study the characteristics of the most successful. They also calculate the wealth distribution. They then repeat the simulation many times to check the robustness of the outcome.
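The model as described can be sketched in a few lines of Python. The population size, event rates, talent distribution, and starting capital below are invented for the example; they are not the paper's exact parameters, but the mechanism is the same: normally distributed talent, equal starting wealth, and random lucky or unlucky events, where only the lucky ones are gated by talent.

```python
import random

def simulate(n_people=1000, years=40, events_per_year=2, seed=0):
    """Toy version of the talent-vs-luck model described above.
    All numeric settings are illustrative assumptions."""
    rng = random.Random(seed)
    # Talent is normal around an average; nobody is orders of
    # magnitude more talented than anybody else.
    talent = [min(1.0, max(0.0, rng.gauss(0.6, 0.1))) for _ in range(n_people)]
    capital = [10.0] * n_people  # everyone starts with the same capital
    for _ in range(years * events_per_year):
        for person in range(n_people):
            if rng.random() >= 0.1:
                continue  # no event hits this person in this step
            if rng.random() < 0.5:
                # Lucky event: exploited (capital doubles) only with
                # probability equal to the person's talent.
                if rng.random() < talent[person]:
                    capital[person] *= 2
            else:
                capital[person] /= 2  # unlucky event: capital halves

    return talent, capital

talent, capital = simulate()
richest = max(range(len(capital)), key=capital.__getitem__)
most_talented = max(range(len(talent)), key=talent.__getitem__)
```

Even in this crude sketch, the multiplicative run of random events produces a heavily skewed wealth distribution out of a narrow, symmetric talent distribution, and the richest individual is typically not the most talented one.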

When the team rank individuals by wealth, the distribution is exactly like that seen in real-world societies. “The ‘80-20’ rule is respected, since 80 percent of the population owns only 20 percent of the total capital, while the remaining 20 percent owns 80 percent of the same capital,” report Pluchino and co.

That may not be surprising or unfair if the wealthiest 20 percent turn out to be the most talented. But that isn’t what happens. The wealthiest individuals are typically not the most talented or anywhere near it. “The maximum success never coincides with the maximum talent, and vice-versa,” say the researchers.

So if not talent, what other factor causes this skewed wealth distribution? “Our simulation clearly shows that such a factor is just pure luck,” say Pluchino and co.

The team shows this by ranking individuals according to the number of lucky and unlucky events they experience throughout their 40-year careers. “It is evident that the most successful individuals are also the luckiest ones,” they say. “And the less successful individuals are also the unluckiest ones.”
[…]
They use their model to explore different kinds of funding models to see which produce the best returns when luck is taken into account.

The team studied three models, in which research funding is distributed equally to all scientists; distributed randomly to a subset of scientists; or given preferentially to those who have been most successful in the past. Which of these is the best strategy?

The strategy that delivers the best returns, it turns out, is to divide the funding equally among all researchers. And the second- and third-best strategies involve distributing it at random to 10 or 20 percent of scientists.

In these cases, the researchers are best able to take advantage of the serendipitous discoveries they make from time to time. In hindsight, it is obvious that the fact a scientist has made an important chance discovery in the past does not mean he or she is more likely to make one in the future.

A similar approach could also be applied to investment in other kinds of enterprises, such as small or large businesses, tech startups, education that increases talent, or even the creation of random lucky events.

Source: If you’re so smart, why aren’t you rich? Turns out it’s just chance.

What Is Ultra-Processed Food?

We eat a lot of ultra-processed food, and these foods tend to be sugary and not so great for us. But the problem isn’t necessarily the fact that they’re ultra-processed. This is a weird and arguably unfair way to categorize foods, so let’s take a look at what “ultra-processed” really means.

This terminology comes from a classification scheme called NOVA that splits foods into four groups:

Unprocessed or “minimally processed” foods (group 1) include fruits, vegetables, and meats. Perhaps you’ve pulled a carrot out of the ground and washed it, or killed a cow and sliced off a steak. Foods in this category can be processed in ways that don’t add extra ingredients. They can be cooked, ground, dried, or frozen.

Processed culinary ingredients (group 2) include sugar, salt, and oils. If you combine ingredients in this group, for example to make salted butter, they stay in this group.

Processed foods (group 3) are what you get when you combine groups 1 and 2. Bread, wine, and canned veggies are included. Additives are allowed if they “preserve [a food’s] original properties” like ascorbic acid added to canned fruit to keep it from browning.

Ultra-processed foods (group 4) don’t have a strict definition, but NOVA hints at some properties. They “typically” have five or more ingredients. They may be aggressively marketed and highly profitable. A food is automatically in group 4 if it includes “substances not commonly used in culinary preparations, and additives whose purpose is to imitate sensory qualities of group 1 foods or of culinary preparations of these foods, or to disguise undesirable sensory qualities of the final product.”

That last group feels a little disingenous. I’ve definitely seen things in my kitchen that are supposedly only used to make “ultra-processed” foods: food coloring, flavor extracts, artificial sweeteners, anti-caking agents (cornstarch, anyone?) and tools for extrusion and molding, to name a few.
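For concreteness, the four groups can be sketched as a toy lookup. The food-to-group assignments below are my own examples based on the definitions above, not an official NOVA list:

```python
# Toy lookup for the four NOVA groups; real classification depends on how a
# food was processed and what was added, not on its name.
NOVA_EXAMPLES = {
    1: {"carrot", "steak", "frozen vegetables"},        # unprocessed/minimally processed
    2: {"sugar", "salt", "olive oil", "salted butter"}, # processed culinary ingredients
    3: {"bread", "wine", "canned vegetables"},          # processed (group 1 + group 2)
    4: {"soft drink", "instant soup", "snack cake"},    # ultra-processed
}

def nova_group(food):
    """Return the NOVA group for a known example food, or None if unlisted."""
    for group, foods in NOVA_EXAMPLES.items():
        if food in foods:
            return group
    return None
```

The `None` case is the point: plenty of real foods don't slot neatly into a group, which is part of why the category is fuzzy.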
[…]
So when we talk about ultra-processed foods, we have to remember that it’s a vague category that only loosely communicates the nutrition of its foods. Just like BMI combines muscly athletes with obese people because it makes for convenient math, NOVA categories combine things of drastically different nutritional quality.

Source: What Is Ultra-Processed Food?

Why hiring the ‘best’ people produces the least creative results

Yet the fallacy of meritocracy persists. Corporations, non-profits, governments, universities and even preschools test, score and hire the ‘best’. This all but guarantees not creating the best team. Ranking people by common criteria produces homogeneity. And when biases creep in, it results in people who look like those making the decisions. That’s not likely to lead to breakthroughs. As Astro Teller, CEO of X, the ‘moonshot factory’ at Alphabet, Google’s parent company, has said: ‘Having people who have different mental perspectives is what’s important. If you want to explore things you haven’t explored, having people who look just like you and think just like you is not the best way.’ We must see the forest.

Source: Why hiring the ‘best’ people produces the least creative results — Quartz

Cheddar Man: Britain’s first men were black. And so were Europe’s.

New research into ancient DNA extracted from the skeleton has helped scientists to build a portrait of Cheddar Man and his life in Mesolithic Britain. The biggest surprise, perhaps, is that some of the earliest modern human inhabitants of Britain may not have looked the way you might expect.

Dr Tom Booth is a postdoctoral researcher working closely with the Museum’s human remains collection to investigate human adaptation to changing environments.

‘Until recently it was always assumed that humans quickly adapted to have paler skin after entering Europe about 45,000 years ago,’ says Tom. ‘Pale skin is better at absorbing UV light and helps humans avoid vitamin D deficiency in climates with less sunlight.’

However, Cheddar Man has the genetic markers of skin pigmentation usually associated with sub-Saharan Africa. This discovery is consistent with a number of other Mesolithic human remains discovered throughout Europe.

Source: Cheddar Man: Mesolithic Britain’s blue-eyed boy | Natural History Museum

Why People Dislike Really Smart Leaders

Intelligence makes for better leaders—from undergraduates to executives to presidents—according to multiple studies. It certainly makes sense that handling a market shift or legislative logjam requires cognitive oomph. But new research on leadership suggests that, at a certain point, having a higher IQ stops helping and starts hurting.
[…]
The researchers looked at 379 male and female business leaders in 30 countries, across fields that included banking, retail and technology. The managers took IQ tests (an imperfect but robust predictor of performance in many areas), and each was rated on leadership style and effectiveness by an average of eight co-workers. IQ positively correlated with ratings of leader effectiveness, strategy formation, vision and several other characteristics—up to a point. The ratings peaked at an IQ of around 120, which is higher than roughly 80 percent of office workers. Beyond that, the ratings declined. The researchers suggest the “ideal” IQ could be higher or lower in various fields, depending on whether technical versus social skills are more valued in a given work culture.

“It’s an interesting and thoughtful paper,” says Paul Sackett, a management professor at University of Minnesota, who was not involved in the research. “To me, the right interpretation of the work would be that it highlights a need to understand what high-IQ leaders do that leads to lower perceptions by followers,” he says. “The wrong interpretation would be, ‘Don’t hire high-IQ leaders.’ ”

Source: Why People Dislike Really Smart Leaders – Scientific American

Computer program that tries to predict whether you’ll reoffend is racist, wrong, and has been in use since 2000.

One widely used criminal risk assessment tool, Correctional Offender Management Profiling for Alternative Sanctions (COMPAS; Northpointe, which rebranded itself to “equivant” in January 2017), has been used to assess more than 1 million offenders since it was developed in 1998. The recidivism prediction component of COMPAS—the recidivism risk scale—has been in use since 2000. This software predicts a defendant’s risk of committing a misdemeanor or felony within 2 years of assessment from 137 features about an individual and the individual’s past criminal record.

Although the data used by COMPAS do not include an individual’s race, other aspects of the data may be correlated with race in ways that can lead to racial disparities in the predictions. In May 2016, writing for ProPublica, Angwin et al. (2) analyzed the efficacy of COMPAS on more than 7000 individuals arrested in Broward County, Florida between 2013 and 2014. This analysis indicated that the predictions were unreliable and racially biased. COMPAS’s overall accuracy for white defendants is 67.0%, only slightly higher than its accuracy of 63.8% for black defendants. The mistakes made by COMPAS, however, affected black and white defendants differently: Black defendants who did not recidivate were incorrectly predicted to reoffend at a rate of 44.9%, nearly twice as high as their white counterparts at 23.5%; and white defendants who did recidivate were incorrectly predicted to not reoffend at a rate of 47.7%, nearly twice as high as their black counterparts at 28.0%. In other words, COMPAS scores appeared to favor white defendants over black defendants by underpredicting recidivism for white and overpredicting recidivism for black defendants.
[…]
We have shown that commercial software that is widely used to predict recidivism is no more accurate or fair than the predictions of people with little to no criminal justice expertise who responded to an online survey.
[…]
Although Northpointe does not reveal the details of their COMPAS software, we have shown that their prediction algorithm is equivalent to a simple linear classifier. In addition, despite the impressive sounding use of 137 features, it would appear that a linear classifier based on only 2 features—age and total number of previous convictions—is all that is required to yield the same prediction accuracy as COMPAS.
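To make that concrete, a two-feature linear classifier is nothing more than a weighted sum and a threshold. The weights below are invented for illustration; they are not the coefficients the authors fitted:

```python
# Hypothetical two-feature linear classifier in the spirit of the finding that
# age and number of prior convictions alone match COMPAS's accuracy.
# The weights are made up for illustration, not fitted to any data.
def predict_reoffend(age, priors, w_age=-0.05, w_priors=0.3, bias=0.6):
    score = bias + w_age * age + w_priors * priors
    return score > 0  # True = predicted to reoffend within two years
```

With these made-up weights, a young defendant with several prior convictions is flagged, while an older defendant with none is not.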

The question of accurate prediction of recidivism is not limited to COMPAS. A review of nine different algorithmic approaches to predicting recidivism found that eight of the nine approaches failed to make accurate predictions (including COMPAS) (13). In addition, a meta-analysis of nine algorithmic approaches found only moderate levels of predictive accuracy across all approaches and concluded that these techniques should not be the sole basis for criminal justice decision-making, particularly in decisions of preventative detention.
[…]
When considering using software such as COMPAS in making decisions that will significantly affect the lives and well-being of criminal defendants, it is valuable to ask whether we would put these decisions in the hands of random people who respond to an online survey because, in the end, the results from these two approaches appear to be indistinguishable.

Source: The accuracy, fairness, and limits of predicting recidivism | Science Advances

Older Adults’ Forgetfulness Tied to Out-of-Sync Brain Rhythms in Sleep

During deep sleep, older people have less coordination between two brain waves that are important to saving new memories, a team reports in the journal Neuron.

To find out, Matthew Walker, a sleep scientist at UC Berkeley, and a team of colleagues had 20 young adults learn 120 pairs of words. “Then we put electrodes on their head and we had them sleep,” he says.

The electrodes let researchers monitor the electrical waves produced by the brain during deep sleep. They focused on the interaction between slow waves, which occur every second or so, and faster waves called sleep spindles, which occur more than 12 times a second.

The next morning the volunteers took a test to see how many word pairs they could still remember. And it turned out their performance was determined by how well their slow waves and spindles had synchronized during deep sleep.

Next, the team repeated the experiment with 32 people in their 60s and 70s. Their brain waves were less synchronized during deep sleep. They also remembered fewer word pairs the next morning.

And, just like with young people, performance on the memory test was determined by how well their brain waves kept the beat, says Randolph Helfrich, an author of the new study and a postdoctoral fellow at UC Berkeley.

The team also found a likely reason for the lack of coordination associated with aging: atrophy of an area of the brain involved in producing deep sleep. People with more atrophy had less rhythm in the brain, Walker says.

But the study also suggests that it’s possible to improve an impaired memory by re-synchronizing brain rhythms during sleep.

One way to do this would be by applying electrical or magnetic pulses through the scalp. “The idea is to boost those brain waves and bring them back together,” Helfrich says.

Walker already has plans to test this approach to synchronizing brain waves.

Source: Older Adults’ Forgetfulness Tied To Faulty Brain Rhythms In Sleep : Shots – Health News : NPR

Empirical evidence on how to interrogate: build rapport, not conflict

The Alisons, husband and wife, have done something no scholars of interrogation have been able to do before. Working in close cooperation with the police, who allowed them access to more than 1,000 hours of tapes, they have observed and analysed hundreds of real-world interviews with terrorists suspected of serious crimes. No researcher in the world has ever laid hands on such a haul of data before. Based on this research, they have constructed the world’s first empirically grounded and comprehensive model of interrogation tactics.

The Alisons’ findings are changing the way law enforcement and security agencies approach the delicate and vital task of gathering human intelligence. “I get very little, if any, pushback from practitioners when I present the Alisons’ work,” said Kleinman, who now teaches interrogation tactics to military and police officers. “Even those who don’t have a clue about the scientific method, it just resonates with them.” The Alisons have done more than strengthen the hand of advocates of non-coercive interviewing: they have provided an unprecedentedly authoritative account of what works and what does not, rooted in a profound understanding of human relations. That they have been able to do so is testament to a joint preoccupation with police interviews that stretches back more than 20 years.
[…]
Each interview had to be minutely analysed according to an intricate taxonomy of interrogation behaviours, developed by the Alisons. Every aspect of the interaction between interviewee and interviewer (or interviewers – sometimes there are two) was classified and scored. They included the counter-interrogation tactics employed by the suspects (complete silence? humming?), the manner in which the interviewer asked questions (confrontational? authoritative? passive?), the demeanour of the interviewee (dominating? disengaged?), and the amount and quality of information yielded. Data was gathered on 150 different variables in all.
[…]
Despite its reputation among elite practitioners, “rapport” has been vaguely defined and poorly understood. It is often conflated with simply being nice – Laurence Alison refers to this, derisively, as the “cappuccinos and hugs” theory. In fact, he observes, interviewers can fail because they are too nice, acquiescing too quickly to the demands of a suspect, or neglecting to pursue a line of purposeful questioning at a vital moment.

The best interviewers are versatile: they know when to be sympathetic, when to be direct and forthright. What they rarely do is impose their will on the interviewee, either overtly, through aggression, or covertly, through the use of “tricks” – techniques of unconscious manipulation, which make the interviewer feel smart but are often seen through by interviewees. Above all, rapport, in the sense used by the Alisons, describes an authentic human connection. “You’ve got to mean it,” is one of Laurence’s refrains.

Source: The scientists persuading terrorists to spill their secrets | News | The Guardian

Paltering: lying by using the truth

There are three types of lies: omission, where someone holds out on the facts; commission, where someone states facts that are untrue; and paltering, where someone uses true facts to mislead you. It’s not always easy to detect, but there are a few telltale signs.

A recent study, published in the Journal of Personality and Social Psychology, suggests the practice of paltering is pretty common, especially among business executives. Not only that, but the people who do it don’t seem to think they’re doing anything wrong—despite the fact that most people feel like it’s just as unethical and untrustworthy as intentional lies of commission. It’s not just execs who do it, though. If you’ve ever tried to buy a used car from a slimy salesman, been in a salary negotiation with a tough-as-nails boss, or watched basically any presidential debate, you’ve definitely seen paltering in action.

Source: Lifehacker

Cross-cultural study on recognition of emoticons shows that different cultures see emojis differently

Emoticons are becoming popular as a new channel for expressing feelings in online communication. Although familiarity with emoticons varies across cultures, it is still an open question how exposure shapes emotion recognition from emoticons. To address this issue, we conducted a cross-cultural experimental study in Cameroon and Tanzania (among hunter-gatherers, swidden farmers, pastoralists, and city dwellers), where people rarely encounter emoticons, and in Japan, where emoticons are popular. Emotional emoticons (e.g., ☺) as well as pictures of real faces were presented on a tablet device. The stimuli expressed a sad, neutral, or happy feeling, and participants rated the emotion of each stimulus on a Sad–Happy scale. We found that emotion ratings for the real faces differed slightly but were broadly similar across the three cultural groups, which supports the “dialect” view of emotion recognition. By contrast, while Japanese participants were sensitive to the emotion of emoticons, Cameroonian and Tanzanian participants could hardly read emotion from emoticons at all. These results suggest that exposure to emoticons shapes sensitivity to the emotions they express; that is, ☺ does not necessarily look like a smile to everyone.

Source: Is ☺ Smiling? Cross-Cultural Study on Recognition of Emoticon’s EmotionJournal of Cross-Cultural Psychology – Kohske Takahashi, Takanori Oishi, Masaki Shimada, 2017

The alcohol hangover: a puzzling phenomenon

The alcohol hangover develops when blood alcohol concentration (BAC) returns to zero and is characterized by a feeling of general misery that may last more than 24 h. It comprises a variety of symptoms including drowsiness, concentration problems, dry mouth, dizziness, gastro-intestinal complaints, sweating, nausea, hyper-excitability, and anxiety. The alcohol hangover is an intriguing issue since it is unknown why these symptoms are present after alcohol and its metabolites are eliminated from the body.

Although numerous scientific papers cover the acute effects of alcohol consumption, researchers largely neglected the issue of alcohol hangover. This lack of scientific interest is remarkable, since almost everybody is familiar with the unpleasant hangover effects that may arise the day after an evening of excessive drinking, and with the ways these symptoms may affect performance of planned activities.

Many people favour the (unproven) popular belief that dehydration is the main cause of alcohol hangover symptoms. However, taking a closer look at the present research on biological changes during alcohol hangovers suggests otherwise.
[…]
Interestingly, no significant differences were found in absenteeism between workers reporting hangovers and those who did not. A possible explanation may be that workers with a hangover feel that having a hangover is ‘their own fault’, and the obligation they have to go to work may prevent them from calling in sick. The fact that workers do go to work when having a hangover is of concern, especially since in some jobs making the wrong decisions may have serious consequences.

The article by Stephens and colleagues calls for additional hangover research, using more sophisticated research methods. In this context, researchers should ask themselves the question ‘what is the alcohol hangover?’. It is evident that besides the amount of alcohol, many other factors play a role in determining the presence and severity of hangovers. To complicate matters, co-occurring dehydration and sleep deprivation have an impact on the next-day effect of excessive alcohol consumption as well. Until future research elucidates its pathology, the alcohol hangover remains a puzzling phenomenon.

Source: alcohol hangover–a puzzling phenomenon | Alcohol and Alcoholism | Oxford Academic

It turns out we don’t really know much about hangovers and it’s quite difficult to actually study them.

A Literal Tree Illustration Shows How Languages Are Connected

Did you know that most of the different languages we speak today can actually be placed in only a couple of groups by their origin? This is what illustrator Minna Sundberg has captured in an elegant infographic of a linguistic tree which reveals some fascinating links between different tongues.

Source: This Amazing Tree That Shows How Languages Are Connected Will Change The Way You See Our World

Flat UI Elements Attract Less Attention and Cause Uncertainty

In an eyetracking experiment comparing different clickability clues, weak and flat signifiers required more user effort than strong ones.
[…]
We conducted a quantitative experiment using eyetracking equipment and a desktop computer. We recruited 71 general web users to participate in the experiment. Each participant was presented with one version of each of the 9 sites and given the corresponding task for that page. As soon as participants saw the target UI element that they wanted to click to complete the task, they said “I found it” and stopped.

We tracked the eye movements of the participants as they were performing these tasks. We measured the number of fixations on each page, as well as the task time. (A fixation happens when the gaze lingers on a spot of interest on the page).

Both of these measures reflect user effort: the more fixations and time spent doing the task, the higher the processing effort, and the more difficult the task. In addition, we created heatmap visualizations by aggregating the areas that participants looked at the most on the pages.
[…]
When we compared average number of fixations and average amount of time people spent looking at each page, we found that:

- The average amount of time was significantly higher on the weak-signifier versions than the strong-signifier versions. On average, participants spent 22% more time (i.e., slower task performance) looking at the pages with weak signifiers.
- The average number of fixations was significantly higher on the weak-signifier versions than the strong-signifier versions. On average, people had 25% more fixations on the pages with weak signifiers.

(Both findings were significant by a paired t-test with sites as the random factor, p < 0.05.)

This means that, when looking at a design with weak signifiers, users spent more time looking at the page, and they had to look at more elements on the page. Since this experiment used targeted findability tasks, more time and effort spent looking around the page are not good. These findings don’t mean that users were more “engaged” with the pages. Instead, they suggest that participants struggled to locate the element they wanted, or weren’t confident when they first saw it.
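The test itself is easy to reproduce. The per-site fixation counts below are made-up stand-ins for the unpublished raw data, one strong/weak pair per site:

```python
import math
import statistics

# Hypothetical mean fixation counts for the 9 sites: each position is one site,
# comparing its strong-signifier version with its weak-signifier version.
strong = [12.1, 9.8, 14.3, 11.0, 10.5, 13.2, 9.9, 12.7, 11.4]
weak   = [15.0, 12.9, 17.1, 14.2, 12.8, 16.0, 13.1, 15.5, 14.0]

# Paired t-test with sites as the random factor: t-statistic on the
# per-site differences.
diffs = [w - s for w, s in zip(weak, strong)]
t = statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(len(diffs)))

# Two-tailed critical value of the t-distribution for df = 8 at p = 0.05.
significant = abs(t) > 2.306
```

Pairing by site matters: it removes between-site variation (some pages are just busier than others), so the test asks only whether the weak version of each site drew more fixations than its own strong version.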

Source: Flat UI Elements Attract Less Attention and Cause Uncertainty

The open source community is nasty and that’s just the docs

The 2017 Open Source Survey was hosted on GitHub, which “collected responses from 5,500 randomly sampled respondents sourced from over 3,800 open source repositories” and then added “over 500 responses from a non-random sample of communities that work on other platforms.” The questionnaire was also made available in Traditional Chinese, Japanese, Spanish, and Russian.

Interestingly, those behind the survey broke out “negative incidents” into a separate spreadsheet in that trove. That data reveals that 18 per cent of open source contributors have “personally experienced a negative interaction with another user in open source”. Fully half of participants “have witnessed one between other people”.

Most of the negative behaviour is explained as “rudeness”, which has been witnessed by 45 per cent of participants and experienced by 16 per cent. GitHub’s summary of the survey says really nasty stuff like “sexual advances, stalking, or doxxing are each encountered by less than five per cent of respondents and experienced by less than two per cent (but cumulatively witnessed by 14%, and experienced by three per cent).” Twenty-five per cent of women respondents reported experiencing “language or content that makes them feel unwelcome”, compared to 15 per cent of men.

This stuff has consequences: 21 per cent of those who see negative behaviour bail from projects they were working on.

Source: The open source community is nasty and that’s just the docs

New Vampire Battery Technology Draws Energy Directly From Human Body

According to a research paper published earlier this month, the supercapacitor works together with a device called a “harvester” that uses the body’s heat and movements to extract electrical charges from ions found in human body fluids, such as blood, serum, or urine.

As electrodes, the harvester uses a carbon nanomaterial called graphene, layered with modified human proteins. The electrodes collect energy from the human body, relay it to the harvester, which then stores it for later use.

Because graphene can be drawn into sheets as thin as a few atoms, this allows for the creation of ultra-thin supercapacitors that could be used as alternatives to classic batteries.

For example, the bio-friendly supercapacitors researchers created are thinner than a human hair, and are also flexible, moving and twisting with the human body.
[…]
Researchers argue that implantable medical devices using their supercapacitor could last a lifetime, and remove the need for patients to undergo surgery at regular intervals to replace batteries, one of the main causes of complications with implantable medical devices.

Currently, the supercapacitor looks primed to be deployed with pacemakers, but researchers hope their technology could be used with devices that stimulate other organs, such as the brain, the stomach, or the bladder.

Source: New Battery Technology Draws Energy Directly From Human Body