A little over a year ago, Caltech’s Lihong Wang developed the world’s fastest camera, a device capable of taking 10 trillion pictures per second. It is so fast that it can capture light itself in motion, producing slow-motion footage of traveling light pulses.
But sometimes just being quick is not enough. Indeed, not even the fastest camera can take pictures of things it cannot see. To that end, Wang, Bren Professor of Medical Engineering and Electrical Engineering, has developed a new camera that can take up to 1 trillion pictures per second of transparent objects. A paper about the camera appears in the January 17 issue of the journal Science Advances.
The camera technology, which Wang calls phase-sensitive compressed ultrafast photography (pCUP), can take video not just of transparent objects but also of more ephemeral things like shockwaves and possibly even of the signals that travel through neurons.
Wang explains that his new imaging system combines the high-speed photography system he previously developed with an old technology, phase-contrast microscopy, that was designed to allow better imaging of objects that are mostly transparent such as cells, which are mostly water.
[…]
Wang says the technology, though still early in its development, may ultimately have uses in many fields, including physics, biology, and chemistry.
“As signals travel through neurons, there is a minute dilation of nerve fibers that we hope to see. If we have a network of neurons, maybe we can see their communication in real time,” Wang says. In addition, he says, because temperature is known to change phase contrast, the system “may be able to image how a flame front spreads in a combustion chamber.”
A huge trash-collecting system designed to clean up plastic floating in the Pacific Ocean is finally picking up plastic, its inventor announced Wednesday.
The Netherlands-based nonprofit the Ocean Cleanup says its latest prototype was able to capture and hold debris ranging in size from huge, abandoned fishing gear, known as “ghost nets,” to tiny microplastics as small as 1 millimeter.
“Today, I am very proud to share with you that we are now catching plastics,” Ocean Cleanup founder and CEO Boyan Slat said at a news conference in Rotterdam.
The Ocean Cleanup system is a U-shaped barrier with a net-like skirt that hangs below the surface of the water. It moves with the current and collects faster-moving plastics as they float by. Fish and other animals will be able to swim beneath it.
The new prototype added a parachute anchor to slow the system and increased the size of a cork line on top of the skirt to keep the plastic from washing over it.
The Ocean Cleanup’s System 001/B collects and holds plastic until a ship can collect it.
It’s been deployed in “The Great Pacific Garbage Patch” — a concentration of trash located between Hawaii and California that’s about double the size of Texas, or three times the size of France.
Ocean Cleanup plans to build a fleet of these devices, and predicts it will be able to reduce the size of the patch by half every five years.
Expert human pathologists typically require around 30 minutes to diagnose brain tumors from tissue samples extracted during surgery. A new artificially intelligent system can do it in less than 150 seconds—and it does so more accurately than its human counterparts.
New research published today in Nature Medicine describes a novel diagnostic technique that pairs artificial intelligence with an advanced optical imaging method. The system can perform rapid and accurate diagnoses of brain tumors in practically real time, while the patient is still on the operating table. In tests, the AI made diagnoses that were slightly more accurate than those made by human pathologists, and in a fraction of the time. Excitingly, the new system could be used in settings where expert neuropathologists aren’t available, and it holds promise as a technique that could diagnose other forms of cancer as well.
[…]
New York University neuroscientist Daniel Orringer and his colleagues developed a diagnostic technique that combined a powerful new optical imaging technique, called stimulated Raman histology (SRH), with an artificially intelligent deep neural network. SRH uses scattered laser light to illuminate features not normally seen in standard imaging techniques.
[…]
To create the deep neural network, the scientists trained the system on 2.5 million images taken from 415 patients. By the end of the training, the AI could categorize tissue into any of 13 common forms of brain tumors, such as malignant glioma, lymphoma, metastatic tumors, diffuse astrocytoma, and meningioma.
A clinical trial involving 278 brain tumor and epilepsy patients and three different medical institutions was then set up to test the efficacy of the system. SRH images were evaluated by either human experts or the AI. Looking at the results, the AI correctly identified the tumor 94.6 percent of the time, while the human neuropathologists were accurate 93.9 percent of the time. Interestingly, the errors made by humans were different than the errors made by the AI. This is actually good news, because it suggests the nature of the AI’s mistakes can be accounted for and corrected in the future, resulting in an even more accurate system, according to the authors.
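To make the complementary-errors point concrete, here is a toy calculation; it is an illustration only, not an analysis from the paper, and it assumes, purely for the sake of argument, that the AI’s errors and the pathologists’ errors are statistically independent:

```python
# Toy calculation (not from the study): what complementary errors could buy you,
# under the simplifying assumption that AI and human errors are independent.

ai_accuracy = 0.946      # AI accuracy reported in the trial
human_accuracy = 0.939   # human neuropathologist accuracy reported in the trial

# Chance that both are wrong on the same specimen, if errors are independent:
both_wrong = (1 - ai_accuracy) * (1 - human_accuracy)
print(f"P(both wrong on the same case) ~ {both_wrong:.4f}")   # ~0.0033, i.e. ~0.3%

# Fraction of cases where the two would disagree (exactly one of them is right),
# i.e. cases a disagreement-triggered review would flag:
disagree = ai_accuracy * (1 - human_accuracy) + human_accuracy * (1 - ai_accuracy)
print(f"P(AI and human disagree) ~ {disagree:.3f}")           # ~0.108, i.e. ~11%
```

Under that (optimistic) independence assumption, a workflow that flags disagreements for a second look would catch nearly all mistakes while sending only around one case in ten for further review.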
“SRH will revolutionize the field of neuropathology by improving decision-making during surgery and providing expert-level assessment in the hospitals where trained neuropathologists are not available,” said Matija Snuderl, a co-author of the study and an associate professor at NYU Grossman School of Medicine, in the press release.
The most direct and strongest evidence for an accelerating universe with dark energy comes from distance measurements using type Ia supernovae (SN Ia) in galaxies at high redshift. This result rests on the assumption that the corrected luminosity of SN Ia, obtained through empirical standardization, does not evolve with redshift.
New observations and analysis made by a team of astronomers at Yonsei University (Seoul, South Korea), together with their collaborators at Lyon University and KASI, show, however, that this key assumption is most likely in error. The team has performed very high-quality (signal-to-noise ratio ~175) spectroscopic observations to cover most of the reported nearby early-type host galaxies of SN Ia, from which they obtained the most direct and reliable measurements of population ages for these host galaxies. They find a significant correlation between SN luminosity and stellar population age at a 99.5 percent confidence level. As such, this is the most direct and stringent test ever made for the luminosity evolution of SN Ia. Since SN progenitors in host galaxies are getting younger with redshift (look-back time), this result inevitably indicates a serious systematic bias with redshift in SN cosmology. Taken at face value, the luminosity evolution of SN is significant enough to question the very existence of dark energy. When the luminosity evolution of SN is properly taken into account, the team found that the evidence for the existence of dark energy simply goes away (see Figure 1).
Commenting on the result, Prof. Young-Wook Lee (Yonsei Univ., Seoul), who led the project said, “Quoting Carl Sagan, extraordinary claims require extraordinary evidence, but I am not sure we have such extraordinary evidence for dark energy. Our result illustrates that dark energy from SN cosmology, which led to the 2011 Nobel Prize in Physics, might be an artifact of a fragile and false assumption.”
Other cosmological probes, such as the cosmic microwave background (CMB) and baryonic acoustic oscillations (BAO), are also known to provide some indirect and “circumstantial” evidence for dark energy, but it was recently suggested that the CMB from the Planck mission no longer supports the concordance cosmological model and may require new physics (Di Valentino, Melchiorri, & Silk 2019). Some investigators have also shown that BAO and other low-redshift cosmological probes can be consistent with a non-accelerating universe without dark energy (see, for example, Tutusaus et al. 2017). In this respect, the present result showing the luminosity evolution mimicking dark energy in SN cosmology is crucial and very timely.
This result is reminiscent of the famous Tinsley-Sandage debate in the 1970s on luminosity evolution in observational cosmology, which led to the termination of the Sandage project originally designed to determine the fate of the universe.
This work, based on the team’s nine-year effort at the Las Campanas Observatory 2.5-m telescope and the MMT 6.5-m telescope, was presented at the 235th meeting of the American Astronomical Society held in Honolulu on January 5th (2:50 PM, cosmology session, presentation No. 153.05). Their paper has also been accepted for publication in the Astrophysical Journal and will appear in the January 2020 issue.
If you know nothing else about particle accelerators, you probably know that they’re big — sometimes miles long. But a new approach from Stanford researchers has led to an accelerator shorter from end to end than a human hair is wide.
The general idea behind particle accelerators is that they’re a long line of radiation emitters that smack the target particle with radiation at the exact right time to propel it forward a little faster than before. The problem is that depending on the radiation you use and the speed and resultant energy you want to produce, these things can get real big, real fast.
That also limits their applications; you can’t exactly put a particle accelerator in your lab or clinic if they’re half a kilometer long and take megawatts to run. Something smaller could be useful, even if it was nowhere near those power levels — and that’s what these Stanford scientists set out to make.
“We want to miniaturize accelerator technology in a way that makes it a more accessible research tool,” explained project lead Jelena Vuckovic in a Stanford news release.
But this wasn’t designed like a traditional particle accelerator such as the Large Hadron Collider or those at SLAC National Accelerator Laboratory, a collaborator on the project. Instead of engineering it from the bottom up, they fed their requirements to an “inverse design algorithm” that produced the kind of energy pattern they needed from the infrared radiation emitters they wanted to use.
That’s partly because infrared radiation has a much shorter wavelength than something like microwaves, meaning the mechanisms themselves can be made much smaller — perhaps too small to adequately design the ordinary way.
The algorithm’s solution to the team’s requirements led to an unusual structure that looks more like a Rorschach test than a particle accelerator. But these blobs and channels are precisely contoured to guide infrared laser pulses in such a way that they push electrons along the center up to a significant proportion of the speed of light.
The resulting “accelerator on a chip” is only a few dozen microns across, making it comfortably smaller than a human hair and more than possible to stack a few on the head of a pin. A couple thousand of them, really.
And it will take a couple thousand to get the electrons up to the energy levels needed to be useful — but don’t worry, that’s all part of the plan. The chips are fully integrated but can easily be placed in series to create longer assemblies that reach higher energies.
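To get a feel for the scale, here is a back-of-the-envelope sketch; the per-stage energy gain, target energy, and stage length below are placeholder assumptions for illustration, not numbers from the Stanford paper:

```python
# Back-of-the-envelope sketch with placeholder numbers (not figures from the paper):
# roughly how many chip-scale stages are needed, and how long the chain gets.

energy_gain_per_stage_keV = 1.0   # assumed energy kick per chip, order-of-magnitude guess
target_energy_MeV = 1.0           # assumed target energy for a "useful" compact beam
stage_length_um = 30.0            # assumed stage length, a few dozen microns as described

stages = target_energy_MeV * 1000.0 / energy_gain_per_stage_keV
chain_length_mm = stages * stage_length_um / 1000.0

print(f"~{stages:.0f} stages per {target_energy_MeV:.0f} MeV")   # ~1000 stages
print(f"chain length ~ {chain_length_mm:.0f} mm")                # ~30 mm end to end
```

On those assumed numbers, even a few thousand stages in series still fit in a package measured in centimetres, which is the whole point of the exercise.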
These won’t be rivaling macro-size accelerators like SLAC’s or the Large Hadron Collider, but they could be much more useful for research and clinical applications where planet-destroying power levels aren’t required. For instance, a chip-sized electron accelerator might be able to direct radiation into a tumor surgically rather than through the skin.
In the 19th and early 20th centuries, millions of weather observations were carefully made in the logbooks of ships sailing through largely uncharted waters. Written in pen and ink, the logs recorded barometric pressure, air temperature, ice conditions and other variables. Today, volunteers from a project called Old Weather are transcribing these observations, which are fed into a huge dataset at the National Oceanic and Atmospheric Administration. This “weather time machine,” as NOAA puts it, can estimate what the weather was for every day back to 1836, improving our understanding of extreme weather events and the impacts of climate change.
The fossil fuels driving climate change make people sick, and so do impacts like extreme heat, wildfires, and more extreme storms, according to research published on Wednesday. In short, the climate crisis is a public health crisis.
A new report from the premier medical journal The Lancet tallies the medical toll of climate change and finds that last year saw record-setting numbers of people exposed to heat waves and a near-record spread of dengue fever globally. The scientists also crunched numbers around wildfires for the first time, finding that 77 percent of countries are facing more wildfire-induced suffering than they were at the start of the decade. But while some of the report’s findings are rage-inducing, it also shows that improving access to healthcare may be among the most beneficial ways we can adapt to climate change.
[…]
Heat waves are among the more obvious climate change-linked weather disasters, and the report outlines just how much they’re already hurting the world. Last year saw intense heat waves strike around the world, from the UK to Pakistan to Japan, amid the fourth-warmest year on record.
[…]
The report also found that 2018 marked the second-worst year since accurate record keeping began in 1990 for the spread of dengue fever-carrying mosquitoes. The two types of mosquitoes that transmit dengue have seen their range expand as temperatures have warmed.
[…]
The wildfire findings are new to this year’s report. Scientists found that more than three-quarters of countries around the world are seeing an increased prevalence of wildfires and the sickness-inducing smoke that accompanies them.
[…]
There are also the health risks that come from burning fossil fuels themselves. Air pollution ends up in people’s lungs, where it can cause asthma and other respiratory issues, but it has also shown up in less obvious places, like people’s brains and women’s placentas.
[…]
“We can do better than to dwell on the problem,” Gina McCarthy, the former head of the Environmental Protection Agency and current Harvard public health professor, said on the press call.
The report found, for example, that despite an uptick in heat waves and heavy downpours that can spur diarrheal diseases, outbreaks have become less common. Ditto for protein-related malnutrition, despite the toll intense heat is taking on the nutritional value of staple crops, and the toll ocean heat waves are taking on coral reefs and the fisheries that rely on them. At least some of that is attributable to improved access to healthcare, socioeconomic opportunities, and sanitation in some regions.
We often think about sea walls or other hard infrastructure when it comes to climate adaptation. But rural health clinics and sewer systems fall into that same category, as do programs like affordable crop insurance. The report suggests that improving access to financing for health-focused climate projects could pay huge dividends as a result, ensuring that people are insulated from the impacts of climate change and helping lift them out of poverty in the process. Of course, it also calls for cutting carbon pollution ASAP, because even the best-equipped hospital in the world isn’t going to be enough to protect people from the full impacts of climate change.
Right now, in the Netherlands there is talk about reducing the speed limit from 130kph to 100kph in order to comply with emissions goals set by the EU (and supported by NL) years ago. Because NL didn’t put the necessary legislation into effect years ago, this is now coming to bite NL in the arse and they are playing panic football.
The Dutch institute for the environment shows pretty clearly where emissions are coming from:
As you can see it makes perfect sense to do something about traffic, as it causes 6.1% of emissions. Oh wait, there’s the farming sector: that causes 46% of emissions! Why not tackle that? Well, they tried to at first, but then the farmers did an occupy of the Hague with loads of tractors (twice) and all the politicians chickened out. Because nothing determines policy like a bunch of tractors causing traffic jams. Screw the will of the people anyway.
Note: emissions are expressed relative to their values at 100 km/h, for which the value ‘1’ is assigned. Source: EMISIA – ETC/ACM.
So reducing speed from 120 to 100 kph should result (for diesels) in an approx 15% decrease in particulate matter and a 40% decrease in nitrogen oxides, but an increase in the amount of total hydrocarbons and carbon monoxide.
For gasoline-powered cars, it’s a 20% decrease in total hydrocarbons, which means that in NL we can knock the 6.1% slice of the pie generated by cars down to around 4%. Yay. We don’t win much.
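For what it’s worth, here is the back-of-the-envelope arithmetic behind that ballpark (a rough sketch that generously assumes the reduction applies to all car traffic, not just the motorway kilometres actually affected by the new limit):

```python
# Rough arithmetic behind the "6.1% down to around 4%" claim.
# Generous simplification: the speed-limit cut is assumed to apply to *all* car traffic.

traffic_share = 6.1  # percent of emissions attributed to traffic (Dutch environment institute)

# Reductions read off the EMISIA chart above for a 120 -> 100 km/h change:
nox_reduction_diesel = 0.40   # ~40% less nitrogen oxides (diesel)
hc_reduction_petrol  = 0.20   # ~20% less total hydrocarbons (petrol)

print(f"NOx-dominated view: {traffic_share * (1 - nox_reduction_diesel):.1f}%")  # ~3.7%
print(f"HC-dominated view:  {traffic_share * (1 - hc_reduction_petrol):.1f}%")   # ~4.9%
```

Either way, the slice attributable to cars only shrinks by a percentage point or two.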
Now about traffic flow, because that’s what I’m here for. The Dutch claim that lowering the speed limit will decrease the amount of time spent in traffic jams. Here’s an example of two experts saying so in BNN Vara’s article “Experts: Door verlaging maximumsnelheid ben je juist sneller thuis” (“Experts: by lowering the maximum speed you actually get home sooner”).
However, if you look at their conclusions, they come straight out of one of just two studies that seemingly everyone relies on:
It is confirmed that the lower the speed limit, the higher the occupancy to achieve a given flow. This result has been observed even for relatively high flows and low speed limits. For instance, a stable flow of 1942 veh/h/lane has been measured with the 40 km/h speed limit in force. The corresponding occupancy was 33%, doubling the typical occupancy for this flow in the absence of speed limits. This means that VSL strategies aiming to restrict the mainline flow on a freeway by using low speed limits will need to be applied carefully, avoiding conditions as the ones presented here, where speed limits have a reduced ability to limit flows. On the other hand, VSL strategies trying to get the most from the increased vehicle storage capacity of freeways under low speed limits might be rather promising. Additionally, results show that lower speed limits increase the speed differences across lanes for moderate demands. This, in turn, also increases the lane changing rate. This means that VSL strategies aiming to homogenize traffic and reduce lane changing activity might not be successful when adopting such low speed limits. In contrast, lower speed limits widen the range of flows under uniform lane flow distributions, so that, even for moderate to low demands, the under-utilization of any lane is avoided.
There are a few problems with this study: First, it’s talking about speed limits of 40, 60 and 80 kph. Nothing around the 100 – 130kph mark. Secondly, the data in the graphs actually shows a lower occupancy with a higher speed limit – which is not their conclusion!
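For readers unfamiliar with occupancy, the figures quoted above can be unpacked with the standard traffic-flow identities (flow = density × speed; occupancy ≈ density × effective vehicle length). The effective length used below is my assumption, not a figure from the paper:

```python
# Quick sanity check of the quoted 40 km/h figures using standard traffic-flow identities.
# The effective length (vehicle + detector) is an assumed value, not taken from the study.

occupancy = 0.33          # 33%, as quoted for the 40 km/h case
speed_kmh = 40.0
effective_length_m = 7.0  # assumed average vehicle + detector length

density_veh_per_km = occupancy / (effective_length_m / 1000.0)   # ~47 veh/km
flow_veh_per_h = density_veh_per_km * speed_kmh                  # ~1890 veh/h/lane

print(f"density ~ {density_veh_per_km:.0f} veh/km, flow ~ {flow_veh_per_h:.0f} veh/h/lane")
# Lands within a few percent of the 1942 veh/h/lane reported: halving the speed roughly
# doubles the occupancy needed to carry the same flow, which is the study's point.
```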
This paper aims to evaluate optimal speed limits in traffic networks in a way that economized societal costs are incurred. In this study, experimental and field data as well as data from simulations are used to determine how speed is related to the emission of pollutants, fuel consumption, travel time, and the number of accidents. This paper also proposes a simple model to calculate the societal costs of travel and relate them to speed. As a case study, using emission test results on cars manufactured domestically and by simulating the suburban traffic flow by Aimsun software, the total societal costs of the Shiraz-Marvdasht motorway, which is one of the most traversed routes in Iran, have been estimated. The results of the study show that from a societal perspective, the optimal speed would be 73 km/h, and from a road user perspective, it would be 82 km/h (in 2011, the average speed of the passing vehicles on that motorway was 82 km/h). The experiments in this paper were run on three different vehicles with different types of fuel. In a comparative study, the results show that the calculated speed limit is lower than the optimal speed limits in Sweden, Norway, and Australia.
(Emphasis mine)
It’s a compelling study with great results, which also include accidents.
In a multi-lane motorway divided by a median barrier in Sweden, the optimal speed is 110 km/h. The speed limit is 110 km/h and the current average speed is 109 km/h. In Norway, the optimal speed from a societal perspective is 100 km/h and the speed limit is 90 km/h. The current average speed is 95 km/h [2]. In Australia, the optimum speeds on rural freeways (dual carriageway roads with grade-separated intersections) would be 110 km/h [3]. Table 3 compares the results in Elvik [2] and Cameron [3] with those of the present study.
Table 3. Optimal speed in Norway, Sweden, Australia, and Iran. Source for columns 2 and 3 (Norway, Sweden): Elvik [2]. Source for column 4 (Australia): Cameron [3].

| | Norway | Sweden | Australia | Iran |
| --- | --- | --- | --- | --- |
| Optimal speed limits (km/h) according to societal perspective | 100 | 110 | 110 | 73 |
| Optimal speed limits (km/h) according to road user perspective | 110 | 120 | – | 82 |
| Current speed limits (km/h) | 90 | 110 | 110 | 110 |
| Current mean speed of travel (km/h) | 95 | 109 | – | 82 |
There is a significant difference between the results in Iran and those in Sweden, Norway, and Australia; this difference results from the difference in the costs between Iran and these three countries. Also, the functions of fuel consumption and pollutant emission are different.
If you look at the first graph, you can be forgiven for thinking that the optimum speed is 95 kph, as Ruud Horman (from the BNN Vara piece) seems to think. However, as the author of this study is very careful to point out, it’s a very constrained study and there are per country differences – these results are only any good for a very specific highway in a very specific country.
They come out with a whole load of pretty pictures based on a graph plotting actual speed (y-axis) against traffic intensity (x-axis).
There are quite a lot of graphs like this. So, if the speed limit is 120 kph (red dots) and the intensity is 6000 (heavy), then the actual speed on the A16 is likely to be around 100 kph. However, if the speed limit is 130 kph with the same intensity – oh wait, it doesn’t get to the same intensity. You seem to have higher intensities more often with a speed limit of 120 kph. But if we take an intensity of around 3000 (which I guess is moderate), then you see that quite often the speed is 125 with a speed limit of 130 and around 100 with a speed limit of 120. However, at that intensity there are slightly more data points at around 20 – 50 kph if your speed limit is 130 kph than if it’s 120 kph.
Oddly enough, they never added data from 100 kph roads, of which there were (and are) plenty. They also never take variable speed limits into account. The 120 kph limit is based on data taken in 2012 and the 130 kph limit on data from 2018.
Their conclusion – raising the speed limit wins you time when the roads are quiet and puts you in a traffic jam when the roads are busy – is spurious and lacks the data to support it.
The conclusion is pretty tough reading, but the graphs are quite clear.
What they are basically saying is: we researched it pretty well and we had a look at the distribution of vehicle types. Basically, if you set a higher speed limit, people will drive faster. There is variability (the bars you see up and down the lines), so sometimes they will drive faster and sometimes slower, but on average they go faster with a higher speed limit.
Now one more argument is that the average commute is only about an hour per day, so going slower only costs you a few minutes. The difference between 100 and 130 kph is 30%. Over 100 km, that works out to roughly 14 extra minutes, assuming you can actually travel that distance at that speed (what they call free-flow conditions). Sure, you’ll never quite get that, but over large distances you can come close. Anyway, say we knock it down to a 10 minute difference. The argument becomes that this is barely the time it takes to drink a cup of tea. But it’s 10 minutes difference EVERY WORKING DAY! Excluding weekends and holidays, you can expect to make that commute around 250 times per year, making your net loss 2500 minutes (at least), which is over 41 hours, or a full working week you now have to spend extra in the car!
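For anyone who wants to check the arithmetic, here it is in sketch form (free-flow assumption, same round numbers as above):

```python
# The arithmetic behind the commuting-time argument (free-flow assumption).

distance_km = 100.0                           # a long-ish daily motorway commute
minutes_at_130 = distance_km / 130.0 * 60     # ~46 min
minutes_at_100 = distance_km / 100.0 * 60     # 60 min
per_day = minutes_at_100 - minutes_at_130     # ~14 min lost per day under free flow
print(f"Free-flow loss: {per_day:.0f} min/day")

# Be conservative and assume congestion etc. eats part of that difference:
assumed_loss_per_day = 10       # minutes, the round figure used in the text
working_days = 250              # commutes per year, excluding weekends and holidays
per_year_hours = assumed_loss_per_day * working_days / 60
print(f"Annual loss: ~{per_year_hours:.0f} hours")   # ~42 hours, a full working week
```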
In short: reducing the speed limit seems like poor populist policy to appease the farmers and look like Something is Being Done™, and it won’t actually achieve anything real except pissing off commuters.
The first human vaccine against the often-fatal viral disease Ebola is now an official reality. On Monday, the European Union approved a vaccine developed by the pharmaceutical company Merck, called Ervebo.
The stage for Ervebo’s approval was set this October, when a committee assembled by the European Medicines Agency (EMA) recommended a conditional marketing authorization for the vaccine by the EU. Conditional marketing authorizations are given to new drugs or therapies that address an “unmet medical need” for patients. These drugs are approved on a quicker schedule than the typical new drug and require less clinical trial data to be collected and analyzed for approval.
In Ervebo’s case, though, the data so far seems to be overwhelmingly positive. In April, the World Health Organization revealed the preliminary results of its “ring vaccination” trials with Ervebo during the current Ebola outbreak in the Democratic Republic of Congo. Out of the nearly 100,000 people vaccinated up until that time, less than 3 percent went on to develop Ebola. These results, coupled with earlier trials dating back to the historic 2014-2015 outbreak of Ebola that killed over 10,000 people, secured Ervebo’s approval by the committee.
“Finding a vaccine as soon as possible against this terrible virus has been a priority for the international community ever since Ebola hit West Africa five years ago,” Vytenis Andriukaitis, commissioner in charge of Health and Food Safety at the EU’s European Commission, said in a statement announcing the approval. “Today’s decision is therefore a major step forward in saving lives in Africa and beyond.”
Although the marketing rights for Ervebo are held by Merck, it was originally developed by researchers from the Public Health Agency of Canada, which still maintains non-commercial rights.
The vaccine’s approval, significant as it is, won’t tangibly change things on the ground anytime soon. In October, the WHO said that licensed doses of Ervebo will not be available to the world until the middle of 2020. In the meantime, people in vulnerable areas will still have access to the vaccine through the current experimental program. Although Merck has also submitted Ervebo for approval by the Food and Drug Administration in the U.S., the agency’s final decision isn’t expected until next year as well.
New research from a duo of environmental engineers at Drexel University is suggesting the decades-old claim that house plants improve indoor air quality is entirely wrong. Evaluating 30 years of studies, the research concludes it would take hundreds of plants in a small space to even come close to the air purifying effects of simply opening a couple of windows.
Back in 1989 an incredibly influential NASA study discovered a number of common indoor plants could effectively remove volatile organic compounds (VOCs) from the air. The experiment, ostensibly conducted to investigate whether plants could assist in purifying the air on space stations, gave birth to the idea of plants in home and office environments helping clear the air.
Since then, a number of experimental studies have seemed to verify NASA’s findings that plants do remove VOCs from indoor environments. Michael Waring, a professor of architectural and environmental engineering at Drexel University, and one of his PhD students, Bryan Cummings, were skeptical of this consensus. The problem they saw was that the vast majority of these experiments were not conducted in real-world environments.
“Typical for these studies a potted plant was placed in a sealed chamber (often with a volume of a cubic meter or smaller), into which a single VOC was injected, and its decay was tracked over the course of many hours or days,” the duo writes in their study.
To better understand exactly how well potted plants can remove VOCs from indoor environments, the researchers reviewed the data from a dozen published experiments. They evaluated the efficacy of a plant’s ability to remove VOCs from the air using a metric called CADR, or clean air delivery rate.
“The CADR is the standard metric used for scientific study of the impacts of air purifiers on indoor environments,” says Waring, “but many of the researchers conducting these studies were not looking at them from an environmental engineering perspective and did not understand how building air exchange rates interplay with the plants to affect indoor air quality.”
Once the researchers had calculated the rate at which plants dissipated VOCs in each study, they quickly discovered that the effect of plants on air quality in real-world scenarios was essentially negligible. Air handling systems in big buildings were found to be significantly more effective at dissipating VOCs in indoor environments. In fact, it would take up to 1,000 plants per square meter (10.7 sq ft) of floor space to match the VOC removal already provided by the standard outdoor-to-indoor air exchange in most large buildings.
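To see why the comparison comes out so lopsided, here is a rough illustration in the spirit of the CADR framing; the per-plant CADR, room size, and ventilation rate below are assumptions made for the sake of the sketch, not values taken from the Drexel paper:

```python
# Toy comparison: how many potted plants would it take to match ordinary building
# ventilation at clearing VOCs from a small office? All numbers are assumptions.

room_floor_m2 = 10.0                # small office
ceiling_height_m = 2.5
room_volume_m3 = room_floor_m2 * ceiling_height_m

air_changes_per_hour = 1.0          # assumed modest outdoor-air exchange rate
ventilation_cadr = air_changes_per_hour * room_volume_m3   # m^3 of "clean air" per hour

cadr_per_plant = 0.02               # assumed m^3/h per potted plant (order of magnitude)

plants_needed = ventilation_cadr / cadr_per_plant
print(f"~{plants_needed:.0f} plants to match {air_changes_per_hour:.0f} air change per hour")
print(f"that's ~{plants_needed / room_floor_m2:.0f} plants per square metre of floor")
```

With a higher air exchange rate or a less generous per-plant CADR, the figure climbs toward the hundreds or even thousands of plants per square metre cited above.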
Last month was the hottest October on record globally, according to data released Friday by the Copernicus Climate Change Service, an organization that tracks global temperatures. The month, which was reportedly 1.24 degrees Fahrenheit warmer than the average October from 1981-2010, narrowly beat October 2015 for the top spot.
According to Copernicus, most of Europe, large parts of the Arctic and the eastern U.S. and Canada were most affected. The Middle East, much of Africa, southern Brazil, Australia, eastern Antarctica and Russia also experienced above-average temperatures.
Parts of tropical Africa and Antarctica and the western U.S. and Canada felt much colder than usual, however.
It’s only Tuesday, but more than 11,000 scientists around the world have come together to declare a climate emergency. Their paper, published Tuesday in the journal Bioscience, lays out the science behind this emergency and solutions for how we can deal with it.
Scientists aren’t the first people to make this declaration. A tribal nation in the Canadian Yukon, the U.K., and parts of Australia have all come to the same grim conclusion. In the U.S., members of Congress have pushed the federal government to do the same, but y’know, we got Donald Trump. Ain’t shit happening with this fool in office. Anyway, this proclamation from scientists is significant because they’re not doing it out of a political agenda or as an emotional outcry. They’re declaring a climate emergency because the science supports it.
The signatories, who come from 153 countries, note that societies have taken little action to prevent climate disaster. It’s been business as usual, despite scientific consensus that burning fossil fuels and driving cars is gravely harming the environment—you know, the environment we all have to live in for the foreseeable future. Greenhouse gas emissions continue to enter the atmosphere, and if we don’t stop quickly, we’re doomed.
The internet has made it easier than ever to reach a lot of readers quickly. It has birthed new venues for publication and expanded old ones. At the same time, a sense of urgency of current affairs, from politics to science, technology to the arts, has driven new interest in bringing scholarship to the public directly.
Scholars still have a lot of anxiety about this practice. Many of those anxieties relate to university careers and workplaces: evaluation, tenure, reactions from their peers, hallway jealousy, and so on. These are real worries, and as a scholar and university professor myself, I empathize with many of them.
But not with this one: The worry that they’ll have to “dumb down” their work to reach broader audiences. This is one of the most common concerns I hear from academics. “Do we want to dumb down our work to reach these readers?” I’ve heard them ask among themselves. It’s a wrongheaded anxiety.
Like all experts, academics are used to speaking to a specialized audience. That’s true no matter their discipline, from sociology to geotechnical engineering to classics. When you speak to a niche audience among peers, a lot of understanding comes for free. You can use technical language, make presumptions about prior knowledge, and assume common goals or contexts. When speaking to a general audience, you can’t take those circumstances as a given.
But why would doing otherwise mean “dumbing down” the message? It’s an odd idea when you think about it. The whole reason to reach people who don’t know what you know, as an expert, is so that they might know about it. Giving them reason to care, process, and understand is precisely the point.
The phrase dumbing down got its start in entertainment. During the golden age of Hollywood, in the 1930s, dumbing down became a screenwriter’s shorthand for making an idea simple enough that people with limited education or experience could understand it. Over time, it came to refer to intellectual oversimplification of all kinds, particularly in the interest of making something coarsely popular. In education, it named a worry about curricula and policy: that students were being asked to do less, held to a lower standard than necessary—than they were capable of—and than is necessary to produce an informed citizenry.
In the process, dumbing down has become entrenched and spread as a lamentation, often well beyond any justification.
[…]
But to assume that even to ponder sharing the results of scholarship amounts to dumbing down, by default, is a new low in this term for new lows. Posturing as if it’s a problem with the audience, rather than with the expert who refuses to address that audience, is perverse.
One thing you learn when writing for an audience outside your expertise is that, contrary to the assumption that people might prefer the easiest answers, they are all thoughtful and curious about topics of every kind. After all, people have areas in their own lives in which they are the experts. Everyone is capable of deep understanding.
Up to a point, though: People are also busy, and they need you to help them understand why they should care. Doing that work—showing someone why a topic you know a lot about is interesting and important—is not “dumb”; it’s smart. Especially if, in the next breath, you’re also intoning about how important that knowledge is, as academics sometimes do. If information is vital to human flourishing but withheld by experts, then those experts are either overestimating its importance or hoarding it.
The U.S. is slowly being gripped by a flooding crisis as seas rise and waterways overflow with ever more alarming frequency. An idea at the forefront for how to help Americans cope is so-called managed retreat, a process of moving away from affected areas and letting former neighborhoods return to nature. It’s an idea increasingly en vogue as it becomes clearer that barriers won’t be enough to keep floodwaters at bay.
But new research shows a startling finding: Americans are already retreating. More than 40,000 households have been bought out by the federal government over the past three decades. The research, published in Science Advances on Wednesday, also reveals disparities in which communities opt in to buyout programs and, even more granularly, which households take the offers and relocate. The cutting-edge research answers questions that have been out there for a while and raises a whole host of new ones that will only become more pressing in the coming decades as Earth continues to warm.
“People are using buyouts and doing managed retreat,” AR Siders, a climate governance researcher at Harvard and study author, said during a press call. “No matter how difficult managed retreat sounds, we know that there are a thousand communities in the United States, all over the country, who have made it work. I want to hear their stories, I want to know how they did it.”
“The anti-climate effort has been largely underwritten by conservative billionaires,” says the Guardian, “often working through secretive funding networks. They have displaced corporations as the prime supporters of 91 think tanks, advocacy groups and industry associations which have worked to block action on climate change.”
Rapid progress in research involving miniature human brains grown in a dish has led to a host of ethical concerns, particularly when these human brain cells are transplanted into nonhuman animals. A new paper evaluates the potential risks of creating “humanized” animals, while providing a pathway for scientists to move forward in this important area.
Neuroscientist Isaac Chen from the Perelman School of Medicine at the University of Pennsylvania, along with his colleagues, has written a timely Perspective paper published today in the science journal Cell Stem Cell. The paper was prompted by recent breakthroughs involving the transplantation of human brain organoids into rodents—a practice that’s led to concerns about the “humanization” of lab animals.
In their paper, the authors evaluate the current limits of this biotechnology and the potential risks involved, while also looking ahead to the future. Chen and his colleagues don’t believe anything needs to be done right now to limit these sorts of experiments, but that could change once scientists start to enhance certain types of brain functions in chimeric animals, that is, animals endowed with human attributes, in this case human brain cells.
In the future, the authors said, scientists will need to be wary of inducing robust levels of consciousness in chimeric animals and even stand-alone brain organoids, similar to the sci-fi image of a conscious brain in a vat.
Cross-section of a brain organoid.
Image: Trujillo et al., 2019, Cell Stem Cell
Human brain organoids are proving to be remarkably useful. Made from human stem cells, brain organoids are tiny clumps of neural cells which scientists can use in their research.
To be clear, pea-sized organoids are far too basic to induce traits like consciousness, feelings, or any semblance of awareness, but because they consist of living human brain cells, scientists can use them to study brain development, cognitive disorders, and the way certain diseases affect the brain, among other things. And in fact, during the opening stages of the Zika outbreak, brain organoids were used to study how the virus infiltrates brain cells.
The use of brain organoids in this way is largely uncontroversial, but recent research involving the transplantation of human brain cells into rodent brains is leading to some serious ethical concerns, specifically the claim that scientists are creating part-human animals.
Anders Sandberg, a researcher at the University of Oxford’s Future of Humanity Institute, said scientists are not yet able to generate full-sized brains due to the lack of blood vessels, supporting structure, and other elements required to build a fully functioning brain. But that’s where lab animals can come in handy.
“Making organoids of human brain cells is obviously interesting both for regenerating brain damage and for research,” explained Sandberg, who’s not affiliated with the new paper. “They do gain some structure, even though it is not like a full brain or even part of a brain. One way of getting around the problem of the lack of blood vessels in a petri dish is to implant them in an animal,” he said. “But it’s at this point when people start to get a bit nervous.”
The concern, of course, is that the human neural cells, when transplanted into a nonhuman animal, say a mouse or rat, will somehow endow the creature with human-like traits, such as greater intelligence, more complex emotions, and so on.
The next time you’re hunting for a parking spot, mathematics could help you identify the most efficient strategy, according to a recent paper in the Journal of Statistical Mechanics. It’s basically an optimization problem: weighing different variables and crunching the numbers to find the optimal combination of those factors. In the case of where to put your car, the goal is to strike the optimal balance of parking close to the target—a building entrance, for example—without having to waste too much time circling the lot hunting for the closest space.
Paul Krapivsky of Boston University and Sidney Redner of the Santa Fe Institute decided to build their analysis around an idealized parking lot with a single row (a semi-infinite line), and they focused on three basic parking strategies. A driver who employs a “meek” strategy will take the first available spot, preferring to park as quickly as possible even if there might be open spots closer to the entrance. A driver employing an “optimistic” strategy will go right to the entrance and then backtrack to find the closest possible spot.
Finally, drivers implementing a “prudent” strategy will split the difference. They might not grab the first available spot, figuring there will be at least one more open spot a bit closer to the entrance. If there isn’t, they will backtrack to the space a meek driver would have claimed immediately.
[…]
Based on their model, the scientists concluded that the meek strategy is the least effective of the three, calling it “risibly inefficient” because “many good parking spots are unfilled and most cars are parked far from the target.”
Determining whether the optimistic or prudent strategy was preferable proved trickier, so they introduced a cost variable. They defined it as “the distance from the parking spot to the target plus time wasted looking for a parking spot.” Their model also assumes the speed of the car in the lot is the same as average walking speed.
“On average, the prudent strategy is less costly,” the authors concluded. “Thus, even though the prudent strategy does not allow the driver to take advantage of the presence of many prime parking spots close to the target, the backtracking that must always occur in the optimistic strategy outweighs the benefit.” Plenty of people might indeed decide that walking a bit farther is an acceptable tradeoff to avoid endlessly circling a crowded lot hunting for an elusive closer space. Or maybe they just want to rack up a few extra steps on their FitBit.
The authors acknowledge some caveats to their findings. This is a “minimalist physics-based” model, unlike more complicated models used in transportation studies that incorporate factors like parking costs, time limits, and so forth. And most parking lots are not one-dimensional (a single row). The model used by the authors also assumes that cars enter the lot from the right at a fixed rate, and every car will have time to find a spot before the next car enters—a highly unrealistic scenario where there is no competition between cars for a given space. (Oh, if only…)
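To make the three rules concrete, here is a minimal toy simulation. It is not the authors’ analytical model: the arrival and departure probabilities are arbitrary, the reading of the “prudent” rule is one plausible interpretation of the description above, and the numbers it prints are illustrative only.

```python
import random

def choose_spot(occupied, strategy):
    """Pick a spot for one arriving car on a single row. Spot 0 is next to the target;
    cars drive in from the far (high-index) end. Returns (spot, cost), where
    cost = walking distance to the target + distance backtracked (the model takes
    driving speed in the lot to equal walking speed)."""
    edge = (max(occupied) + 1) if occupied else 0          # first open spot the driver sees
    if strategy == "meek":
        return edge, edge                                  # park at once; no backtracking
    gaps = [i for i in range(edge) if i not in occupied]   # spots freed by departed cars
    if strategy == "optimistic":
        spot = min(gaps) if gaps else edge                 # drive to target, back up to nearest gap
        return spot, 2 * spot                              # walk + backtrack
    if strategy == "prudent":
        if gaps:                                           # skip the edge spot, take the next gap seen
            return max(gaps), max(gaps)
        return edge, 2 * edge                              # no closer gap: backtrack to the meek spot

def average_cost(strategy, steps=20000, p_arrive=0.4, p_leave=0.02, seed=1):
    rng = random.Random(seed)
    occupied, costs = set(), []
    for _ in range(steps):
        for spot in list(occupied):                        # parked cars depart at random
            if rng.random() < p_leave:
                occupied.discard(spot)
        if rng.random() < p_arrive:                        # a new car arrives and parks
            spot, cost = choose_spot(occupied, strategy)
            occupied.add(spot)
            costs.append(cost)
    return sum(costs) / len(costs)

for s in ("meek", "optimistic", "prudent"):
    print(s, round(average_cost(s), 1))
```

Because this toy ignores competition between cars and uses arbitrary rates, its averages should not be read as reproducing the paper’s quantitative results; it is only meant to make the cost bookkeeping behind the three strategies explicit. (In this toy, the meek cluster also keeps drifting away from the target, so its average cost grows with the length of the run.)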
A man has been able to move all four of his paralysed limbs with a mind-controlled exoskeleton suit, French researchers report.
Thibault, 30, said taking his first steps in the suit felt like being the “first man on the Moon”.
His movements, particularly walking, are far from perfect and the robo-suit is being used only in the lab.
But researchers say the approach could one day improve patients’ quality of life.
Thibault had surgery to place two implants on the surface of the brain, covering the parts of the brain that control movement.
Sixty-four electrodes on each implant read the brain activity and beam the instructions to a nearby computer.
Sophisticated computer software reads the brainwaves and turns them into instructions for controlling the exoskeleton.
[…]
In 2017, he took part in the exoskeleton trial with Clinatec and the University of Grenoble.
Initially he practised using the brain implants to control a virtual character, or avatar, in a computer game, then he moved on to walking in the suit.
Video caption: Mind-controlled exoskeleton allows paralysed 30-year-old man to walk in French lab.
“It was like [being the] first man on the Moon. I didn’t walk for two years. I forgot what it is to stand, I forgot I was taller than a lot of people in the room,” he said.
It took a lot longer to learn how to control the arms.
“It was very difficult because it is a combination of multiple muscles and movements. This is the most impressive thing I do with the exoskeleton.”
[…]
“This is far from autonomous walking,” Prof Alim-Louis Benabid, the president of the Clinatec executive board, told BBC News.
[…]
In tasks where Thibault had to touch specific targets by using the exoskeleton to move his upper and lower arms and rotate his wrists, he was successful 71% of the time.
Prof Benabid, who developed deep brain stimulation for Parkinson’s disease, told the BBC: “We have solved the problem and shown the principle is correct. This is proof we can extend the mobility of patients in an exoskeleton.
[…]
At the moment they are limited by the amount of data they can read from the brain, send to a computer, interpret and send to the exoskeleton in real-time.
They have 350 milliseconds to go from thought to movement; otherwise the system becomes difficult to control.
That means that out of the 64 electrodes on each implant, the researchers are using only 32.
So there is still the potential to read the brain in more detail using more powerful computers and AI to interpret the information from the brain.
Scientists have discovered nitrogen- and oxygen- containing organic molecules in ice grains blown out by Saturn’s moon Enceladus, according to a new study.
Gas giants Saturn and Jupiter are orbited by some moons that almost seem more like planets themselves. One such moon is Saturn’s Enceladus, an icy orb thought to contain a very deep subsurface water ocean beneath a thick icy crust. Finding organic molecules on Enceladus is exciting, since water plus energy plus organic molecules might be the ingredients for life.
Enceladus blasted the material out in plumes from cracks in its south polar crust. The plumes carry a mixture of material from the moon’s rocky core and subsurface ocean. The Cassini mission flew through these plumes in 2004 and 2008, gathering data on the material with two of its instruments, the Ion and Neutral Mass Spectrometer (INMS) and the Cosmic Dust Analyser (CDA). For the new study, researchers based in Germany and the United States took a deeper look at the CDA’s data and found new organic compounds, according to the paper published in the Monthly Notices of the Royal Astronomical Society.
The molecules included amines, which are nitrogen- and oxygen-containing organic molecules similar to those on Earth that turn into amino acids. As a reminder, “organic” in this case simply means “containing carbon,” though these are the kind of compounds that can produce the complex molecules found in life on Earth.
[…]
Scientists have previously reported finding large organic molecules in Cassini data. This paper presents a new kind of molecule, one of interest to those hunting for life.
Their initial discovery had seemed like a contradiction because most other polymer fibres embrittle in the cold. But after many years of working on the problem, the group of researchers have discovered that silk’s cryogenic toughness is based on its nano-scale fibrils. Sub-microscopic order and hierarchy allow silk to withstand temperatures down to -200 °C, and possibly even lower, which would make these classic natural luxury fibres ideal for applications in the depths of chilly outer space.
The interdisciplinary team examined the behaviour and function of several animal silks cooled down to the liquid nitrogen temperature of -196 °C. The fibres included spider silks, but the study focused on the thicker and much more commercial fibres of the wild silkworm Antheraea pernyi.
In an article published today in Materials Chemistry Frontiers, the team was able to show not only ‘that’ but also ‘how’ silk increases its toughness under conditions where most materials would become very brittle. Indeed, silk seems to contradict the fundamental understanding of polymer science by not losing but gaining quality under really cold conditions by becoming both stronger and more stretchable. This study examines the ‘how’ and explains the ‘why’. It turns out that the underlying processes rely on the many nano-sized fibrils that make up the core of a silk fibre.
[…]
It would appear that this study has far-reaching implications by suggesting a broad spectrum of novel applications for silks, ranging from new materials for use in Earth’s polar regions, to novel composites for lightweight aeroplanes and kites flying in the stratosphere and mesosphere, to, perhaps, even giant webs spun by robot spiders to catch astro-junk in space.
Global shipping companies have spent billions rigging vessels with “cheat devices” that circumvent new environmental legislation by dumping pollution into the sea instead of the air, The Independent can reveal.
More than $12bn (£9.7bn) has been spent on the devices, known as open-loop scrubbers, which extract sulphur from the exhaust fumes of ships that run on heavy fuel oil.
This means the vessels meet standards demanded by the International Maritime Organisation (IMO) that kick in on 1 January.
However, the sulphur emitted by the ships is simply re-routed from the exhaust and expelled into the water around the ships, which not only greatly increases the volume of pollutants being pumped into the sea, but also increases carbon dioxide emissions.
The change could have a devastating effect on wildlife in British waters and around the world, experts have warned.
Researchers at Chalmers University of Technology, Sweden, have disproved the prevailing theory of how DNA binds itself. It is not, as is generally believed, hydrogen bonds which bind together the two sides of the DNA structure. Instead, water is the key. The discovery opens doors for new understanding in research in medicine and life sciences. The findings are published in PNAS.
DNA is constructed of two strands consisting of sugar molecules and phosphate groups. Between these two strands are nitrogen bases, the compounds that make up genes, with hydrogen bonds between them. Until now, it was commonly thought that those hydrogen bonds held the two strands together.
But now, researchers from Chalmers University of Technology show that the secret to DNA’s helical structure may be that the molecules have a hydrophobic interior, in an environment consisting mainly of water. The environment is therefore hydrophilic, while the DNA molecules’ nitrogen bases are hydrophobic, pushing away the surrounding water. When hydrophobic units are in a hydrophilic environment, they group together to minimize their exposure to the water.
[…]
“We have also shown that DNA behaves totally differently in a hydrophobic environment. This could help us to understand DNA, and how it repairs. Nobody has previously placed DNA in a hydrophobic environment like this and studied how it behaves, so it’s not surprising that nobody has discovered this until now.”
The researchers also studied how DNA behaves in an environment that is more hydrophobic than normal, a method they were the first to experiment with. They used the hydrophobic solution polyethylene glycol, and changed the DNA’s surroundings step-by-step from the naturally hydrophilic environment to a hydrophobic one. They aimed to discover if there is a limit where DNA starts to lose its structure, when the DNA does not have a reason to bind, because the environment is no longer hydrophilic. The researchers observed that when the solution reached the borderline between hydrophilic and hydrophobic, the DNA molecules’ characteristic spiral form started to unravel.
Upon closer inspection, they observed that when the base pairs split from one another (due to external influence, or simply from random movements), holes are formed in the structure, allowing water to leak in. Because DNA wants to keep its interior dry, it presses together, with the base pairs coming together again to squeeze out the water. In a hydrophobic environment, this water is missing, so the holes stay in place.
“Hydrophobic catalysis and a potential biological role of DNA unstacking induced by environment effects” is published in Proceedings of the National Academy of Sciences (PNAS).
There’s been a lot of research into how to give robots and prosthesis wearers a sense of touch, but it has focused largely on the hands. Now, researchers led by ETH Zurich want to restore sensory feedback for leg amputees, too. In a paper published in Nature Medicine today, the team describes how they modified an off-the-shelf prosthetic leg with sensors and electrodes to give wearers a sense of knee movement and feedback from the sole of the foot on the ground. While their initial sample size was small — just two users — the results are promising.
The researchers worked with two patients with above-the-knee, or transfemoral, amputations. They used an Össur prosthetic leg, which comes with a microprocessor and an angle sensor in the knee joint, IEEE Spectrum explains. The team then added an insole with seven sensors to the foot. Those sensors transmit signals in real-time, via Bluetooth to a controller strapped to the user’s ankle. An algorithm in the controller encodes the feedback into neural signals and delivers that to a small implant in the patient’s tibial nerve, at the back of the thigh. The brain can then interpret those signals as feedback from the knee and foot.
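Purely as an illustration of the kind of real-time loop described above, here is a sketch of the sensor-to-stimulation pipeline. The sensor stand-ins, update rate, channel mapping, and amplitude units are all invented for the example; this is not the ETH Zurich / Össur implementation.

```python
# Hypothetical sketch of a sensory-feedback loop: insole pressures and knee angle in,
# per-channel stimulation amplitudes out. Everything here is a placeholder.
import random
import time

N_INSOLE_SENSORS = 7        # pressure sensors under the sole, as described in the article
MAX_AMPLITUDE = 1.0         # normalized stimulation amplitude (placeholder units)

def read_insole():
    """Stand-in for the Bluetooth packet from the instrumented insole."""
    return [random.random() for _ in range(N_INSOLE_SENSORS)]

def read_knee_angle():
    """Stand-in for the prosthetic knee's built-in angle sensor (degrees)."""
    return random.uniform(0.0, 70.0)

def encode(pressures, knee_angle):
    """Map sensor readings to per-channel stimulation amplitudes for the nerve implant.
    A real encoder would be calibrated per patient; this is a linear placeholder."""
    foot_channels = [min(p, 1.0) * MAX_AMPLITUDE for p in pressures]
    knee_channel = (knee_angle / 70.0) * MAX_AMPLITUDE
    return foot_channels + [knee_channel]

for _ in range(100):        # bounded stand-in for the controller's real-time loop
    amplitudes = encode(read_insole(), read_knee_angle())
    print("stimulate:", [round(a, 2) for a in amplitudes])   # stand-in for implant output
    time.sleep(0.02)        # ~50 Hz update, an assumed rate
```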
The modified prosthetic helped the users walk faster, feel more confident and consume less oxygen — an indication that it was less strenuous than a traditional prosthesis. The team also tested activating the tibial nerve implant to relieve phantom limb pain. Both patients saw a significant reduction in pain after a few minutes of electrical stimulation, but they had to be connected to a device in a lab to receive the treatment. With more testing, the researchers hope they might be able to bring these technologies to more amputees and make both available outside of the lab.
A team of researchers from Zhejiang University and Xiamen University has found a way to repair human tooth enamel. In their paper published in the journal Science Advances, the group describes their process and how well it worked when tested.
[…]
The researchers first created extremely tiny (1.5-nanometer diameter) clusters of calcium phosphate, the main ingredient of natural enamel. Each of the tiny clusters was then prepared with the chemical compound triethylamine—doing so prevented the clusters from clumping together. The clusters were then mixed with a gel that was applied to a sample of crystalline hydroxyapatite—a material very similar to human enamel. Testing showed that the clusters fused with the tooth stand-in, and in doing so, created a layer that covered the sample. They further report that the layer was much more tightly arranged than prior teams had achieved with similar work. They claim that such tightness allowed the new material to fuse with the old as a single layer, rather than as multiple crystalline areas.
The team then carried out the same type of testing using real human teeth that had been treated with acid to remove the enamel. They report that within 48 hours of application, crystalline layers of approximately 2.7 micrometers had formed on the teeth. Close examination with a microscope showed that the layer had a fish-scale-like structure very similar to that of natural enamel. Physical testing showed the enamel to be nearly identical to natural enamel in strength and wear resistance.
The researchers note that more work is required before their technique can be used by dentists—primarily to make sure that it does not have any undesirable side effects.