The Linkielist

Linking ideas with the world

About Robin Edgar

Mozilla says Firefox won’t defang ad blockers – unlike Google Chrome, which is steadily stripping away third-party privacy protections

On Tuesday, Mozilla said it is not planning to change the ad-and-content blocking capabilities of Firefox to match what Google is doing in Chrome.

Google’s plan to revise its browser extension APIs, known as Manifest v3, follows from the web giant’s recognition that many of its products and services can be abused by unscrupulous developers. The search king refers to its product security and privacy audit as Project Strobe, “a root-and-branch review of third-party developer access to your Google account and Android device data.”

In a Chrome extension, the manifest file (manifest.json) tells the browser which files and capabilities (APIs) will be used. Manifest v3, proposed last year and still being hammered out, will alter and limit the capabilities available to extensions.

Developers who created extensions under Manifest v2 may have to revise their code to keep it working with future versions of Chrome. That may not be practical or possible in all cases, though. The developer of uBlock Origin, Raymond Hill, has said his web-ad-and-content-blocking extension will break under Manifest v3. It’s not yet clear whether uBlock Origin can or will be adapted to the revised API.

The most significant change under Manifest v3 is the deprecation of the blocking webRequest API (except for enterprise users), which lets extensions intercept incoming and outgoing browser data, so that the traffic can be modified, redirected or blocked.

Firefox not following

“In its place, Google has proposed an API called declarativeNetRequest,” explains Caitlin Neiman, community manager for Mozilla Add-ons (extensions), in a blog post.

“This API impacts the capabilities of content blocking extensions by limiting the number of rules, as well as available filters and actions. These limitations negatively impact content blockers because modern content blockers are very sophisticated and employ layers of algorithms to not only detect and block ads, but to hide from the ad networks themselves.”
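For context, the declarative model replaces extension code that inspects each request with static rules the browser itself evaluates. A minimal blocking rule under the proposed API might look like this (the hostname and resource types are made up for illustration):

```json
{
  "id": 1,
  "priority": 1,
  "action": { "type": "block" },
  "condition": {
    "urlFilter": "||ads.example.com^",
    "resourceTypes": ["script", "image"]
  }
}
```

Because extensions can only register a bounded list of such rules, the layered, adaptive filtering Neiman describes has no obvious equivalent.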

Mozilla offers Firefox developers the Web Extensions API, which is mostly compatible with the Chrome extensions platform and is supported by Chromium-based browsers Brave, Opera and Vivaldi. Those other three browser makers have said they intend to work around Google’s changes to the blocking webRequest API. Now, Mozilla says as much.

“We have no immediate plans to remove blocking webRequest and are working with add-on developers to gain a better understanding of how they use the APIs in question to help determine how to best support them,” said Neiman.

[…]

Google maintains, “We are not preventing the development of ad blockers or stopping users from blocking ads,” even as it acknowledges “these changes will require developers to update the way in which their extensions operate.”

Yet Google’s related web technology proposal two weeks ago to build a “privacy sandbox,” through a series of new technical specifications that would hinder anti-tracking mechanisms, has been dismissed as disingenuous “privacy gaslighting.”

On Friday, EFF staff technologist Bennett Cyphers lambasted the ad biz for its self-serving specs. “Google not only doubled down on its commitment to targeted advertising, but also made the laughable claim that blocking third-party cookies – by far the most common tracking technology on the Web, and Google’s tracking method of choice – will hurt user privacy,” he wrote in a blog post.

Source: Mozilla says Firefox won’t defang ad blockers – unlike a certain ad-giant browser • The Register

REVEALED: Hundreds of words to avoid using online if you don’t want the government spying on you

The Department of Homeland Security has been forced to release a list of keywords and phrases it uses to monitor social networking sites and online media for signs of terrorist or other threats against the U.S.

The intriguing list includes obvious choices such as ‘attack’, ‘Al Qaeda’, ‘terrorism’ and ‘dirty bomb’ alongside dozens of seemingly innocent words like ‘pork’, ‘cloud’, ‘team’ and ‘Mexico’.

Released under a freedom of information request, the information sheds new light on how government analysts are instructed to patrol the internet searching for domestic and external threats.

The words are included in the department’s 2011 ‘Analyst’s Desktop Binder’ used by workers at its National Operations Center, which instructs them to identify ‘media reports that reflect adversely on DHS and response activities’.

Department chiefs were forced to release the manual following a House hearing over documents obtained through a Freedom of Information Act lawsuit which revealed how analysts monitor social networks and media organisations for comments that ‘reflect adversely’ on the government.

However, they insisted the practice was aimed not at policing the internet for disparaging remarks about the government and signs of general dissent, but at providing awareness of any potential threats.

As well as terrorism, analysts are instructed to search for evidence of unfolding natural disasters, public health threats and serious crimes such as mall or school shootings, major drug busts and illegal immigration arrests.

The list has been posted online by the Electronic Privacy Information Center – a privacy watchdog group that filed a request under the Freedom of Information Act before suing to obtain the release of the documents.

In a letter to the House Homeland Security Subcommittee on Counter-terrorism and Intelligence, the centre described the choice of words as ‘broad, vague and ambiguous’.

They point out that it includes ‘vast amounts of First Amendment protected speech that is entirely unrelated to the Department of Homeland Security mission to protect the public against terrorism and disasters.’

A senior Homeland Security official told the Huffington Post that the manual ‘is a starting point, not the endgame’ in maintaining situational awareness of natural and man-made threats and denied that the government was monitoring signs of dissent.

However the agency admitted that the language used was vague and in need of updating.

Spokesman Matthew Chandler told the website: ‘To ensure clarity, as part of … routine compliance review, DHS will review the language contained in all materials to clearly and accurately convey the parameters and intention of the program.’

MIND YOUR LANGUAGE: THE LIST OF KEYWORDS IN FULL

[The full keyword list was published as a series of images; see the source article for the complete list.]

Source: REVEALED: Hundreds of words to avoid using online if you don’t want the government spying on you | Daily Mail Online

Basically you’re being censored through the use of unnecessary, ubiquitous surveillance – by a democracy.

Scammer Successfully Deepfaked CEO’s Voice To Fool Underling Into Transferring $243,000

The CEO of an energy firm based in the UK thought he was following his boss’s urgent orders in March when he transferred funds to a third-party. But the request actually came from the AI-assisted voice of a fraudster.

The Wall Street Journal reports that the mark believed he was speaking to the CEO of his business’s parent company based in Germany. The German-accented caller told him to send €220,000 ($243,000 USD) to a Hungarian supplier within the hour. The firm’s insurance company, Euler Hermes Group SA, shared information about the crime with WSJ but would not reveal the name of the targeted businesses.

Euler Hermes fraud expert Rüdiger Kirsch told WSJ that the victim recognized his superior’s voice because it had a hint of a German accent and the same “melody.” This was reportedly the first time Euler Hermes has dealt with clients being affected by crimes that used AI mimicry.

Source: Scammer Successfully Deepfaked CEO’s Voice To Fool Underling Into Transferring $243,000

A way to repair tooth enamel

A team of researchers from Zhejiang University and Xiamen University has found a way to repair human tooth enamel. In their paper published in the journal Science Advances, the group describes their process and how well it worked when tested.

[…]

The researchers first created extremely tiny (1.5-nanometer diameter) clusters of calcium phosphate, the main ingredient of natural enamel. Each of the tiny clusters was then prepared with triethylamine—doing so prevented the clusters from clumping together. The clusters were then mixed with a gel that was applied to a sample of crystalline hydroxyapatite—a material very similar to human enamel. Testing showed that the clusters fused with the stand-in, and in doing so, created a layer that covered the sample. They further report that the layer was much more tightly arranged than prior teams had achieved with similar work. They claim that such tightness allowed the new material to fuse with the old as a single layer, rather than multiple crystalline areas.

The team then carried out the same type of testing using real human teeth that had been treated with acid to remove the enamel. They report that within 48 hours of application, crystalline layers of approximately 2.7 micrometers had formed on the teeth. Close examination with a microscope showed that the layer had a fish-scale-like structure very similar to that of natural enamel. Physical testing showed the repaired enamel to be nearly identical to natural enamel in strength and wear resistance.

The researchers note that more work is required before their technique can be used by dentists—primarily to make sure that it does not have any undesirable side effects.

Source: A way to repair tooth enamel

ESA satellite dodges a “mega constellation” – Musk’s Starlink satellites

The European Space Agency (ESA) accomplished a first today: moving one of its satellites away from a potential collision with a “mega constellation”.

The constellation in question was SpaceX’s Starlink, and the firing of the thrusters of the Aeolus Earth observation satellite was designed to raise the orbit of the spacecraft to allow SpaceX’s satellite to pass beneath without risking a space slam.

The ESA operations team confirmed that this morning’s manoeuvre took place approximately half an orbit before the potential pileup. It also warned that, with further Starlink satellites in the pipeline and other constellations from the likes of Amazon due to launch, performing such moves manually would soon become impossible.

If plans to orbit thousands more satellites (to bring broadband to remote areas, or inflict it on air-travellers, for example) come to fruition, the ESA team reckons that things will need to be a lot more automated. Acronyms such as AI have been bandied around to create debris and constellation avoidance systems that move faster than the current human-based approach.

We contacted SpaceX to get its take on ESA’s antics, but nothing has yet emerged from Musk’s media orifice. If it does, we will update this article accordingly.

While this is a first for a “mega constellation”, ESA is well practised at dodging satellites, although mostly dead ones (or debris). In 2018, the boffins keeping track of things had to perform 28 manoeuvres. A swerve to miss an active spacecraft is, however, unusual.

Aeolus itself was launched on 22 August 2018, and is designed to acquire profiles of the Earth’s winds, handy for understanding the dynamics of weather and improving forecasting.

You can make your own joke about nervous squeaks of flatulence as scientists realised that the spacecraft, designed to spend just over three years in orbit, was headed toward a possible mash-up with one of Musk’s finest.

The incident serves as a timely reminder of the risks of flinging up thousands of small satellites to blanket the Earth with all manner of services. Keeping the things out of the way of each other and those spacecraft with more scientific goals will be an ever increasing challenge if the plans of Musk et al become a reality.

Source: Everyone remembers their first time: ESA satellite dodges a “mega constellation” • The Register

Up to 2% of all Apple iPhones hacked, says Google – breaking all messaging encryption as well as sending location data

The potential impact of the latest attack on iPhones is massive, not to mention hugely concerning for every user of Apple’s famous smartphone.

That simply visiting a website can lead to your iPhone being hacked silently by some unknown party is worrying enough. But given that, according to Google researchers, it’s possible for the hackers to access encrypted messages on WhatsApp, iMessage, Telegram and others, the attacks undermine the security promised by those apps. It’s a stark reminder that should Apple’s iOS be compromised by hidden malware, encryption can be entirely undone. Own the operating system, own everything inside.

Among the trove of data released by Google researcher Ian Beer on the attacks was detail on the “monitoring implant” hackers installed on the iPhone. He noted that it had access to all the database files on the victim’s phone used by those end-to-end encrypted apps. Those databases “contain the unencrypted, plain-text of the messages sent and received using the apps.”

The implant would also enable hackers to snoop on Gmail and Google Hangouts, contacts and photos. The hackers could also watch where users were going with a live GPS location tracker. And the malware stole the “keychain” where passwords, such as those for all remembered Wi-Fi points, are stored.

Shockingly, according to Beer, the hackers didn’t even bother encrypting the data they were stealing, making a further mockery of encrypted apps. “Everything is in the clear. If you’re connected to an unencrypted Wi-Fi network, this information is being broadcast to everyone around you, to your network operator and any intermediate network hops to the command and control server,” the Google researcher wrote. “This means that not only is the end-point of the end-to-end encryption offered by messaging apps compromised; the attackers then send all the contents of the end-to-end encrypted messages in plain text over the network to their server.”
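Beer’s point generalises: end-to-end encryption protects messages in transit, but delivered messages typically sit in app databases readable by anything with the app’s filesystem privileges. A toy sketch of why a compromised OS needs no cryptographic break at all (schema and data invented for illustration):

```python
# End-to-end encryption covers the wire, not the device: once delivered,
# messages commonly live as plain text in an app's local database.
import sqlite3

# Stand-in for a messaging app's local message store (hypothetical schema).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE messages (sender TEXT, body TEXT)")
db.execute("INSERT INTO messages VALUES ('alice', 'meet at 6pm')")
db.commit()

# An implant with filesystem access simply queries the database,
# exactly as the app itself would - no keys required.
stolen = db.execute("SELECT sender, body FROM messages").fetchall()
print(stolen)  # plaintext message contents
```

This is why owning the operating system, as the article puts it, means owning everything inside.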

Beer’s ultimate assessment is sobering: “The implant has access to almost all of the personal information available on the device, which it is able to upload, unencrypted, to the attacker’s server.”

And, Beer added, even once the iPhone has been cleaned of infection (which would happen on a device restart or with the patch applied), the information the hackers pilfered could be used to maintain access to people’s accounts. “Given the breadth of information stolen, the attackers may nevertheless be able to maintain persistent access to various accounts and services by using the stolen authentication tokens from the keychain, even after they lose access to the device.”

iPhone users should upgrade to the latest iOS as soon as they can to get a patch for the flaw, which was fixed earlier this year. Apple did not comment.

[…]

Avraham said he’d analyzed many cases of attacks on iPhones and iPads. He said he wouldn’t be surprised if the number of remotely infected iOS devices was anywhere between 0.1% and 2% of all 1 billion iPhones in use. That’d be anywhere from 1 million to 20 million devices.
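The range follows directly from the percentages; a quick back-of-the-envelope check:

```python
# Avraham's estimate: 0.1% to 2% of roughly 1 billion iPhones in use.
total_iphones = 1_000_000_000
low_count = int(total_iphones * 0.001)   # 0.1% infected
high_count = int(total_iphones * 0.02)   # 2% infected
print(low_count, high_count)  # 1000000 20000000
```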

“The only way to fight back is to patch vulnerabilities used as part of exploit chains while strategic mitigations are developed. This cannot be done effectively solely by Apple without the help of the security community,” Avraham added.

“Unfortunately the security community cannot help much due to Apple’s own restrictions. The current sandbox policies do not allow security analysts to extract malware from the device even if the device is compromised.”

Source: Apple iPhone Hack Exposed By Google Breaks WhatsApp Encryption

Some of The World’s Most-Cited Scientists Have Been Citing Themselves Through Citation Farms

A new study has revealed an unsettling truth about the citation metrics that are commonly used to gauge scientists’ level of impact and influence in their respective fields of research.

Citation metrics indicate how often a scientist’s research output is formally referenced by colleagues in the footnotes of their own papers – but a comprehensive analysis of this web of linkage shows the system is compromised by a hidden pattern of behaviour that often goes unnoticed.

Specifically, among the 100,000 most cited scientists between 1996 to 2017, there’s a stealthy pocket of researchers who represent “extreme self-citations and ‘citation farms’ (relatively small clusters of authors massively citing each other’s papers),” explain the authors of the new study, led by physician turned meta-researcher John Ioannidis from Stanford University.

[…]

Among the 100,000 most highly cited scientists for the period of 1996 to 2017, over 1,000 researchers self-cited more than 40 percent of their total citations – and over 8,500 researchers had greater than 25 percent self-citations.
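The metric behind those thresholds is simple to state: the fraction of an author’s received citations that come from papers they themselves co-authored. A toy sketch, with invented data and a deliberately naive definition (the study’s actual methodology is more involved):

```python
# Toy self-citation rate: share of citations to `author`'s work that come
# from papers `author` co-authored. Data is hypothetical.
def self_citation_rate(author, citing_papers):
    """citing_papers: one author-set per paper that cites `author`'s work."""
    papers = list(citing_papers)
    self_cites = sum(1 for authors in papers if author in authors)
    return self_cites / len(papers)

citing_papers = [
    {"a", "b"},  # self-citation: 'a' cites their own earlier work
    {"c"},       # independent citation
    {"a"},       # self-citation
    {"d"},       # independent citation
]
rate = self_citation_rate("a", citing_papers)
print(rate)  # 0.5 -> above the 40% level the study flags as extreme
```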

There’s no suggestion that any of these self-citations are necessarily or automatically unethical or unwarranted or self-serving in themselves. After all, in some cases, your own published scientific research may be the best and most relevant source to link to.

But the researchers behind the study nonetheless suggest that the prevalence of extreme cases revealed in their analysis debases the value of citation metrics as a whole – which are often used as a proxy of a scientist’s standing and output quality (not to mention employability).

“With very high proportions of self-citations, we would advise against using any citation metrics since extreme rates of self-citation may herald also other spurious features,” the authors write.

“These need to be examined on a case-by-case basis for each author, and simply removing the self-citations may not suffice.”

[…]

“When we link professional advancement and pay attention too strongly to citation-based metrics, we incentivise self-citation,” psychologist Sanjay Srivastava from the University of Oregon, who wasn’t involved in the study, told Nature.

“Ultimately, the solution needs to be to realign professional evaluation with expert peer judgement, not to double down on metrics.”

The findings are reported in PLOS Biology.

Source: Some of The World’s Most-Cited Scientists Have a Secret That’s Just Been Exposed

Don’t fly with your Explody MacBook!

Following an Apple notice that a “limited number” of 15-inch MacBook Pros may have faulty batteries that could potentially create a fire safety risk, multiple airlines have barred transporting Apple laptops in their checked luggage—in some cases, regardless of whether they fall under the recall.

Bloomberg reported Wednesday that Qantas Airways and Virgin Australia had joined the growing list of airlines enforcing policies around the MacBook Pros. In a statement by email, a spokesperson for Qantas told Gizmodo that “[u]ntil further notice, all 15 inch Apple MacBook Pros must be carried in cabin baggage and switched off for flight following a recall notice issued by Apple.”

Virgin Australia, meanwhile, said in a “Dangerous Goods” notice on its website that any MacBook model “must be placed in carry-on baggage only. No Apple MacBooks are permitted in checked in baggage until further notice.”

Apple in June announced a voluntary recall program for the affected models of 15-inch Retina display MacBook Pro, which it said were sold between September 2015 and February 2017. Apple said at the time it would fix affected models for free, adding that “[c]ustomer safety is always Apple’s top priority.”

Apple did not immediately return a request for comment about airline policies implemented in response to the recall.

Both Singapore Airlines and Thai Airways also recently instituted policies around the MacBook Pros. In a statement on its website over the weekend, Singapore Airlines said that passengers are prohibited from bringing affected models on its aircraft either in their carry-ons or in their checked luggage “until the battery has been verified as safe or replaced by the manufacturer.”

Bloomberg previously reported that airlines TUI Group Airlines, Thomas Cook Airlines, Air Italy, and Air Transat also introduced bans on the laptops. The cargo activity of all four is managed by Total Cargo Expertise, which reportedly said in an internal notice to its staff that the affected devices are “prohibited on board any of our mandate carriers.”

Both the Federal Aviation Administration and European Union Aviation Safety Agency said they had contacted airlines following Apple’s announcement regarding the recall. The FAA said that it alerted U.S. carriers to the issue in July.

Apple allows MacBook users to see if their devices are affected by inputting a serial number. While checking individual serial numbers for each and every device that comes through security checkpoints has the potential to slow service, banning all MacBooks either outright or in the cabin seems like a severe overreaction and, to be honest, a gigantic pain in the ass for customers.

Source: Airlines Are Banning MacBooks From Checked Luggage

I’d say removing MacBooks from checked-in luggage and then checking whether the serials are affected or not would take a stupid amount of time. Banning them from checked-in luggage makes perfect sense.

MIT Researchers Build Functional Carbon Nanotube Microprocessor

Scientists at MIT built a 16-bit microprocessor out of carbon nanotubes and even ran a program on it, a new paper reports.

Silicon-based computer processors seem to be approaching a limit to how small they can be scaled, so researchers are looking for other materials that might make for useful processors. It appears that transistors made from tubes of rolled-up, single-atom-thick sheets of carbon, called carbon nanotubes, could one day have more computational power while requiring less energy than silicon.

[…]

the MIT group, led by Gage Hills and Christian Lau, has now debuted a functional 16-bit processor called RV16X-NANO that uses carbon nanotubes, rather than silicon, for its transistors. The processor was constructed using the same industry-standard processes behind silicon chips—senior author Max Shulaker explained that it’s basically just a silicon microprocessor with carbon nanotubes instead of silicon.

The processor works well enough to run HELLO WORLD, a program that simply outputs the phrase “HELLO WORLD” and is the first program that most coding students learn. Shulaker compared its performance to a processor you’d buy at a hobby shop to control a small robot.

[…]

A small but notable fraction of carbon nanotubes act like conductors instead of semiconductors. Shulaker explained that study author Hills devised a technique called DREAM, where the circuits were specifically designed to work despite the presence of metallic nanotubes. And of course, the effort relied on the contribution of every member of the relatively small team. The researchers published their results in the journal Nature today.

[…]

Ultimately, the goal isn’t to erase the decades of progress made by silicon microchips—perhaps companies can integrate carbon nanotube pieces into existing architectures.

This is still a proof-of-concept. The team still hasn’t calculated the chip’s performance or whether it’s actually more energy efficient than silicon—the gains are based on projections. But Shulaker hopes that the team’s work will serve as a roadmap toward incorporating carbon nanotubes in computers for the future.

Source: MIT Researchers Build Functional Carbon Nanotube Microprocessor

MIT Researchers Design Robotic Thread that navigates Human Brains to clear clots

Robotics engineers at MIT have built a threadlike robot worm that can be magnetically steered to deftly navigate the extremely narrow and winding arterial pathways of the human brain. One day it could be used to quickly clear blockages and clots that contribute to strokes and aneurysms.

[…]

Strokes are a leading cause of death and disability in the United States, but relieving blood vessel blockages within the first 90 minutes of treatment has been found to dramatically increase survival rates of patients. The process is a complicated one, however, requiring skilled surgeons to manually guide a thin wire through a patient’s arteries up into a damaged brain vessel followed by a catheter that can deliver treatments or simply retrieve a clot. Not only is there the potential for these wires to damage vessel linings as they inch through the body, but during the process, surgeons are exposed to excess radiation from a fluoroscope which guides them by generating x-ray images in real-time. There’s a lot of room for improvement.

Using their expertise in both water-based biocompatible hydrogels, and the use of magnets to manipulate simple machines, the MIT engineers created a robotic worm featuring a pliable nickel-titanium alloy core with memory shape characteristics so that when bent it returns to its original shape. The core was then coated in a rubbery paste that was embedded with magnetic particles, which was then wrapped in an outer coating of hydrogels allowing the robotic worm to glide through arteries and blood vessels without any friction that could potentially cause damage.

The robot was tested on a small obstacle course featuring a twisting path of small rings guided by a strong magnet that could be operated at enough distance to be placed outside a patient. The engineers also mocked up a life-size replica of a brain’s blood vessels and found that not only could the robot easily navigate that obstacle but that there was also the potential to upgrade it with additional tools like a delivery mechanism for clot reducing drugs. They even successfully replaced the worm’s metal core with an optical cable, so that once it reached its destination, it could deliver powerful laser pulses to help remove a blockage.

The robot would not only make the post-stroke procedure faster, but it would also reduce the exposure to radiation that surgeons often have to endure. And while it was tested using a manually operated magnet to steer it, eventually machines could be built to control the position of the magnet (MRI machines already surround patients in intense magnetic fields) with improved accuracy, which would in turn further improve and accelerate the robot’s journey through a patient’s body.

Source: MIT Researchers Designed this Robotic Worm to Burrow Into Human Brains

A bit unsure why the original article is so down on the concept and wants to frame it negatively, but oh well.

Irish Teen Wins 2019 Google Science Fair For Removing Microplastics From Water

An Irish teenager just won $50,000 for his project focusing on extracting microplastics from water.

Google launched the Google Science Fair in 2011 where students ages 13 through 18 can submit experiments and their results in front of a panel of judges. The winner receives $50,000. The competition is also sponsored by Lego, Virgin Galactic, National Geographic and Scientific American.

Fionn Ferreira, an 18-year-old from West Cork, Ireland won the competition for his methodology to remove microplastics from water.

Microplastics are defined as having a diameter of 5mm or less and are too small for filtering or screening during wastewater treatment. Microplastics are often included in soaps, shower gels, and facial scrubs for their ability to exfoliate the skin. Microplastics can also come off clothing during normal washing.

These microplastics then make their way into waterways and are virtually impossible to remove through filtration. Small fish are known to eat microplastics and as larger fish eat smaller fish these microplastics are concentrated into larger fish species that humans consume.

Ferreira used a combination of oil and magnetite powder to create a ferrofluid in the water containing microplastics. The microplastics combined with the ferrofluid which was then extracted.

After the microplastics bound to the ferrofluid, Ferreira used a magnet to remove the solution and leave only water.

After 1,000 tests, the method was 87% effective in removing microplastics of all sorts from water. The most effectively removed were microplastics from washing machines, with polypropylene plastics being the hardest to remove.

With the confirmation of the methodology, Ferreira hopes to scale the technology to be able to implement at wastewater treatment facilities.

This would prevent the microplastics from ever reaching waterways and the ocean. While reduction in the use of microplastics is the ideal scenario, this methodology presents a new opportunity to screen for microplastics before they are consumed as food by fish.

At 18 Ferreira has an impressive array of accomplishments. He is the curator at the Schull Planetarium, speaks 3 languages fluently, won 12 previous science fair competitions, plays the trumpet in an orchestra and has a minor planet named after him by MIT.

Source: Irish Teen Wins 2019 Google Science Fair For Removing Microplastics From Water

Electric Dump Truck Produces More Energy Than It Uses

Electric vehicles are everywhere now. It’s more than just Leafs, Teslas, and a wide variety of electric bikes. It’s also trains, buses, and in this case, gigantic dump trucks. This truck in particular is being put to work at a mine in Switzerland and, as a consequence of having an electric drivetrain, is actually able to produce more power than it consumes. (Google Translate from Portuguese)

This isn’t some impossible perpetual motion machine, either. The dump truck drives up a mountain with no load, and carries double the weight back down the mountain after getting loaded up with lime and marl to deliver to a cement plant. Since electric vehicles can recover energy through regenerative braking, rather than wasting that energy as heat in a traditional braking system, the extra weight on the way down actually delivers more energy to the batteries than the truck used on the way up the mountain.
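The arithmetic behind that claim is plain gravitational potential energy: the loaded descent has more recoverable energy than the empty climb costs. All numbers below (climb height, masses, efficiencies) are illustrative assumptions, not the mine’s real figures:

```python
# Why an electric dump truck can end a round trip with net charge:
# it climbs empty and descends loaded, and regenerative braking recovers
# much of the loaded descent's potential energy. Illustrative numbers only.
g = 9.81                                   # m/s^2
h = 400.0                                  # assumed climb height, metres
m_empty, m_loaded = 45_000.0, 110_000.0    # kg: empty climb vs loaded descent
eta_drive, eta_regen = 0.85, 0.75          # assumed drivetrain/regen efficiencies

e_up = m_empty * g * h / eta_drive         # battery energy spent climbing
e_down = m_loaded * g * h * eta_regen      # battery energy recovered descending
kwh = 3.6e6                                # joules per kWh
print(f"spent {e_up/kwh:.0f} kWh, recovered {e_down/kwh:.0f} kWh")
```

With these assumptions the recovered energy comfortably exceeds the climb cost, which is the whole trick: remove the payload from the descent and the balance flips negative.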

The article claims that this is the largest electric vehicle in the world at 110 tons, and although we were not able to find anything larger except the occasional electric train, this is still an impressive feat of engineering that shows that electric vehicles have a lot more utility than novelties or simple passenger vehicles.

Source: Electric Dump Truck Produces More Energy Than It Uses | Hackaday

IBM open sources Adversarial Robustness 360 toolbox for AI

This is a library dedicated to adversarial machine learning. Its purpose is to allow rapid crafting and analysis of attacks and defense methods for machine learning models. ART provides implementations of many state-of-the-art methods for attacking and defending classifiers.
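To make “attacking classifiers” concrete, here is a minimal, self-contained sketch of the fast gradient sign method (FGSM), one of the attack families ART implements — written in plain NumPy against a made-up logistic classifier rather than through ART’s own API:

```python
# FGSM sketch: perturb an input along the sign of the loss gradient so a
# classifier flips its decision. Weights and input are invented stand-ins.
import numpy as np

w, b = np.array([2.0, -3.0, 1.0]), 0.5         # stand-in "trained" parameters

def predict(x):
    """Probability of class 1 under a logistic model."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

x = np.array([0.5, 0.2, 0.1])                   # a correctly classified input
y = 1.0                                         # its true label

# For a logistic model the cross-entropy loss gradient wrt x is (p - y) * w;
# FGSM steps along the sign of that gradient to maximise the loss.
grad = (predict(x) - y) * w
x_adv = x + 0.3 * np.sign(grad)

p_clean, p_adv = predict(x), predict(x_adv)
print(p_clean, p_adv)  # confidence in the true class collapses on x_adv
```

ART packages attacks like this (and defences against them) behind a common estimator interface so they can be applied to real trained models.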

Documentation for ART: https://adversarial-robustness-toolbox.readthedocs.io

https://github.com/IBM/adversarial-robustness-toolbox

IBM releases AI Fairness 360 tool open source

The AI Fairness 360 toolkit is an open-source library to help detect and remove bias in machine learning models. The AI Fairness 360 Python package includes a comprehensive set of metrics for datasets and models to test for biases, explanations for these metrics, and algorithms to mitigate bias in datasets and models.

The AI Fairness 360 interactive experience provides a gentle introduction to the concepts and capabilities. The tutorials and other notebooks offer a deeper, data scientist-oriented introduction. The complete API is also available.

Because the toolkit is such a comprehensive set of capabilities, it may be confusing to figure out which metrics and algorithms are most appropriate for a given use case. To help, we have created some guidance material that can be consulted.
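One of the simplest metrics in the fairness family is statistical parity difference: the rate of favourable outcomes for the unprivileged group minus that for the privileged group. A sketch re-implemented from the definition (invented data; this is not the AIF360 API itself):

```python
# Statistical parity difference:
#   P(favourable | unprivileged) - P(favourable | privileged)
# 0 means parity; negative means the unprivileged group is favoured less.
def statistical_parity_difference(outcomes, groups, unprivileged):
    unpriv = [o for o, g in zip(outcomes, groups) if g == unprivileged]
    priv = [o for o, g in zip(outcomes, groups) if g != unprivileged]
    return sum(unpriv) / len(unpriv) - sum(priv) / len(priv)

# 1 = favourable outcome (e.g. loan approved); group 'A' is unprivileged.
outcomes = [1, 0, 0, 1, 1, 1, 1, 0]
groups   = ['A', 'A', 'A', 'A', 'B', 'B', 'B', 'B']
spd = statistical_parity_difference(outcomes, groups, 'A')
print(spd)  # 0.5 - 0.75 = -0.25: group 'A' is approved less often
```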

https://github.com/IBM/AIF360

IBM releases AI Explainability tools

The AI Explainability 360 toolkit is an open-source library that supports interpretability and explainability of datasets and machine learning models. The AI Explainability 360 Python package includes a comprehensive set of algorithms that cover different dimensions of explanations along with proxy explainability metrics.

The AI Explainability 360 interactive experience provides a gentle introduction to the concepts and capabilities by walking through an example use case for different consumer personas. The tutorials and example notebooks offer a deeper, data scientist-oriented introduction. The complete API is also available.

There is no single approach to explainability that works best. There are many ways to explain: data vs. model, directly interpretable vs. post hoc explanation, local vs. global, etc. It may therefore be confusing to figure out which algorithms are most appropriate for a given use case. To help, we have created some guidance material and a chart that can be consulted.
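As a flavor of the "post hoc, local" corner of that design space (a generic illustration, not AIX360's API): treat the model as a black box and estimate how sensitive its output is to each feature around one specific instance:

```python
def local_sensitivities(predict, x, delta=1e-4):
    """Finite-difference sensitivity of a black-box model around instance x.

    Returns one number per feature: how much the prediction moves when
    that feature is nudged by delta. A crude local, post-hoc explanation."""
    base = predict(x)
    sens = []
    for i in range(len(x)):
        nudged = list(x)
        nudged[i] += delta
        sens.append((predict(nudged) - base) / delta)
    return sens

# Black-box stand-in for a trained model
model = lambda x: 2.0 * x[0] - 3.0 * x[1] + 0.5
expl = local_sensitivities(model, [1.0, 1.0])
print([round(s, 3) for s in expl])  # [2.0, -3.0]: feature 0 pushes up, feature 1 down
```

Directly interpretable models, global explanations, and rule-based methods occupy the other corners of the taxonomy the guidance material describes.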

Github link

ITER is making a mini sun to power the earth

In southern France, 35 nations are collaborating to build the world’s largest tokamak, a magnetic fusion device that has been designed to prove the feasibility of fusion as a large-scale and carbon-free source of energy based on the same principle that powers our Sun and stars.

The experimental campaign that will be carried out at ITER is crucial to advancing fusion science and preparing the way for the fusion power plants of tomorrow.

ITER will be the first fusion device to produce net energy. ITER will be the first fusion device to maintain fusion for long periods of time. And ITER will be the first fusion device to test the integrated technologies, materials, and physics regimes necessary for the commercial production of fusion-based electricity.

Thousands of engineers and scientists have contributed to the design of ITER since the idea for an international joint experiment in fusion was first launched in 1985. The ITER Members—China, the European Union, India, Japan, Korea, Russia and the United States—are now engaged in a 35-year collaboration to build and operate the ITER experimental device, and together bring fusion to the point where a demonstration fusion reactor can be designed.
[…]
Three conditions must be fulfilled to achieve fusion in a laboratory: very high temperature (on the order of 150,000,000° Celsius); sufficient plasma particle density (to increase the likelihood that collisions do occur); and sufficient confinement time (to hold the plasma, which has a propensity to expand, within a defined volume).
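These three conditions are often combined into the fusion "triple product" n·T·τ; for deuterium-tritium fuel a threshold of roughly 3×10²¹ keV·s/m³ is commonly quoted for ignition. A back-of-the-envelope check with hypothetical ITER-like parameters (illustrative numbers, not official ITER figures):

```python
# Hypothetical ITER-like plasma parameters (assumptions for illustration)
n        = 1.0e20   # plasma particle density, per m^3
T_deg    = 150e6    # temperature, ~150 million degrees (C vs K is negligible here)
tau      = 3.0      # energy confinement time, seconds

T_keV = T_deg / 1.16e7           # 1 keV corresponds to ~11.6 million kelvin
triple_product = n * T_keV * tau # keV * s / m^3

LAWSON_DT = 3e21  # commonly quoted D-T ignition threshold, keV*s/m^3
print(triple_product >= LAWSON_DT)  # True for these assumed values
```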


At extreme temperatures, electrons are separated from nuclei and a gas becomes a plasma—often referred to as the fourth state of matter. Fusion plasmas provide the environment in which light elements can fuse and yield energy.


In a tokamak device, powerful magnetic fields are used to confine and control the plasma.

[…]

The tokamak is an experimental machine designed to harness the energy of fusion. Inside a tokamak, the energy produced through the fusion of atoms is absorbed as heat in the walls of the vessel. Just like a conventional power plant, a fusion power plant will use this heat to produce steam and then electricity by way of turbines and generators.

The heart of a tokamak is its doughnut-shaped vacuum chamber. Inside, under the influence of extreme heat and pressure, gaseous hydrogen fuel becomes a plasma—the very environment in which hydrogen atoms can be brought to fuse and yield energy. (You can read more on this particular state of matter here.) The charged particles of the plasma can be shaped and controlled by the massive magnetic coils placed around the vessel; physicists use this important property to confine the hot plasma away from the vessel walls. The term “tokamak” comes to us from a Russian acronym that stands for “toroidal chamber with magnetic coils.”

First developed by Soviet research in the late 1960s, the tokamak has been adopted around the world as the most promising configuration of magnetic fusion device. ITER will be the world’s largest tokamak—twice the size of the largest machine currently in operation, with ten times the plasma chamber volume.

[…]
Taken together, the ITER Members represent three continents, over 40 languages, half of the world’s population and 85 percent of global gross domestic product. In the offices of the ITER Organization and those of the seven Domestic Agencies, in laboratories and in industry, literally thousands of people are working toward the success of ITER.
[…]
ITER’s First Plasma is scheduled for December 2025. That will be the first time the machine is powered on, and the first act of ITER’s multi-decade operational program.

On a cleared, 42-hectare site in the south of France, building has been underway since 2010. The ground support structure and the seismic foundations of the ITER Tokamak are in place and work is underway on the Tokamak Complex—a suite of three buildings that will house the fusion experiments. Auxiliary plant buildings such as the ITER cryoplant, the radio frequency heating building, and facilities for cooling water, power conversion, and power supply are taking shape all around the central construction site.

[…]

ITER Timeline

2005: Decision to site the project in France
2006: Signature of the ITER Agreement
2007: Formal creation of the ITER Organization
2007-2009: Land clearing and levelling
2010-2014: Ground support structure and seismic foundations for the Tokamak
2012: Nuclear licensing milestone: ITER becomes a Basic Nuclear Installation under French law
2014-2021: Construction of the Tokamak Building (access for assembly activities in 2019)
2010-2021: Construction of the ITER plant and auxiliary buildings for First Plasma
2008-2021: Manufacturing of principal First Plasma components
2015-2023: Largest components are transported along the ITER Itinerary
2020-2025: Main assembly phase I
2022: Torus completion
2024: Cryostat closure
2024-2025: Integrated commissioning phase (commissioning by system starts several years earlier)
Dec 2025: First Plasma
2026: Begin installation of in-vessel components
2035: Deuterium-Tritium Operation begins

Throughout the ITER construction phase, the Council will closely monitor the performance of the ITER Organization and the Domestic Agencies through a series of high-level project milestones. See the Milestones page for a series of incremental milestones on the way to First Plasma.

Source: What is ITER?

From the FAQ: the EU seems to be paying $17bn (it is responsible for almost half the project costs), and there is around $1bn in deactivation and decommissioning costs, making the total around $35bn – as far as they can figure out. That’s a staggering science project!

Lenovo Solution Centre can turn users into admins – Lenovo moved LSC’s end-of-life date to before the last release in response

Not only has a vulnerability been found in Lenovo Solution Centre (LSC), but the laptop maker fiddled with end-of-life dates to make it seem less important – and is now telling the world it EOL’d the vulnerable monitoring software before its final version was released.

The LSC privilege-escalation vuln (CVE-2019-6177) was found by Pen Test Partners (PTP), which said it has existed in the code since it first began shipping in 2011. It was bundled with the vast majority of the Chinese manufacturer’s laptops and other devices, and requires Windows to run. If you removed the app, or blew it away with a Linux install, say, you’re safe right now.

[…]

The solution? Uninstall Lenovo Solution Centre, and if you’re really keen you can install Lenovo Vantage and/or Lenovo Diagnostics to retain the same branded functionality, albeit without the priv-esc part.

All straightforward. However, it went a bit awry when PTP reported the vuln to Lenovo. “We noticed they had changed the end-of-life date to make it look like it went end of life even before the last version was released,” they told us.

Screenshots of the end-of-life dates – initially 30 November 2018, and then suddenly April 2018 after the bug was disclosed – can be seen on the PTP blog. The last official release of the software is dated October 2018, so Lenovo appears to have moved the EOL date back to April of that year for some reason.

Source: Security gone in 600 seconds: Make-me-admin hole found in Lenovo Windows laptop crapware. Delete it now • The Register

Why do tech companies file so many weird patents?

There are lots of reasons to patent something. The most obvious one is that you’ve come up with a brilliant invention, and you want to protect your idea so that nobody can steal it from you. But that’s just the tip of the patent strategy iceberg. It turns out there is a whole host of strategies that lead to “zany” or “weird” patent filings, and understanding them offers a window not just into the labyrinthine world of the U.S. Patent and Trademark Office and its potential failings, but also into how companies think about the future. And while it might be fun to gawk at, say, Motorola patenting a lie-detecting throat tattoo, it’s also important to see through the eye-catching headlines and to the bigger issue here: Patents can be weapons and signals. They can spur innovation, as well as crush it.

Let’s start with the anatomy of a patent. Patents have many elements—the abstract, a summary, a background section, illustrations, and a section called “claims.” It’s crucial to know that the thing that matters most in a patent isn’t the abstract, or the title, or the illustrations. It’s the claims, where the patent filer has to list all the new, innovative things that her patent does and why she in fact deserves government protection for her idea. It’s the claims that matter over everything else.

[…]

For a long time, companies didn’t really worry about the PR that patents might generate. Mostly because nobody was looking. But now, journalists are using patents as a window into a company’s psyche, and not always in a way that makes these companies look good.

So why patent something that could get you raked across the internet coals? In many cases, when a company files for a patent, it has no idea whether it’s actually going to use the invention. Often, patents are filed as early as possible in an idea’s life span. Which means that at the moment of filing, nobody really knows where a field might go or what the market might be for something. So companies will patent as many ideas as they can at the early stages and then pick and choose which ones actually make sense for their business as time goes by.

[…]

In some situations, companies file for patents to blanket the field—like dogs peeing on every bush just in case. Many patents are defensive, a way to keep your competitors from developing something more than a way to make sure you can develop that thing. Will Amazon ever make a delivery blimp? Probably not, but now none of its competitors can. (Amazon seems to be a leader in these patent oddities. Its portfolio also includes a flying warehouse, self-destructing drones, an underwater warehouse, and a drone tunnel.)

[…]

David Stein, a patent attorney, says that he sees this at companies he works with. He tells me that once he was in a meeting with inventors about something they wanted to patent, and he asked one of his standard questions to help him prepare the patent: What products will this invention go into? “And they said, ‘Oh, it won’t.’ ” The team that had invented this thing had been disbanded, and the company had moved to a different solution. But they had gone far enough with the patent application that they might as well keep going, if only to use the patent in the future to keep their competitors from gaining an advantage. (It’s almost impossible to know how many patents wind up being “useful” to a company or turn up in actual products.)

As long as you have a budget for it (and patents aren’t cheap—filing for one can easily cost more than $10,000 all told), there’s an incentive for companies to amass as many as they can. Any reporter can tell you that companies love to boast about the number of patents they have, as if it’s some kind of quantitative measure of brilliance. (This makes about as much sense as boasting about how many lines of code you’ve written—it doesn’t really matter how much you’ve got, it matters if it actually works.) “The number of patents a company is filing has more to do with the patent budget than with the amount they’re actually investing in research,” says Lisa Larrimore Ouellette, a professor at Stanford Law School.

[…]

This patent arm wrestling doesn’t just provide low-hanging fruit to reporters. It also affects business dealings. Let’s say you have two companies that want to make some kind of business deal, Charles Duan, a patent expert at the R Street Institute, says. One of their key negotiation points might be patents. If two giant companies want to cut a deal that involves their patent portfolios, nobody is going to go through and analyze every one of those patents to make sure they’re actually useful or original, Duan says, since analyzing a single patent thoroughly can cost thousands of dollars in legal fees. So instead of actually figuring out who has the more valuable patents, “the [company] with more patents ends up getting more. I’m not sure there’s honestly much more to it.”

Several people I spoke with for this story described patent strategy as “an arms race” in which businesses all want to amass as many patents as they can to protect themselves and bolster their position in these negotiations. “There’s not that many companies that are willing to engage in unilateral disarmament,”

[…]

While disarmament might be unlikely, many companies have chosen not to engage in the patent warfare at all. In fact, companies often don’t patent technologies they’re most interested in. A patent necessarily lays out how your product works, information that not all companies want to divulge. “We have essentially no patents in SpaceX,” Elon Musk told Chris Anderson at Wired. “Our primary long-term competition is in China. If we published patents, it would be farcical, because the Chinese would just use them as a recipe book.”

[…]

In most cases, once the inventors and engineers hand over their ideas and answer some questions, it’s the lawyer’s job to build those things out into an actual patent. And here is where a lot of the weirdness actually enters the picture, because the lawyer essentially has to get creative. “You dress up science fiction with words like ‘means for processing’ or ‘data storage device,’ ” says Mullin.

Even the actual language of the patents themselves can be misleading. It turns out you actually can write fan fiction about your own invention in a patent. Patent applications can include what are called “prophetic examples,” which are descriptions of how the patent might work and how you might test it. Those prophetic examples can be as specific as you want, despite being completely fictional. Patents can legally describe a “46-year-old woman” who never existed and say that her “blood pressure is reduced within three hours” when that never actually happened. The only rule about prophetic examples is that they cannot be written in the past tense. Which means that when you’re reading a patent, the examples written in the present tense could be real or completely made up. There’s no way to know.

If this sounds confusing, it is, and not just to journalists trying to wade through these documents. Ouellette, who published a paper in Science about this problem recently, admitted that even she wouldn’t necessarily be able to tell whether experiments described in a patent had actually been conducted.

Some people might argue that these kinds of speculative patents are harmless fun, the result of a Kafkaesque kaleidoscope of capitalism, competition, and bureaucracy. But it’s worth thinking about how they can be misused, says Mullin. Companies that are issued vague patents can go after smaller entities and try to extract money from them. “It’s like beating your competitor over the head with a piece of science fiction you wrote,” he says.

Plus, everyday people can be misled about just how much to trust a company based on its patents. One study found that out of 100 patents cited in scientific articles or books that used only prophetic examples (in other words, had no actual data or evidence in them), 99 were inaccurately described as having been based on real data.

[…]

Stein says that recently he’s had companies bail on patents because they might be perceived as creepy. In fact, in one case, Stein says that the company even refiled a patent to avoid a PR headache. As distrust of technology corporations mounts, the way we read patents has changed. “Everybody involved in the patent process is a technologist. … We don’t tend to step back and think, this could be perceived as something else by people who don’t trust us.” But people are increasingly unwilling to give massive tech companies the benefit of the doubt. This is why Google’s patent for a “Gaze tracking system” got pushback—do you really want Google to know exactly what you look at and for how long?

[…]

There is still real value in reading the patents that companies apply for—not because doing so will necessarily tell you what they’re actually going to make, but because they tell you what problems the company is trying to solve. “They’re indicative of what’s on the engineer’s mind,” says Duan. “They’re not going to make the cage, but it does tell you that they’re worried about worker safety.” Spotify probably won’t make its automatic parking finder, so you don’t have to pause your music in a parking garage while you hunt for a spot. But it does want to figure out how to reduce interruptions in your music consumption. So go forth and read patents. Just remember that they’re often equal parts real invention and sci-fi.

Source: Why do tech companies file so many weird patents?

That science fiction concepts can be patented is news to me. So you can whack companies around with patents on things you thought up but never implemented. Sounds like a really good idea. Not.

Complex quantum teleportation achieved for the first time

Researchers from the Austrian Academy of Sciences and the University of Vienna have experimentally demonstrated what was previously only a theoretical possibility. Together with quantum physicists from the University of Science and Technology of China, they have succeeded in teleporting complex high-dimensional quantum states. The research teams report this international first in the journal Physical Review Letters.

In their study, the researchers teleported the quantum state of one photon (light particle) to another, distant one. Previously, only two-level states (“qubits”) had been transmitted, i.e., information with values “0” or “1”. However, the scientists succeeded in teleporting a three-level state, a so-called “qutrit”. In quantum physics, unlike in classical computer science, “0” and “1” are not an ‘either/or’ – both simultaneously, or anything in between, is also possible. The Austrian-Chinese team has now demonstrated this in practice with a third possibility, “2”.

[…]

The quantum state to be teleported is encoded in the possible paths a photon can take. One can picture these paths as three optical fibers. Most interestingly, in quantum physics a single photon can also be located in all three optical fibers at the same time. To teleport this three-dimensional quantum state, the researchers used a new experimental method. The core of quantum teleportation is the so-called Bell measurement. It is based on a multiport beam splitter, which directs photons through several inputs and outputs and connects all optical fibers together. In addition, the scientists used auxiliary photons—these are also sent into the multiport beam splitter and can interfere with the other photons.

Through clever selection of certain interference patterns, the quantum information can be transferred to another photon far from the input photon, without the two ever physically interacting. The experimental concept is not limited to three dimensions, but can in principle be extended to any number of dimensions, as Erhard emphasizes.

Higher information capacities for quantum computers

With this, the international research team has also made an important step towards practical applications such as a future quantum internet, since high-dimensional quantum systems can transport larger amounts of information than qubits. “This result could help to connect quantum computers with information capacities beyond qubits”, says Anton Zeilinger, quantum physicist at the Austrian Academy of Sciences and the University of Vienna, about the innovative potential of the new method.
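The capacity gain is easy to quantify: an ideal d-level system can carry log₂(d) classical bits, so a qutrit carries about 1.58 bits where a qubit carries one. A quick illustration:

```python
import math

def capacity_bits(d):
    # Classical bits one ideal d-level quantum system can carry (the Holevo bound)
    return math.log2(d)

print(round(capacity_bits(2), 3))  # 1.0   -- qubit
print(round(capacity_bits(3), 3))  # 1.585 -- qutrit
```

The advantage compounds: n qutrits span 3ⁿ states versus 2ⁿ for n qubits.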

[…]

In future work, the researchers will focus on how to extend the newly gained knowledge to enable teleportation of the entire quantum state of a single photon or atom.

Source: Complex quantum teleportation achieved for the first time

Quantum radar has been demonstrated for the first time – MIT Technology Review

This is thanks to the work of Shabir Barzanjeh at the Institute of Science and Technology Austria and a few colleagues. This team has used entangled microwaves to create the world’s first quantum radar. Their device, which can detect objects at a distance using only a few photons, raises the prospect of stealthy radar systems that emit little detectable electromagnetic radiation.

The device is simple in essence. The researchers create pairs of entangled microwave photons using a superconducting device called a Josephson parametric converter. They beam the first photon, called the signal photon, toward the object of interest and listen for the reflection.

In the meantime, they store the second photon, called the idler photon. When the reflection arrives, it interferes with this idler photon, creating a signature that reveals how far the signal photon has traveled. Voila—quantum radar!

This technique has some important advantages over conventional radar. Ordinary radar works in a similar way but fails at low power levels that involve small numbers of microwave photons. That’s because hot objects in the environment emit microwaves of their own.

In a room temperature environment, this amounts to a background of around 1,000 microwave photons at any instant, and these overwhelm the returning echo. This is why radar systems use powerful transmitters.

Entangled photons overcome this problem. The signal and idler photons are so similar that it is easy to filter out the effects of other photons. So it becomes straightforward to detect the signal photon when it returns.

Of course, entanglement is a fragile property of the quantum world, and the process of reflection destroys it.  Nevertheless, the correlation between the signal and idler photons is still strong enough to distinguish them from background noise.

[…]

A big advantage is the low levels of electromagnetic radiation required. “Our experiment shows the potential as a non-invasive scanning method for biomedical applications, e.g., for imaging of human tissues or non-destructive rotational spectroscopy of proteins,” say Barzanjeh and co.

Then there is the obvious application as a stealthy radar that is difficult for adversaries to detect over background noise. The researchers say it could be useful for short-range low-power radar for security applications in closed and populated environments.

Source: Quantum radar has been demonstrated for the first time – MIT Technology Review

Russia’s floating nuclear plant sails to its destination

Russia’s first floating nuclear power plant sailed Friday to its destination on the nation’s Arctic coast, a project that environmentalists have criticized as unsafe.

The Akademik Lomonosov is a 140-meter (459-foot) long towed platform that carries two 35-megawatt nuclear reactors. On Friday, it set out from the Arctic port of Murmansk on the Kola Peninsula on a three-week journey to Pevek on the Chukotka Peninsula more than 4,900 kilometers (about 2,650 nautical miles) east.

Its purpose is to provide power for the area, replacing the Bilibino nuclear power plant on Chukotka that is being decommissioned.

The Russian project is the first floating nuclear power plant since the U.S. MH-1A, a much smaller reactor that supplied the Panama Canal with power from 1968-1975.

Environmentalists have criticized the project as inherently dangerous and a threat to the pristine Arctic region. Russia’s state nuclear corporation Rosatom has dismissed those concerns, insisting that the floating nuclear plant is safe to operate.

Rosatom director, Alexei Likhachev, said his corporation hopes to sell floating reactors to foreign markets. Russian officials have previously mentioned Indonesia and Sudan among potential export customers.

Source: Russia’s floating nuclear plant sails to its destination

Scientists bioprint living tissue in a matter of seconds

Scientists at EPFL and University Medical Center Utrecht have developed an optical system that can bioprint complex, highly viable living tissue in “just a few seconds.” It would represent a breakthrough compared to the clunky, layer-based processes of today.

The approach, volumetric bioprinting, forms tissue by projecting a laser down a spinning tube containing hydrogel full of stem cells. You can shape the resulting tissue simply by focusing the laser’s energy on specific locations to solidify them, creating a useful 3D shape within seconds. After that, it’s a matter of introducing endothelial cells to add vessels to the tissue.

The resulting tissues are currently just a few inches across. That’s still enough to be “clinically useful,” EPFL said, and the technique has already been used to print heart-like valves, a complex femur part and a meniscus. It can create interlocking structures, too.

While this definitely isn’t ready for real-world use, the applications are fairly self-evident. EPFL imagines a new wave of “personalized, functional” organs produced at “unprecedented speed.” This could be helpful for implants and repairs, and might greatly reduce the temptation to use animal testing — you’d just need to produce organs to simulate effects. This might be as much an ethics breakthrough as it is a technical one.

Source: Scientists bioprint living tissue in a matter of seconds

Uber And Lyft Take A Lot More From Drivers Than They Say

Ultimately, the rider paid $65 for the half-hour trip, according to a receipt viewed by Jalopnik. But Dave made only $15 (the fares have been rounded to anonymize the transaction).

Uber kept the rest, meaning the multibillion-dollar corporation kept more than 75 percent of the fare, more than triple the average so-called “take-rate” it claims in financial reports with the Securities and Exchange Commission.

Had he known in advance how much he would have been paid for the ride relative to what the rider paid, Dave said he never would have accepted the fare.

“This is robbery,” Dave told Jalopnik over email. “This business is out of control.”

Dave is far from alone in his frustrations. Uber and Lyft have slashed driver pay in recent years and now take a larger portion of each fare, far larger than the companies publicly report, based on data collected by Jalopnik. And the new Surge or Prime Time pricing structure widely adopted by both companies undermines a key legal argument both companies make to classify drivers as independent contractors.

Jalopnik asked drivers to send us fare receipts showing a breakdown of how much the rider paid for the trip, how much of that fare Uber or Lyft kept, and what the driver earned.

In total, we received 14,756 fares. These came from two sources: the web form where drivers could submit fares individually, and via email where some drivers sent us all their fares from a given time period.

Of all the fares Jalopnik examined, Uber kept 35 percent of the revenue, while Lyft kept 38 percent. These numbers are roughly in line with a previous study by Lawrence Mishel at the Economic Policy Institute which concluded Uber’s take rate to be roughly one-third, or 33 percent.

Of the drivers who emailed us breakdowns for all of their fares in a given time period—ranging from a few months to more than a year—Uber kept, on average, 29.6 percent. Lyft pocketed 34.5 percent.

Those take rates are 10.6 and 8.5 percentage points higher than Uber and Lyft’s publicly reported figures, respectively.
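The arithmetic behind these figures is straightforward. Using Dave's trip from the top of the piece:

```python
def take_rate(rider_paid, driver_earned):
    # Fraction of the rider's fare the platform keeps
    return (rider_paid - driver_earned) / rider_paid

rate = take_rate(rider_paid=65, driver_earned=15)
print(f"{rate:.1%}")  # 76.9% -- "more than 75 percent" of the fare
```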

In regulatory filings, Uber has reported its so-called “take-rate” is actually going down, from 21.7 percent in 2018 to 19 percent in the second quarter of 2019 (Uber declined to offer U.S.-only figures for a more direct comparison to Jalopnik’s findings).

Source: Uber And Lyft Take A Lot More From Drivers Than They Say

Johnson & Johnson Ordered to Pay $572 Million in Landmark Opioid Trial

A judge in Oklahoma on Monday ruled that Johnson & Johnson had intentionally played down the dangers and oversold the benefits of opioids, and ordered it to pay the state $572 million in the first trial of a drug manufacturer for the destruction wrought by prescription painkillers.

The amount fell far short of the $17 billion judgment that Oklahoma had sought to pay for addiction treatment, drug courts and other services it said it would need over the next 20 years to repair the damage done by the opioid epidemic.

Still, the decision, by Judge Thad Balkman of Cleveland County District Court, heartened lawyers representing states and cities — plaintiffs in many of the more than 2,000 opioid lawsuits pending across the country — who are pursuing a legal strategy similar to Oklahoma’s. His finding that Johnson & Johnson had breached the state’s “public nuisance” law was a significant aspect of his order.

Judge Balkman was harsh in his assessment of a company that has built its reputation as a responsible and family-friendly maker of soap, baby powder and Band-Aids.

In his ruling, he wrote that Johnson & Johnson had promulgated “false, misleading, and dangerous marketing campaigns” that had “caused exponentially increasing rates of addiction, overdose deaths” and babies born exposed to opioids.

Source: Johnson & Johnson Ordered to Pay $572 Million in Landmark Opioid Trial – The New York Times