The Linkielist

Linking ideas with the world


AI has cracked a key mathematical puzzle for understanding our world – Partial Differential Equations

Unless you’re a physicist or an engineer, there really isn’t much reason for you to know about partial differential equations. I know. After years of poring over them in undergrad while studying mechanical engineering, I’ve never used them since in the real world.

But partial differential equations, or PDEs, are also kind of magical. They’re a category of math equations that are really good at describing change over space and time, and thus very handy for describing the physical phenomena in our universe. They can be used to model everything from planetary orbits to plate tectonics to the air turbulence that disturbs a flight, which in turn allows us to do practical things like predict seismic activity and design safe planes.

The catch is PDEs are notoriously hard to solve. And here, the meaning of “solve” is perhaps best illustrated by an example. Say you are trying to simulate air turbulence to test a new plane design. There is a known PDE called Navier-Stokes that is used to describe the motion of any fluid. “Solving” Navier-Stokes allows you to take a snapshot of the air’s motion (a.k.a. wind conditions) at any point in time and model how it will continue to move, or how it was moving before.

These calculations are highly complex and computationally intensive, which is why disciplines that use a lot of PDEs often rely on supercomputers to do the math. It’s also why the AI field has taken a special interest in these equations. If we could use deep learning to speed up the process of solving them, it could do a whole lot of good for scientific inquiry and engineering.

Now researchers at Caltech have introduced a new deep-learning technique for solving PDEs that is dramatically more accurate than deep-learning methods developed previously. It’s also much more generalizable, capable of solving entire families of PDEs—such as the Navier-Stokes equation for any type of fluid—without needing retraining. Finally, it is 1,000 times faster than traditional mathematical formulas, which would ease our reliance on supercomputers and increase our computational capacity to model even bigger problems. That’s right. Bring it on.

Hammer time

Before we dive into how the researchers did this, let’s first appreciate the results. In the gif below, you can see an impressive demonstration. The first column shows two snapshots of a fluid’s motion; the second shows how the fluid continued to move in real life; and the third shows how the neural network predicted the fluid would move. It basically looks identical to the second.

The paper has gotten a lot of buzz on Twitter, and even a shout-out from rapper MC Hammer. Yes, really.

[…]

Neural networks are usually trained to approximate functions between inputs and outputs defined in Euclidean space, your classic graph with x, y, and z axes. But this time, the researchers decided to define the inputs and outputs in Fourier space, which is a special type of graph for plotting wave frequencies. The intuition that they drew upon from work in other fields is that something like the motion of air can actually be described as a combination of wave frequencies, says Anima Anandkumar, a Caltech professor who oversaw the research alongside her colleagues, professors Andrew Stuart and Kaushik Bhattacharya. The general direction of the wind at a macro level is like a low frequency with very long, lethargic waves, while the little eddies that form at the micro level are like high frequencies with very short and rapid ones.

Why does this matter? Because it’s far easier to approximate a Fourier function in Fourier space than to wrangle with PDEs in Euclidean space, which greatly simplifies the neural network’s job. Cue major accuracy and efficiency gains: in addition to its huge speed advantage over traditional methods, their technique achieves a 30% lower error rate when solving Navier-Stokes than previous deep-learning methods.
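
For the curious, here is a minimal sketch of what a single Fourier-space layer can look like: FFT the input, scale a handful of low-frequency modes with learned complex weights, and transform back. This is not the Caltech team's actual architecture; the function name, the number of retained modes, and the NumPy implementation are all illustrative assumptions.

```python
import numpy as np

def spectral_layer_1d(u, weights, n_modes=16):
    """Toy Fourier-space layer: FFT the signal, multiply the lowest
    n_modes frequency coefficients by learned complex weights, zero the
    rest, and transform back. Real Fourier Neural Operators stack many
    such layers with nonlinearities and learn the weights by training."""
    u_hat = np.fft.rfft(u)                     # physical -> Fourier space
    out_hat = np.zeros_like(u_hat)
    out_hat[:n_modes] = u_hat[:n_modes] * weights[:n_modes]
    return np.fft.irfft(out_hat, n=len(u))     # Fourier -> physical space

# Illustrative use: 256 sample points, random stand-in "learned" weights.
u = np.sin(np.linspace(0, 2 * np.pi, 256))
weights = np.random.randn(16) + 1j * np.random.randn(16)
v = spectral_layer_1d(u, weights)
```

Working on a truncated set of Fourier modes is also what makes the approach resolution-independent: the same learned weights can be applied to inputs sampled on finer or coarser grids.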

[…]

Source: AI has cracked a key mathematical puzzle for understanding our world | MIT Technology Review

Unusual molecule found in atmosphere on Saturn’s moon Titan, precursor to life

Saturn’s largest moon, Titan, is the only moon in our solar system that has a thick atmosphere. It’s four times denser than Earth’s. And now, scientists have discovered a molecule in it that has never been found in any other atmosphere.

The molecule is called cyclopropenylidene, or C3H2, and it’s made of carbon and hydrogen. This simple carbon-based molecule could be a precursor that contributes to chemical reactions that may create complex compounds. And those compounds could be the basis for potential life on Titan.

The molecule was first spotted by researchers using the Atacama Large Millimeter/submillimeter Array of telescopes in Chile. This radio telescope observatory captures a range of light signatures, which revealed the molecule among the unique chemistry of Titan’s atmosphere.

The study was published earlier this month in The Astronomical Journal.

“When I realized I was looking at cyclopropenylidene, my first thought was, ‘Well, this is really unexpected,'” said lead study author Conor Nixon, planetary scientist at NASA’s Goddard Space Flight Center in Greenbelt, Maryland, in a statement.

Cyclopropenylidene has been detected elsewhere across our galaxy, mainly in molecular clouds of gas and dust including the Taurus Molecular Cloud. This cloud, where stars are born, is located 400 light-years away in the Taurus constellation. In these clouds, temperatures are too cold for many chemical reactions to occur.

Until now, then, the molecule had been seen only in such interstellar clouds; Titan’s atmosphere is the first other place it has been detected.

But finding it in an atmosphere is a different story. This molecule can react easily when it collides with others to form something new. The researchers were likely able to spot it because they were looking through the upper layers of Titan’s atmosphere, where the molecule has fewer gases it can interact with.

“Titan is unique in our solar system,” Nixon said. “It has proved to be a treasure trove of new molecules.”

Cyclopropenylidene is the second cyclic or closed-loop molecule detected at Titan; the first was benzene in 2003. Benzene is an organic chemical compound composed of carbon and hydrogen atoms. On Earth, benzene is found in crude oil, is used as an industrial chemical and occurs naturally in the wake of volcanoes and forest fires.

Cyclic molecules are crucial because they form the backbone rings for the nucleobases of DNA, according to NASA.
[…]

Source: Unusual molecule found in atmosphere on Saturn’s moon Titan – CNN

Artificial intelligence model detects asymptomatic Covid-19 infections through cellphone-recorded coughs

MIT researchers have now found that people who are asymptomatic may differ from healthy individuals in the way that they cough. These differences are not decipherable to the human ear. But it turns out that they can be picked up by artificial intelligence.

In a paper published recently in the IEEE Journal of Engineering in Medicine and Biology, the team reports on an AI model that distinguishes asymptomatic people from healthy individuals through forced-cough recordings, which people voluntarily submitted through web browsers and devices such as cellphones and laptops.

The researchers trained the model on tens of thousands of samples of coughs, as well as spoken words. When they fed the model new cough recordings, it accurately identified 98.5 percent of coughs from people who were confirmed to have Covid-19, including 100 percent of coughs from asymptomatics — who reported they did not have symptoms but had tested positive for the virus.

The team is working on incorporating the model into a user-friendly app, which if FDA-approved and adopted on a large scale could potentially be a free, convenient, noninvasive prescreening tool to identify people who are likely to be asymptomatic for Covid-19. A user could log in daily, cough into their phone, and instantly get information on whether they might be infected and therefore should confirm with a formal test.

“The effective implementation of this group diagnostic tool could diminish the spread of the pandemic if everyone uses it before going to a classroom, a factory, or a restaurant,” says co-author Brian Subirana, a research scientist in MIT’s Auto-ID Laboratory.

Subirana’s co-authors are Jordi Laguarta and Ferran Hueto, of MIT’s Auto-ID Laboratory.

Vocal sentiments

Prior to the pandemic’s onset, research groups already had been training algorithms on cellphone recordings of coughs to accurately diagnose conditions such as pneumonia and asthma. In similar fashion, the MIT team was developing AI models to analyze forced-cough recordings to see if they could detect signs of Alzheimer’s, a disease associated with not only memory decline but also neuromuscular degradation such as weakened vocal cords.

They first trained a general machine-learning algorithm, or neural network, known as ResNet50, to discriminate sounds associated with different degrees of vocal cord strength. Studies have shown that the quality of the sound “mmmm” can be an indication of how weak or strong a person’s vocal cords are. Subirana trained the neural network on an audiobook dataset with more than 1,000 hours of speech, to pick out the word “them” from other words like “the” and “then.”
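
As a rough idea of what "training ResNet50 on speech" can look like in practice, here is a hedged sketch: it assumes the audio is converted to mel-spectrogram "images" and that only the final layer of a stock torchvision ResNet50 is swapped out for the new task. The input pipeline and class count are assumptions, not details from the paper.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

# Assumed pipeline (not from the paper): audio clips are converted into
# 3-channel mel-spectrogram "images" so a stock ResNet50 can be reused;
# only its final fully connected layer is replaced for the new task.
model = resnet50()                                # pretrained weights would normally be loaded here
model.fc = nn.Linear(model.fc.in_features, 2)     # e.g. weak vs. strong vocal cords

spectrograms = torch.randn(8, 3, 224, 224)        # batch of 8 placeholder spectrograms
logits = model(spectrograms)                      # shape: (8, 2)
```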

The team trained a second neural network to distinguish emotional states evident in speech, because Alzheimer’s patients — and people with neurological decline more generally — have been shown to display certain sentiments such as frustration, or having a flat affect, more frequently than they express happiness or calm. The researchers developed a sentiment speech classifier model by training it on a large dataset of actors intonating emotional states, such as neutral, calm, happy, and sad.

The researchers then trained a third neural network on a database of coughs in order to discern changes in lung and respiratory performance.

Finally, the team combined all three models, and overlaid an algorithm to detect muscular degradation. The algorithm does so by essentially simulating an audio mask, or layer of noise, and distinguishing strong coughs — those that can be heard over the noise — from weaker ones.
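
The paper's exact wiring isn't given in this excerpt, but conceptually the combination step can be pictured as three feature extractors feeding one classifier head. The class below is a hypothetical sketch under that assumption; the audio-mask step is only mentioned in a comment because its details aren't public here.

```python
import torch
import torch.nn as nn

class CombinedCoughModel(nn.Module):
    """Hypothetical combination of three feature extractors (vocal-cord
    strength, sentiment, lung/respiratory) whose embeddings are concatenated
    and passed to a small classifier head. The audio-mask / muscular-degradation
    step described in the article is not reproduced here."""
    def __init__(self, vocal_net, sentiment_net, lung_net, feat_dim=128):
        super().__init__()
        self.branches = nn.ModuleList([vocal_net, sentiment_net, lung_net])
        self.head = nn.Sequential(
            nn.Linear(3 * feat_dim, 64), nn.ReLU(), nn.Linear(64, 2))

    def forward(self, spectrogram):
        feats = [branch(spectrogram) for branch in self.branches]
        return self.head(torch.cat(feats, dim=1))

# Illustrative use with placeholder branches that map any input to 128 features.
make_branch = lambda: nn.Sequential(nn.Flatten(), nn.LazyLinear(128))
model = CombinedCoughModel(make_branch(), make_branch(), make_branch())
logits = model(torch.randn(4, 3, 224, 224))       # shape: (4, 2)
```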

With their new AI framework, the team fed in audio recordings, including of Alzheimer’s patients, and found it could identify the Alzheimer’s samples better than existing models. The results showed that, together, vocal cord strength, sentiment, lung and respiratory performance, and muscular degradation were effective biomarkers for diagnosing the disease.

[…]

Surprisingly, as the researchers write in their paper, their efforts have revealed “a striking similarity between Alzheimer’s and Covid discrimination.”

[…]

Source: Artificial intelligence model detects asymptomatic Covid-19 infections through cellphone-recorded coughs

Daycares in Finland Built a ‘Forest Floor’, And It Changed Children’s Immune Systems

Playing through the greenery and litter of a mini forest’s undergrowth for just one month may be enough to change a child’s immune system, according to a small new experiment.

When daycare workers in Finland rolled out a lawn, planted forest undergrowth such as dwarf heather and blueberries, and allowed children to care for crops in planter boxes, the diversity of microbes in the guts and on the skin of young kids appeared healthier in a very short space of time.

Compared to other city kids who play in standard urban daycares with yards of pavement, tile and gravel, 3-, 4-, and 5-year-olds at these greened-up daycare centres in Finland showed increased T-cells and other important immune markers in their blood within 28 days.

“We also found that the intestinal microbiota of children who received greenery was similar to the intestinal microbiota of children visiting the forest every day,” says environmental scientist Marja Roslund from the University of Helsinki.

One daycare before (left) and after introducing grass and planters (right). (University of Helsinki)

Prior research has shown early exposure to green space is somehow linked to a well-functioning immune system, but it’s still not clear whether that relationship is causal or not.

The experiment in Finland is the first to explicitly manipulate a child’s urban environment and then test for changes in their microbiome and, in turn, their immune system.

[…]

The results aren’t conclusive and they will need to be verified among larger studies around the world. Still, the benefits of green spaces appear to go beyond our immune systems.

Research shows getting outside is also good for a child’s eyesight, and being in nature as a kid is linked to better mental health. Some recent studies have even shown green spaces are linked to structural changes in the brains of children.

What’s driving these incredible results is not yet clear. It could be linked to changes to the immune system, or something about breathing healthy air, soaking in the sun, exercising more or having greater peace of mind.

Given the complexities of the real world, it’s really hard to control for all the environmental factors that impact our health in studies.

While rural children tend to have fewer cases of asthma and allergies, the available literature on the link between green spaces and these immune disorders is inconsistent.

The current research has a small sample size, only found a correlation, and can’t account for what children were doing outside daycare hours, but the positive changes seen are enough for scientists in Finland to offer some advice.

[…]

Bonding with nature as a kid is also good for the future of our planet’s ecosystems. Studies show kids who spend time outdoors are more likely to want to become environmentalists as adults, and in a rapidly changing world, that’s more important than ever.

Just make sure everyone’s up to date on their tetanus vaccinations, Sinkkonen advises.

The study was published in Science Advances.

Source: Daycares in Finland Built a ‘Forest Floor’, And It Changed Children’s Immune Systems

Brave browser first to nix CNAME deception, the sneaky DNS trick used by marketers to duck privacy controls

The Brave web browser will soon block CNAME cloaking, a technique used by online marketers to defy privacy controls designed to prevent the use of third-party cookies.

The browser security model makes a distinction between first-party domains – those being visited – and third-party domains – those supplying things like image assets or tracking code to the visited site. Many of the online privacy abuses over the years have come from third-party resources like scripts and cookies, which is why third-party cookies are now blocked by default in Brave, Firefox, Safari, and Tor Browser.

Microsoft Edge, meanwhile, has a tiered scheme that defaults to a “Balanced” setting, which blocks some third-party cookies. Google Chrome has implemented its SameSite cookie scheme as a prelude to its planned 2022 phase-out of third-party cookies, maybe.

While Google tries to win support for its various Privacy Sandbox proposals, which aim to provide marketers with ostensibly privacy-preserving alternatives to increasingly shunned third-party cookies, marketers have been relying on CNAME shenanigans to pass their third-party trackers off as first-party resources.

The developers behind open-source content blocking extension uBlock Origin implemented a defense against CNAME-based tracking in November and now Brave has done so as well.

CNAME by name, cookie by nature

In a blog post on Tuesday, Anton Lazarev, research engineer at Brave Software, and senior privacy researcher Peter Snyder, explain that online tracking scripts may use canonical name DNS records, known as CNAMEs, to make associated third-party tracking domains look like they’re part of the first-party websites actually being visited.

They point to the site https://mathon.fr as an example, noting that without CNAME uncloaking, Brave blocks six requests for tracking scripts served by ad companies like Google, Facebook, Criteo, Sirdan, and Trustpilot.

But the page also makes four requests via a script hosted at a randomized path under the first-party subdomain 16ao.mathon.fr.

“Inspection outside of the browser reveals that 16ao.mathon.fr actually has a canonical name of et5.eulerian.net, meaning it’s a third-party script served by Eulerian,” observe Lazarev and Snyder.
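
A minimal sketch of the kind of lookup a CNAME-uncloaking blocker performs, written with the dnspython package. The blocklist entry and function name are illustrative, and a real implementation (like Brave's) resolves inside the browser and handles longer CNAME chains.

```python
import dns.resolver   # pip install dnspython

TRACKER_DOMAINS = {"eulerian.net"}   # illustrative blocklist entry

def uncloaked_tracker(hostname):
    """Return the CNAME target if it points into a known tracking domain,
    otherwise None. Only a single CNAME hop is followed in this sketch."""
    try:
        answer = dns.resolver.resolve(hostname, "CNAME")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return None
    target = str(answer[0].target).rstrip(".")
    if any(target == d or target.endswith("." + d) for d in TRACKER_DOMAINS):
        return target
    return None

# Per the article, uncloaked_tracker("16ao.mathon.fr") would reveal "et5.eulerian.net".
```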

When Brave 1.17 ships next month (currently available as a developer build), it will be able to uncloak the CNAME deception and block the Eulerian script.

Other browser vendors are planning related defenses. Mozilla has been working on a fix in Firefox since last November. And in August, Apple’s Safari WebKit team proposed a way to prevent CNAME cloaking from being used to bypass the seven-day cookie lifetime imposed by WebKit’s Intelligent Tracking Protection system.

Source: Brave browser first to nix CNAME deception, the sneaky DNS trick used by marketers to duck privacy controls • The Register

Physical Security Blueprints of Many Companies Leaked in Hack of Swedish Firm Gunnebo

In March 2020, KrebsOnSecurity alerted Swedish security giant Gunnebo Group that hackers had broken into its network and sold the access to a criminal group which specializes in deploying ransomware. In August, Gunnebo said it had successfully thwarted a ransomware attack, but this week it emerged that the intruders stole and published online tens of thousands of sensitive documents — including schematics of client bank vaults and surveillance systems.

The Gunnebo Group is a Swedish multinational company that provides physical security to a variety of customers globally, including banks, government agencies, airports, casinos, jewelry stores, tax agencies and even nuclear power plants. The company has operations in 25 countries, more than 4,000 employees, and billions in revenue annually.

Acting on a tip from Milwaukee, Wis.-based cyber intelligence firm Hold Security, KrebsOnSecurity in March told Gunnebo about a financial transaction between a malicious hacker and a cybercriminal group which specializes in deploying ransomware. That transaction included credentials to a Remote Desktop Protocol (RDP) account apparently set up by a Gunnebo Group employee who wished to access the company’s internal network remotely.

[…]

Larsson quotes Gunnebo CEO Stefan Syrén saying the company never considered paying the ransom the attackers demanded in exchange for not publishing its internal documents. What’s more, Syrén seemed to downplay the severity of the exposure.

“I understand that you can see drawings as sensitive, but we do not consider them as sensitive automatically,” the CEO reportedly said. “When it comes to cameras in a public environment, for example, half the point is that they should be visible, therefore a drawing with camera placements in itself is not very sensitive.”

It remains unclear whether the stolen RDP credentials were a factor in this incident. But the password to the Gunnebo RDP account — “password01” — suggests the security of its IT systems may have been lacking in other areas as well.

[…]

Source: Security Blueprints of Many Companies Leaked in Hack of Swedish Firm Gunnebo — Krebs on Security

In a first, researchers extract secret key used to encrypt Intel CPU code

Researchers have extracted the secret key that encrypts updates to an assortment of Intel CPUs, a feat that could have wide-ranging consequences for the way the chips are used and, possibly, the way they’re secured.

The key makes it possible to decrypt the microcode updates Intel provides to fix security vulnerabilities and other types of bugs. Having a decrypted copy of an update may allow hackers to reverse engineer it and learn precisely how to exploit the hole it’s patching. The key may also allow parties other than Intel—say a malicious hacker or a hobbyist—to update chips with their own microcode, although that customized version wouldn’t survive a reboot.

“At the moment, it is quite difficult to assess the security impact,” independent researcher Maxim Goryachy said in a direct message. “But in any case, this is the first time in the history of Intel processors when you can execute your microcode inside and analyze the updates.” Goryachy and two other researchers—Dmitry Sklyarov and Mark Ermolov, both with security firm Positive Technologies—worked jointly on the project.

The key can be extracted for any chip—be it a Celeron, Pentium, or Atom—that’s based on Intel’s Goldmont architecture.

[…]

Attackers can’t use Chip Red Pill and the decryption key it exposes to remotely hack vulnerable CPUs, at least not without chaining it to other vulnerabilities that are currently unknown. Similarly, attackers can’t use these techniques to infect the supply chain of Goldmont-based devices.

[…]

In theory, it might also be possible to use Chip Red Pill in an evil maid attack, in which someone with fleeting access to a device hacks it. But in either of these cases, the hack would be tethered, meaning it would last only as long as the device was turned on. Once restarted, the chip would return to its normal state. In some cases, the ability to execute arbitrary microcode inside the CPU may also be useful for attacks on cryptography keys, such as those used in trusted platform modules.

“For now, there’s only one but very important consequence: independent analysis of a microcode patch that was impossible until now,” Positive Technologies researcher Mark Ermolov said. “Now, researchers can see how Intel fixes one or another bug/vulnerability. And this is great. The encryption of microcode patches is a kind of security through obscurity.”

Source: In a first, researchers extract secret key used to encrypt Intel CPU code | Ars Technica

Another eBay exec pleads guilty after couple stalked, harassed for daring to criticize the internet tat bazaar – pig corpses involved

Philip Cooke, 55, oversaw eBay’s security operations in Europe and Asia and was a former police captain in Santa Clara, California. He pleaded guilty this week to conspiracy to commit cyberstalking and conspiracy to tamper with witnesses.

Cooke, based in San Jose, was just one of seven employees, including one manager, accused of targeting a married couple living on the other side of the United States, in Massachusetts, because they didn’t like the couple’s criticisms of eBay in their newsletter.

It’s said the team would post aggressive anonymous comments on the couple’s newsletter website, and at some point planned a concerted campaign against the pair including cyberstalking and harassment. Among other things, prosecutors noted, “several of the defendants ordered anonymous and disturbing deliveries to the victims’ home, including a preserved fetal pig, a bloody pig Halloween mask and a book on surviving the loss of a spouse.”

[…]

But it was when the couple noticed they were under surveillance in their own home they finally went to the cops in Natick, where they lived, and officers opened an investigation.

It was Cooke’s behavior at that point that led to the subsequent charge of conspiracy to tamper with a witness: he formulated a plan to give the Natick police a false lead in an effort to prevent them from discovering proof that his team had sent the pig’s head and other items. The eBay employees also deleted digital evidence that showed their involvement, prosecutors said, obstructing an investigation and breaking another law.

[…]

Source: Another eBay exec pleads guilty after couple stalked, harassed for daring to criticize the internet tat bazaar • The Register

NASA Discovers a Rare Metal Asteroid Worth $10,000 Quadrillion

Observations with NASA’s Hubble Space Telescope have given new insight into a rare, heavy and immensely valuable asteroid called “16 Psyche” in the Solar System’s main asteroid belt between Mars and Jupiter.

Asteroid Psyche is located roughly 230 million miles (370 million kilometers) from Earth and measures 140 miles (226 kilometers) across, about the size of West Virginia. What makes it special is that, unlike most asteroids, which are either rocky or icy, Psyche is made almost entirely of metal, just like the core of Earth, according to a study published in the Planetary Science Journal on Monday.

[…]

Given the asteroid’s size, its metal content could be worth $10,000 quadrillion ($10,000,000,000,000,000,000), or about 10,000 times the global economy as of 2019.

[…]

Psyche is the target of the NASA Discovery Mission Psyche, expected to launch in 2022 atop a SpaceX Falcon Heavy rocket. Further facts about the asteroid, including its exact metal content, will hopefully be uncovered when an orbiting probe arrives in early 2026.

[…]

The asteroid is believed to be the dead core left by a planet that failed during its formation early in the Solar System’s life or the result of many violent collisions in its distant past.

“Short of it being the Death Star… one other possibility is that it’s material that formed very near the Sun early in the Solar System,” Elkins-Tanton told Forbes in a May 2017 interview. “I figure we’re either going to go see something that’s really improbable and unique, or something that is completely astonishing.”

Source: NASA Discovers a Rare Metal Asteroid Worth $10,000 Quadrillion | Observer

I’d invest in the NASA mission, but it’s being launched on a SpaceX vehicle, which means that Musk will either send it the wrong direction (like his car) or more likely, it will blow up.

NSA: foreign spies used one of our crypto backdoors – we learnt some lessons but we lost them

It’s said the NSA drew up a report on what it learned after a foreign government exploited a weak encryption scheme, championed by the US spying agency, in Juniper firewall software.

However, curiously enough, the NSA has been unable to find a copy of that report.

On Wednesday, Reuters reporter Joseph Menn published an account of US Senator Ron Wyden’s efforts to determine whether the NSA is still in the business of placing backdoors in US technology products.

Wyden (D-OR) opposes such efforts because, as the Juniper incident demonstrates, they can backfire, thereby harming national security, and because they diminish the appeal of American-made tech products.

But Wyden’s inquiries, as a member of the Senate Intelligence Committee, have been stymied by lack of cooperation from the spy agency and the private sector. In June, Wyden and various colleagues sent a letter to Juniper CEO Rami Rahim asking about “several likely backdoors in its NetScreen line of firewalls.”

Juniper acknowledged in 2015 that “unauthorized code” had been found in ScreenOS, which powers its NetScreen firewalls. It’s been suggested that the code was in place since around 2008.

The Reuters report, citing a previously undisclosed statement to Congress from Juniper, claims that the networking biz acknowledged that “an unnamed national government had converted the mechanism first created by the NSA.”

Wyden staffers in 2018 were told by the NSA that a “lessons learned” report about the incident had been written. But Wyden spokesperson Keith Chu told Reuters that the NSA now claims it can’t find the file. Wyden’s office did not immediately respond to a request for comment.

The reason this malicious code was able to decrypt ScreenOS VPN connections has been attributed to Juniper’s “decision to use the NSA-designed Dual EC Pseudorandom Number Generator.”

[…]

After Snowden’s disclosures about the extent of US surveillance operations in 2013, the NSA is said to have revised its policies for compromising commercial products. Wyden and other lawmakers have tried to learn more about these policies but they’ve been stonewalled, according to Reuters.

[…]

Source: NSA: We’ve learned our lesson after foreign spies used one of our crypto backdoors – but we can’t say how exactly • The Register

And this is why you don’t put out insecure security products, which is exactly what products with a backdoor are. Here’s looking at you, UK and Australia and all the other countries trying to force insecure products on us.

Researchers develop new atomic layer deposition process

A new way to deposit thin layers of atoms as a coating onto a substrate material at near room temperatures has been invented at The University of Alabama in Huntsville (UAH), a part of the University of Alabama System.

UAH postdoctoral research associate Dr. Moonhyung Jang got the idea to use an ultrasonic atomization technology to evaporate chemicals used in atomic layer deposition (ALD) while shopping for a home humidifier.

Dr. Jang works in the laboratory of Dr. Yu Lei, an associate professor in the Department of Chemical Engineering. The pair have published a paper on their invention that has been selected as an editor’s pick in the Journal of Vacuum Science & Technology A.

“ALD is a three-dimensional thin film deposition technique that plays an important role in microelectronics manufacturing, in producing items such as central processing units, memory and hard drives,” says Dr. Lei.

Each ALD cycle deposits a layer a few atoms deep. An ALD process repeats the deposition cycle hundreds or thousands of times. The uniformity of the thin films relies on a surface self-limiting reaction between the chemical vapor and the substrates.
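
Since each cycle adds a fixed, atomically thin increment, the total film thickness is simply growth-per-cycle times cycle count. The numbers below are illustrative, not taken from the paper.

```python
# Back-of-the-envelope ALD thickness: thickness = growth per cycle x number of cycles.
growth_per_cycle_nm = 0.1   # roughly an angstrom-scale layer per cycle (illustrative)
cycles = 1000
print(f"Film thickness ≈ {growth_per_cycle_nm * cycles:.0f} nm")   # ≈ 100 nm
```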

“ALD offers exceptional control of nanometer features while depositing materials uniformly on large silicon wafers for high volume manufacturing,” Dr. Lei says. “It is a key technique to produce powerful and small smart devices.”

[…]

“In the past, many reactive chemicals were considered not suitable for ALD because of their low vapor pressure and because they are thermally unstable,” says Dr. Lei. “Our research found that the ultrasonic atomizer technique enabled evaporating the reactive chemicals at as low as room temperature.”

The UAH scientists’ ultrasound invention makes it possible to use a wide range of reactive chemicals that are thermally unstable and not suitable for direct heating.

“Ultrasonic atomization, as developed by our research group, supplies low vapor pressure precursors because the evaporation of precursors was made through ultrasonic vibrating of the module,” Dr. Lei says.

“Like the household humidifier, ultrasonic atomization generates a mist consisting of saturated vapor and micro-sized droplets,” he says. “The micro-sized droplets continuously evaporate when the mist is delivered to the substrates by a carrier gas.”

The process uses a piezo-electric ultrasonic transducer placed in a liquid chemical precursor. Once started, the transducer starts to vibrate a few hundred thousand times per second and generates a mist of the chemical precursor. The small liquid droplets in the mist are quickly evaporated in the gas manifold under vacuum and mild heat treatment, leaving behind an even coat of the deposition material.

Source: Researchers develop new atomic layer deposition process

Water on the Moon: Research unveils its type and abundance – boosting exploration plans

“Water” has since been detected inside the minerals in lunar rocks. Water ice has also been discovered to be mixed in with lunar dust grains in cold, permanently shadowed regions near the lunar poles.

But scientists haven’t been sure how much of this water is present as “molecular water”—made up of two parts hydrogen and one part oxygen (H2O). Now two new studies published in Nature Astronomy provide an answer, while also giving an idea of how and where to extract it.

Source: Water on the Moon: Research unveils its type and abundance – boosting exploration plans

Palo Alto Networks threatens to sue security startup for comparison review, says it breaks software EULA. 1 EULA? 2 WTF?

Palo Alto Networks has threatened a startup with legal action after the smaller biz published a comparison review of one of its products.

Israel-based Orca Security received a cease-and-desist letter from a lawyer representing Palo Alto after Orca uploaded a series of online videos reviewing one of Palo Alto’s products and comparing it with its own. Orca sees itself as a competitor of Palo Alto Networks (PAN).

“What we expected is that others will also create such materials … but instead we received a letter from Palo Alto’s lawyers claiming we were not allowed to do that,” Orca chief exec Avi Shua told The Register this week. “We believe these are empty legal threats.”

In a note on its website, Orca lamented at length the “outrageous” behavior of PAN, as well as posting a copy of the lawyer’s letter for world-plus-dog to read. That letter claimed Orca infringed PAN’s trademarks by using its name and logo in the review as well as breaching non-review clauses in the End-User License Agreement (EULA) of PAN’s product.

As such, the lawyer demanded the removal of the comparison material, and that the startup stop using PAN’s logo and name. We note the videos are still online, hosted by YouTube.

“It’s outrageous that the world’s largest cybersecurity vendor, its products being used by over 65,000 organizations according to its website, believes that its users aren’t entitled to share any benchmark or performance comparison of its products,” said Orca.

The lawyer’s letter [PDF] claimed Orca violated PAN’s EULA fine-print, something deputy general counsel Melinda Thompson described in her missive as “a clear breach” of terms “prohibiting an end user from disclosing, publishing or otherwise making publicly available any benchmark, performance or comparison tests… run on Palo Alto Networks products, in whole or in part.”

Shua told The Register Orca tried to give its rival a fair crack of the whip: “Even if we tried to be objective, we would have some biases. But we did try to do it as objectively as possible, by showing it to users: creating labs, screenshots, and showing how it looks like.” The fairness of the review, we note, is not what is at issue here: PAN forbids any kind of benchmarking and comparison of its gear.

Palo Alto Networks declined to comment when contacted by The Register.

Source: Palo Alto Networks threatens to sue security startup for comparison review, says it breaks software EULA • The Register

1 Who reads EULAs anyway? Are they in any way, shape or form defensible apart from maybe some ant fucker friendless lawyers?

2 Is PAN so very worried about the poor quality of their product that they feel they want to kill any and all benchmarks / comparisons?

Twitch Suddenly Mass-Deletes Thousands of Videos, Citing Music Copyright Claims – yes, copyright really doesn’t provide for innovation at all

“It’s finally happening: Twitch is taking action against copyrighted music — long a norm among streamers — in response to music industry pressure,” reports Kotaku.

But the Verge reports “there’s some funny stuff going on here.” First, Twitch is telling streamers that some of their content has been identified as violating copyright and that instead of letting streamers file counterclaims, it’s deleting the content; second, the company is telling streamers it’s giving them warnings, as opposed to outright copyright strikes…

Weirdly, Twitch decided to bulk delete infringing material instead of allowing streamers to archive their content or submit counterclaims. To me, that suggests that there are tons of infringements, and that Twitch needed to act very quickly and/or face a lawsuit it wouldn’t be able to win over its adherence to the safe harbor provision of the DMCA.

The email Twitch sent to their users “encourages them to delete additional content — up to and including using a new tool to unilaterally delete all previous clips,” reports Kotaku. One business streamer complains that it’s “insane” that Twitch basically informs them “that there is more content in violation despite having no identification system to find out what it is. Their solution to DMCA is for creators to delete their life’s work. This is pure, gross negligence.”

Or, as esports consultant Rod “Slasher” Breslau puts it, “It is absolutely insane that record labels have put Twitch in a position to force streamers to delete their entire life’s work, for some 10+ years of memories, and that Twitch has been incapable of preventing or aiding streamers for this situation. a total failure all around.”

Twitch’s response? “It is crucial that we protect the rights of songwriters, artists and other music industry partners. We continue to develop tools and resources to further educate our creators and empower them with more control over their content while partnering with industry-recognized vendors in the copyright space to help us achieve these goals.”

Source: Twitch Suddenly Mass-Deletes Thousands of Videos, Citing Music Copyright Claims – Slashdot

Of course, the money raised by these music companies doesn’t really go to the artists much – it’s basically swallowed up by the music companies themselves.

Samsung, Stanford make a 10,000PPI display that could lead to ‘flawless’ VR

Ask VR fans about their gripes and they’ll likely mention the “screen door” effect, or the gaps between pixels that you notice when looking at a display so close to your eyes. That annoyance might disappear entirely if Samsung and Stanford University have their way. They’ve developed (via IEEE Spectrum) OLED technology that supports resolutions up to 10,000 pixels per inch — well above what you see in virtually any existing display, let alone what you’d find in a modern VR headset like the Oculus Quest 2.

The new OLED tech uses films to emit white light between reflective layers, one silver and another made of reflective metal with nano-sized corrugations. This “optical metasurface” changes the reflective properties and allows specific colors to resonate through pixels. The design allows for much higher pixel densities than you see in the RGB OLEDs on phones, but doesn’t hurt brightness to the degree you see with white OLEDs in some TVs.

This would be ideal for VR and AR, creating a virtually ‘flawless’ image where you can’t see the screen door effect or even individual pixels. Displays like this might take years to arrive, since driving them would require much more computing power, but OLED tech would no longer be an obstacle.

It’s also more practical than you might think. Samsung is already working on a “full-size” display using the 10,000PPI tech, and the design of the corrugations makes large-scale manufacturing viable. It may just be a question of when and where you see this OLED rather than “if.”

Source: Samsung, Stanford make a 10,000PPI display that could lead to ‘flawless’ VR | Engadget

About 3% of Starlink satellites have failed so far – that’s 360 potential collisions now and 1,260 once SL is up

To date, the company has launched over 800 satellites and (as of this summer) is producing them at a rate of about 120 a month. There are even plans to have a constellation of 42,000 satellites in orbit before the decade is out.

However, there have been some problems along the way, as well. Aside from the usual concerns about light pollution and radio frequency interference (RFI), there is also the rate of failure these satellites have experienced. Specifically, about 3% of the satellites have proven to be unresponsive and are no longer maneuvering in orbit, which could prove hazardous to other satellites and spacecraft.

In order to prevent collisions in orbit, SpaceX equips its satellites with krypton Hall-effect thrusters (ion engines) to raise their orbit, maneuver in space and deorbit at the end of their lives. However, according to two recent notices SpaceX issued to the Federal Communications Commission (FCC) over the summer (mid-May and late June), several of their satellites have lost maneuvering capability since they were deployed.

Unfortunately, the company did not provide enough information to indicate which of their satellites were affected. For this reason, astrophysicist Jonathan McDowell of the Harvard-Smithsonian Center for Astrophysics (CfA) and the Chandra X-ray Center presented his own analysis of the satellites’ orbital behavior to suggest which satellites have failed.

The analysis was posted on McDowell’s website (Jonathan’s Space Report), where he combined SpaceX’s own data with U.S. government sources. From this, he determined that about 3% of satellites in the constellation have failed because they are no longer responding to commands. Naturally, some level of attrition is inevitable, and 3% is relatively low as failure rates go.

But every satellite that is incapable of maneuvering due to problems with its communications or its propulsion system creates a collision hazard for other satellites and spacecraft. As McDowell told Business Insider:

Artist’s impression of the orbital debris problem. Credit: UC3M

“I would say their failure rate is not egregious. It’s not worse than anybody else’s failure rates. The concern is that even a normal failure rate in such a huge constellation is going to end up with a lot of bad space junk.”

Kessler syndrome

Named after NASA scientist Donald J. Kessler, who first proposed it in 1978, Kessler syndrome refers to the threat posed by collisions in orbit. These lead to catastrophic breakups that create more debris that will lead to further collisions and breakups, and so on. When one takes into account rates of failure and SpaceX’s long-term plans for a “megaconstellation,” this syndrome naturally rears its ugly head.

Not long ago, SpaceX secured permission from the Federal Communications Commission (FCC) to deploy about 12,000 Starlink satellites to orbits ranging from 328 km to 580 km (200 to 360 mi). However, more recent filings with the International Telecommunications Union (ITU) show that the company hopes to create a megaconstellation of as many as 42,000 satellites.

In this case, a 3% failure rate works out to 360 and 1,260 satellites (respectively), each weighing 250 kg (550 lbs), becoming defunct over time. As of February 2020, according to the ESA’s Space Debris Office (SDO), there are currently about 5,500 satellites in orbit around Earth—around 2,300 of which are still operational. That means (employing naked math) that a full Starlink megaconstellation would increase the number of non-functioning satellites in orbit by 11% to 40%.
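
The arithmetic behind those figures (all inputs are the article's own numbers) can be checked in a few lines:

```python
# 3% failure rate applied to the approved and the proposed constellation sizes.
failure_rate = 0.03
for constellation in (12_000, 42_000):
    print(constellation, "satellites ->", round(failure_rate * constellation), "defunct")
# 12,000 -> 360 defunct; 42,000 -> 1,260 defunct

# Compare against today's dead satellites (ESA figures cited above).
defunct_today = 5_500 - 2_300            # ~3,200 non-functioning satellites in orbit
print(round(100 * 360 / defunct_today), round(100 * 1_260 / defunct_today))
# ~11% and ~39% (the article rounds the latter to 40%)
```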

The problem of debris and collisions looks even more threatening when you consider the amount of debris in orbit. Beyond non-functioning satellites, the SDO also estimates that there are currently 34,000 objects in orbit measuring more than 10 cm (~4 inches) in diameter, 900,000 objects between 1 cm to 10 cm (0.4 to 4 in), and 128 million objects between 1 mm to 1 cm.

Source: About 3% of Starlink satellites have failed so far

Well done yet again, Mr Elon Musk

Oculus owners forced onto Facebook accounts, will have purchases wiped and devices bricked if they ever leave FB. Who would have guessed?

Oculus users, already fuming at Facebook chaining their VR headsets to their Facebook accounts, have been warned they could lose all their Oculus purchases and account information in future if they ever delete their profile on the social network.

The rule is a further binding of the gaming company that Facebook bought in 2014 to the mothership, and comes just two months after Facebook decided all new Oculus users require Facebook accounts to use their VR gizmos, and all current Oculus users will need a Facebook account by 2023. Failure to do so may cause apps installed on the headsets to no longer work as expected.

The decision to cement together what many users see as largely unrelated activities – playing video games and social media posts – has led to a wave of anger among Oculus users, and a renewed online effort to jailbreak new Oculus headgear to bypass Facebook’s growing restrictions.

That outrage was fueled when Facebook initially said that if people attempted to connect more than one Oculus headset to a single Facebook account, something families in particular want to do as it avoids having to install the same app over and over, it would ban them from the service.

Facebook has since dropped that threat, and said it is working on allowing multiple devices and accounts to connect. But the control-freak instincts of the internet giant were yet again on full display, something that was noted by the man who first drew attention to Oculus’s new terms and conditions, CEO of fitness gaming company Yur, Cix Liv.

“My favorite line is ‘While I do completely understand your concerns, we do need to have you comply with the Facebook terms of service’ like Facebook thinks they are some authoritarian government,” he tweeted.

[…]

Source: Oculus owners told not only to get Facebook accounts, purchases will be wiped if they ever leave social network • The Register

LG’s rollable OLED TV goes on sale for $87,000

After years of teasing, LG is finally selling a rollable OLED TV. The RX-branded Signature OLED R launched in South Korea today, offering a 65-inch 4K display that tucks away into its base at the press of a button. Besides being able to hide completely, as LG has promised in CES previews, the TV has different settings (Full View, Line View and Zero View) for different situations.

Source: LG’s rollable OLED TV goes on sale for $87,000 | Engadget

The Department of Justice sues Google over antitrust concerns | Engadget

We all knew it was coming. Today, the US government’s Department of Justice filed an antitrust lawsuit against Google. The company, which is part of Alphabet, is accused of having an unfair monopoly over search and search-related advertising. In addition, the department objects to the terms around Android, the most widely used mobile operating system, which force phone manufacturers to pre-load Google applications and set Google as the default search engine. Those terms stop rival search providers from gaining traction and, as a consequence, ensure that Google continues to make enormous amounts of cash via search-related advertising.

“Google pays billions of dollars each year to distributors—including popular-device manufacturers such as Apple, LG, Motorola, and Samsung; major U.S. wireless carriers such as AT&T, T-Mobile, and Verizon; and browser developers such as Mozilla, Opera, and UCWeb— to secure default status for its general search engine and, in many cases, to specifically prohibit Google’s counterparties from dealing with Google’s competitors,” the lawsuit filing reads.

[…]

Walker also argued that Google competes with platforms such as Twitter, Expedia and OpenTable, which let you search for news, flights and restaurant reservations respectively. “Every day, Americans choose to use all these services and thousands more,” he said.

Some of Google’s rivals feel differently. “We’re pleased the DOJ has taken this key step in holding Google accountable for the ways it has blocked competition, locked people into using its products, and achieved a market position so dominant they refuse to even talk about it out loud,” Gabriel Weinberg, CEO of search engine provider DuckDuckGo said in a Twitter thread. “While Google’s anti-competitive practices hurt companies like us, the negative impact on society and democracy wrought by their surveillance business model is far worse. People should be able to opt out in one click.”

As the Wall Street Journal explains, the Justice Department has been preparing to launch this case for over a year. “Over the course of the last 16 months, the Antitrust Division collected convincing evidence that Google no longer competes only on the merits but instead uses its monopoly power – and billions in monopoly profits – to lock up key pathways to search on mobile phones, browsers, and next generation devices, depriving rivals of distribution and scale,” the Department said in a statement today.

[…]

Today’s lawsuit is arguably the biggest antitrust move since the government’s case against Microsoft in 1998. Back then, the technology company was accused of using its Windows monopoly to push Microsoft-made software such as Internet Explorer. A judge eventually ordered Microsoft to break up into two separate companies. The technology giant appealed, however, and by the end of 2001 it had reached a settlement with the department. “Back then, Google claimed Microsoft’s practices were anticompetitive, and yet, now, Google deploys the same playbook to sustain its own monopolies,” the Justice Department argues in today’s lawsuit filing.

Source: The Department of Justice sues Google over antitrust concerns | Engadget

The timing of this is not coincidental. The DoJ was apparently pushed into this before it was ready in order to look good for the elections.

I have been talking about this since early 2019 and it’s great to see how this has been gaining traction since then


 

eBay makes a dedicated portal for officially refurbished gear

eBay is taking on Amazon Warehouse with a new destination called Certified Refurbished, selling used goods from brands like Lenovo, Microsoft and Makita. The idea is that you can buy second-hand products at significant discounts over new, but still get a two-year warranty (from Allstate), a money-back guarantee and 30-day “hassle-free” returns, along with new accessories, manuals and manufacturer-sealed packaging.

eBay’s Certified Refurbished has five priority categories: laptops, portable audio, power tools, small kitchen appliances and vacuums. It offers several brand exclusives, including De’Longhi, Dirt Devil, Hoover, Makita and Philips, along with inventory exclusives from Dewalt, iRobot and Skullcandy. It’s also selling products from participating brands including Dell, Acer, Bissell, Black & Decker, Cuisinart, KitchenAid, Lenovo, Microsoft, Miele and Sennheiser.

To make the cut, manufacturers must offer items in “pristine, like-new condition that has been professionally inspected, cleaned, and refurbished by the manufacturer, or a manufacturer-approved vendor,” according to eBay. It also must be in new packaging with original or new accessories.

Source: eBay makes a dedicated portal for officially refurbished gear | Engadget

Climate change and flying: what share of global CO2 emissions come from aviation?

Flying is a highly controversial topic in climate debates. There are a few reasons for this.

The first is the disconnect between its role in our personal and collective carbon emissions. Air travel dominates frequent travellers’ individual contributions to climate change. Yet aviation overall accounts for only 2.5% of global carbon dioxide (CO2) emissions. This is because there are large inequalities in how much people fly – many do not, or cannot afford to, fly at all [best estimates put this figure at around 80% of the world population – we will look at this in more detail in an upcoming article].

The second is how aviation emissions are attributed to countries. CO2 emissions from domestic flights are counted in a country’s emission accounts. International flights are not – instead they are counted as their own category: ‘bunker fuels’. The fact that they don’t count towards the emissions of any country means there are few incentives for countries to reduce them.

It’s also important to note that unlike the most common greenhouse gases – carbon dioxide, methane or nitrous oxide – non-CO2 forcings from aviation are not included in the Paris Agreement. This means they could be easily overlooked – especially since international aviation is not counted within any country’s emissions inventories or targets.

How much of a role does aviation play in global emissions and climate change? In this article we take a look at the key numbers that are useful to know.

Global aviation (including domestic and international; passenger and freight) accounts for:

  • 1.9% of greenhouse gas emissions (which includes all greenhouse gases, not only CO2)
  • 2.5% of CO2 emissions
  • 3.5% of ‘radiative forcing’. Radiative forcing measures the difference between incoming energy and the energy radiated back to space. If more energy is absorbed than radiated, the atmosphere becomes warmer.

The latter two numbers refer to 2018, and the first to 2016, the latest year for which such data are available.


Aviation accounts for 2.5% of global CO2 emissions

As we will see later in this article, there are a number of processes by which aviation contributes to climate change. But the one that gets the most attention is its contribution via CO2 emissions. Most flights are powered by kerosene-based jet fuel – although some partially run on biofuels – which is converted to CO2 when burned.

In a recent paper, researchers – David Lee and colleagues – reconstructed annual CO2 emissions from global aviation dating back to 1940. This was calculated based on fuel consumption data from the International Energy Agency (IEA), and earlier estimates from Robert Sausen and Ulrich Schumann (2000).

The time series of global emissions from aviation since 1940 is shown in the accompanying chart. In 2018, it’s estimated that global aviation – which includes both passenger and freight – emitted 1.04 billion tonnes of CO2.

This represented 2.5% of total CO2 emissions in 2018.

Aviation emissions have doubled since the mid-1980s. But they’ve been growing at a similar rate to total CO2 emissions – this means aviation’s share of global emissions has been relatively stable: in the range of 2% to 2.5%.

Global CO2 emissions from aviation

Non-CO2 climate impacts mean aviation accounts for 3.5% of global warming

Aviation accounts for around 2.5% of global CO2 emissions, but its overall contribution to climate change is higher. This is because air travel does not only emit CO2: it affects the climate in a number of more complex ways.

As well as emitting CO2 from burning fuel, planes affect the concentration of other gases and pollutants in the atmosphere. They result in a short-term decrease, but long-term increase, in ozone (O3); a decrease in methane (CH4); emissions of water vapour; soot; sulfur aerosols; and water contrails. Some of these impacts result in warming, while others induce a cooling effect. Overall, the warming effect is stronger.

David Lee et al. (2020) quantified the overall effect of aviation on global warming when all of these impacts were included. To do this they calculated the so-called ‘Radiative Forcing’. Radiative forcing measures the difference between incoming energy and the energy radiated back to space. If more energy is absorbed than radiated, the atmosphere becomes warmer.

In this chart we see their estimates for the radiative forcing of the different elements. When we combine them, aviation accounts for approximately 3.5% of net radiative forcing: that is, 3.5% of warming.

Although CO2 gets most of the attention, it accounts for less than half of this warming. Two-thirds (66%) comes from non-CO2 forcings. Contrails – water vapor trails from aircraft exhausts – account for the largest share.

We don’t yet have the technologies to decarbonize air travel

Aviation’s contribution to climate change – 3.5% of warming, or 2.5% of CO2 emissions – is often less than people think. It’s currently a relatively small chunk of emissions compared to other sectors.

The key challenge is that it is particularly hard to decarbonize. We have solutions to reduce emissions for many of the largest emitters – such as power or road transport – and it’s now a matter of scaling them. We can deploy renewable and nuclear energy technologies, and transition to electric cars. But we don’t have proven solutions to tackle aviation yet.

There are some design concepts emerging – Airbus, for example, have announced plans to have the first zero-emission aircraft by 2035, using hydrogen fuel cells. Electric planes may be a viable concept, but are likely to be limited to very small aircraft due to the limitations of battery technologies and capacity.

Innovative solutions may be on the horizon, but they’re likely to be far in the distance.

Appendix: Efficiency improvements mean air traffic has increased more rapidly than emissions

Global emissions from aviation have increased a lot over the past half-century. However, air travel volumes increased even more rapidly.

Since 1950, aviation emissions increased almost seven-fold; since 1960 they’ve tripled. Air traffic volume – here defined as revenue passenger kilometers (RPK) traveled – increased by orders of magnitude more: almost 300-fold since 1950; and 75-fold since 1960 [you find this data in our interactive chart here].

The much slower growth in emissions means aviation efficiency has seen massive improvements. In the chart we show both the increase in global airline traffic since 1950, and aviation efficiency, measured as the quantity of CO2 emitted per revenue passenger kilometer traveled. In 2018, approximately 125 grams of CO2 were emitted per RPK. In 1960, this was eleven-fold higher; in 1950 it was twenty-fold higher. Aviation has seen massive efficiency improvements over the past 50 years.

These improvements have come from several sources: improvements in the design and technology of aircraft; larger aircraft sizes (allowing for more passengers per flight); and an increase in how ‘full’ passenger flights are. This last metric is termed the ‘passenger load factor’. The passenger load factor measures the actual number of kilometers traveled by paying customers (RPK) as a percentage of the available seat kilometers (ASK) – the kilometers traveled if every plane was full. If every plane was full the passenger load factor would be 100%. If only three-quarters of the seats were filled, it would be 75%.

The global passenger load factor increased from 61% in 1950 to 82% in 2018 [you can find this data in our interactive chart here].
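As a rough illustration of the two metrics just described – CO2 per RPK and the passenger load factor – here is a minimal TypeScript sketch using a hypothetical flight. The seat count, distance, and fuel burn are invented for illustration; only the 125 g/RPK and 82% reference points come from the text above.

```typescript
// Minimal sketch of the two efficiency metrics described above.
// The flight itself (seat count, distance, CO2 emitted) is hypothetical;
// only the 125 g/RPK and 82% reference points come from the article.

// Carbon intensity: grams of CO2 emitted per revenue passenger kilometer (RPK)
function carbonIntensity(co2Grams: number, rpk: number): number {
  return co2Grams / rpk;
}

// Passenger load factor: RPK as a percentage of available seat kilometers (ASK)
function passengerLoadFactor(rpk: number, ask: number): number {
  return (rpk / ask) * 100;
}

// A hypothetical 300-seat aircraft flying 1,000 km with 246 paying passengers:
const ask = 300 * 1_000;  // 300,000 available seat kilometers
const rpk = 246 * 1_000;  // 246,000 revenue passenger kilometers

console.log(passengerLoadFactor(rpk, ask));     // 82 – the 2018 global average
console.log(carbonIntensity(30_750_000, rpk));  // 125 g CO2/RPK – the 2018 figure, assuming ~30.75 t of CO2 for this flight
```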

Source: Climate change and flying: what share of global CO2 emissions come from aviation? – Our World in Data

When you tell Chrome to wipe private data about you, it spares two websites from the purge: Google.com, YouTube

Google exempts its own websites from Chrome’s automatic data-scrubbing feature, allowing the ads giant to potentially track you even when you’ve told it not to.

Programmer Jeff Johnson noticed the unusual behavior, and this month documented the issue with screenshots. In his assessment of the situation, he noted that if you set up Chrome, on desktop at least, to automatically delete all cookies and so-called site data when you quit the browser, it deletes it all as expected – except your site data for Google.com and YouTube.com.

While cookies are typically used to identify you and store some of your online preferences when visiting websites, site data is on another level: it includes, among other things, a storage database in which a site can store personal information about you, on your computer, that can be accessed again by the site the next time you visit. Thus, while your Google and YouTube cookies may be wiped by Chrome, their site data remains on your computer, and it could, in future, be used to identify you.

Johnson noted that after he configured Chrome to wipe all cookies and site data when the application closed, everything was cleared as expected for sites like apple.com. Yet, the main Google search site and video service YouTube were allowed to keep their site data, though the cookies were gone. If Google chooses at some point to stash the equivalent of your Google cookies in the Google.com site data storage, they could be retrieved next time you visit Google, and identify you, even though you thought you’d told Chrome not to let that happen.
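To illustrate what “site data” can do in practice, here is a minimal TypeScript sketch of how any site could stash an identifier in localStorage (one form of site data) so that it survives cookie clearing. This is a generic illustration of the mechanism, not code that Google.com or YouTube.com is known to run; the key name is hypothetical.

```typescript
// Minimal sketch: how a site can persist an identifier in "site data"
// (here, localStorage) independently of cookies. Generic illustration only –
// not a claim about what Google.com or YouTube.com actually does.

function getOrCreateVisitorId(): string {
  const key = "visitor_id"; // hypothetical key name
  let id = localStorage.getItem(key);
  if (id === null) {
    // No stored identifier yet: generate one and persist it in site data.
    id = crypto.randomUUID();
    localStorage.setItem(key, id);
  }
  return id;
}

// Clearing cookies alone does not touch this value; unless the browser also
// wipes the site's storage, the same identifier reappears on the next visit.
console.log(getOrCreateVisitorId());
```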

Ultimately, it potentially allows Google, and only Google, to continue tracking Chrome users who opted for some more privacy; something that is enormously valuable to the internet goliath in delivering ads. Many users set Chrome to automatically delete cookies-and-site-data on exit for that reason – to prevent being stalked around the web – even though it often requires them to log back into websites the next time they visit due to their per-session cookies being wiped.

Yet Google appears to have granted itself an exception. The situation recalls a similar issue over location tracking, where Google continued to track people’s location through their apps even when users actively selected the option to prevent that. Google had put the real option to start location tracking under a different setting that didn’t even include the word “location.”

In this case, “Clear cookies and site data when you quit Chrome” doesn’t actually mean what it says, at least not for Google.

There is a workaround: you can manually add “Google.com” and “YouTube.com” within the browser to a list of “Sites that can never use cookies.” In that case, no information, not even site data, is saved from those sites, which is all in all a little confusing.

[…]


Source: When you tell Chrome to wipe private data about you, it spares two websites from the purge: Google.com, YouTube • The Register

Announcing: Graph-Native Machine Learning in Neo4j!

We’re delighted to announce you can now take advantage of graph-native machine learning (ML) inside of Neo4j! We’ve just released a preview of Neo4j’s Graph Data Science™ Library version 1.4, which includes graph embeddings and an ML model catalog.

Together, these enable you to create representations of your graph and make graph predictions – all within Neo4j.

[…]

Graph Embeddings

The graph embedding algorithms are the star of the show in this release.

These algorithms are used to transform the topology and features of your graph into fixed-length vectors (or embeddings) that uniquely represent each node.

Graph embeddings are powerful, because they preserve the key features of the graph while reducing dimensionality in a way that can be decoded. This means you can capture the complexity and structure of your graph and transform it for use in various ML predictions.


Graph embeddings capture the nuances of graphs in a way that can be used to make predictions or lower dimensional visualizations.

In this release, we are offering three embedding options that learn the graph topology and, in some cases, node properties to calculate more accurate representations:

Node2Vec:

    • This is probably the most well-known graph embedding algorithm. It uses random walks to sample a graph, and a neural network to learn the best representation of each node.

FastRP:

    • A more recent graph embedding algorithm that uses linear algebra to project a graph into lower dimensional space. In GDS 1.4, we’ve extended the original implementation to support node features and directionality as well.
    • FastRP is up to 75,000 times faster than Node2Vec, while providing equivalent accuracy!

GraphSAGE:

    • This is an embedding technique using inductive representation learning on graphs, via graph convolutional neural networks, where the graph is sampled to learn a function that can predict embeddings (rather than learning embeddings directly). This means you can learn on a subset of your graph and use that representative function for new data and make continuous predictions as your graph updates. (Wow!)
    • If you’d like a deeper dive into how it works, check out the GraphSAGE session from the NODES event.
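For a sense of how these embedding procedures are typically invoked, here is a minimal sketch using the Neo4j JavaScript driver to stream FastRP embeddings from an already-projected named graph. The graph name and node property are hypothetical, and the exact procedure names and tiers (alpha/beta/production) have shifted between GDS releases, so treat them as assumptions to verify against your installed version.

```typescript
// Minimal sketch: streaming FastRP embeddings from an already-projected named
// graph via the Neo4j JavaScript driver. The graph name ('myGraph') and node
// property ('name') are hypothetical; procedure names and tiers have shifted
// between GDS releases, so verify them against your installed version.
import neo4j from "neo4j-driver";

async function streamFastRPEmbeddings(): Promise<void> {
  const driver = neo4j.driver(
    "bolt://localhost:7687",
    neo4j.auth.basic("neo4j", "password")
  );
  const session = driver.session();
  try {
    const result = await session.run(
      `CALL gds.fastRP.stream('myGraph', { embeddingDimension: 128 })
       YIELD nodeId, embedding
       RETURN gds.util.asNode(nodeId).name AS name, embedding
       LIMIT 5`
    );
    for (const record of result.records) {
      console.log(record.get("name"), record.get("embedding"));
    }
  } finally {
    await session.close();
    await driver.close();
  }
}

streamFastRPEmbeddings();
```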


Graph embeddings available in the Neo4j Graph Data Science Library v1.4. The caution marks indicate that, while directions are supported, our internal benchmarks don’t show performance improvements.

Graph ML Model Catalog

GraphSAGE trains a model to predict node embeddings for unseen parts of the graph, or new data as mentioned above.

To really capitalize on what GraphSAGE can do, we needed to add a catalog to be able to store and reference these predictive models. This model catalog lives in the Neo4j analytics workspace and contains versioning information (what data was this trained on?), time stamps and, of course, the model names.

When you want to use a model, you can provide the name of the model to GraphSAGE, along with the named graph you want to apply it to.
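As a rough sketch of that workflow, the example below trains a GraphSAGE model under a name and then applies the stored model, by name, to a different projected graph. Graph names, feature properties, and the beta-tier procedure names are assumptions to verify against your GDS version.

```typescript
// Minimal sketch: train a GraphSAGE model under a name, then apply the stored
// model by name to a different projected graph. Graph names, feature
// properties, and the beta-tier procedure names are assumptions to verify
// against your GDS version.
import neo4j from "neo4j-driver";

async function trainAndApplyGraphSage(): Promise<void> {
  const driver = neo4j.driver(
    "bolt://localhost:7687",
    neo4j.auth.basic("neo4j", "password")
  );
  const session = driver.session();
  try {
    // Train on one named graph and register the model in the catalog.
    await session.run(
      `CALL gds.beta.graphSage.train('trainGraph', {
         modelName: 'mySageModel',
         featureProperties: ['age', 'score']  // hypothetical node properties
       })`
    );

    // Later: reference the stored model by name against new data.
    const result = await session.run(
      `CALL gds.beta.graphSage.stream('newGraph', { modelName: 'mySageModel' })
       YIELD nodeId, embedding
       RETURN nodeId, embedding
       LIMIT 5`
    );
    result.records.forEach((r) => console.log(r.get("nodeId"), r.get("embedding")));
  } finally {
    await session.close();
    await driver.close();
  }
}

trainAndApplyGraphSage();
```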


GraphSAGE ML Models are stored in the Neo4j analytics workspace.

[…]

Source: Announcing: Graph-Native Machine Learning in Neo4j!

Amazon Stops Pretending and launches anticompetitive New Panel Program

After spending years promising Congress that the data it collected from third-party sellers wasn’t used to beef up its private-label products, today Amazon decided to roll out a product meant to do exactly that. The Amazon Shopper Panel, as it’s called, promises to pay Amazon customers that offer intel to the ecommerce giant about where they shop when they’re not shopping on Amazon dot com.

Here’s how the Shopper Panel works: After getting an IRL or e-receipt from any business that isn’t owned by Amazon (so Whole Foods or Four Star locations are not eligible), panelists can either submit a picture of that receipt through the app, or in the case of digital copies, forward their emailed details to a panel-specific email address. According to the Panel website, folks that upload “at least” 10 receipts per month can either cash that in for $10 in Amazon credit or $10 donated to their charity of choice. Along with that baseline payout, the app will also dole out additional earnings to panelists who answer the occasional survey about certain brands or products within the app.

Not every receipt counts toward this program. Per Amazon, receipts from grocery stores, drug stores, restaurants, and movie theaters—along with just about any other “retailer” or “entertainment outlet”—are fair game. Receipts from casinos, gun stores, transit fare, tuition or apartment rentals aren’t.

While the program is invite-only for now, any curious Amazon customer based in the U.S. can download the Panel app from the iOS App Store or the Google Play Store if they want to put their name on the waitlist.

[…]

Under the Amazon Panel site’s “Privacy” tab, the company notes that any receipts you share will go toward “[helping] brands offer better products and [making] ads more relevant on Amazon.” The company also notes any data gleaned from these receipts or surveys might also be used to “improve the product selection on Amazon.com and affiliate stores such as Whole Foods Market,” and to “improve the content offered through Amazon services such as Prime Video.”

That’s why this rollout is a particularly gutsy move for Amazon to take right now. Recent months have seen the company come under an increasing barrage of regulatory fire from authorities both in the U.S. and in Europe over a scandal that largely revolved around tracking consumers’ purchase data—not unlike the data pulled from the average receipt—on its platform. This past spring, an investigation from the Wall Street Journal revealed that Amazon had spent years surveilling the sales earned by the platform’s third-party sellers specifically to create its own competing products under the Amazon private label. This story came out barely a year after Amazon’s associate general counsel, Nate Sutton, told Congress that the company didn’t use “individual seller data” to do just that.

[…]

Source: Amazon’s New Panel Program Is An Anticompetitive Nightmare