The Linkielist

Linking ideas with the world

Daycares in Finland Built a ‘Forest Floor’, And It Changed Children’s Immune Systems

Playing through the greenery and litter of a mini forest’s undergrowth for just one month may be enough to change a child’s immune system, according to a small new experiment.

When daycare workers in Finland rolled out a lawn, planted forest undergrowth such as dwarf heather and blueberries, and allowed children to care for crops in planter boxes, the diversity of microbes in the guts and on the skin of young kids appeared healthier in a very short space of time.

Compared to other city kids who play in standard urban daycares with yards of pavement, tile and gravel, 3-, 4-, and 5-year-olds at these greened-up daycare centres in Finland showed increased T-cells and other important immune markers in their blood within 28 days.

“We also found that the intestinal microbiota of children who received greenery was similar to the intestinal microbiota of children visiting the forest every day,” says environmental scientist Marja Roslund from the University of Helsinki.

One daycare before (left) and after introducing grass and planters (right). (University of Helsinki)

Prior research has shown early exposure to green space is somehow linked to a well-functioning immune system, but it’s still not clear whether that relationship is causal or not.

The experiment in Finland is the first to explicitly manipulate a child’s urban environment and then test for changes in their microbiome and, in turn, their immune system.

[…]

The results aren’t conclusive and they will need to be verified among larger studies around the world. Still, the benefits of green spaces appear to go beyond our immune systems.

Research shows getting outside is also good for a child’s eyesight, and being in nature as a kid is linked to better mental health. Some recent studies have even shown green spaces are linked to structural changes in the brains of children.

What’s driving these incredible results is not yet clear. It could be linked to changes to the immune system, or something about breathing healthy air, soaking in the sun, exercising more or having greater peace of mind.

Given the complexities of the real world, it’s really hard to control for all the environmental factors that impact our health in studies.

While rural children tend to have fewer cases of asthma and allergies, the available literature on the link between green spaces and these immune disorders is inconsistent.

The current research has a small sample size, only found a correlation, and can’t account for what children were doing outside daycare hours, but the positive changes seen are enough for scientists in Finland to offer some advice.

[…]

Bonding with nature as a kid is also good for the future of our planet’s ecosystems. Studies show kids who spend time outdoors are more likely to want to become environmentalists as adults, and in a rapidly changing world, that’s more important than ever.

Just make sure everyone’s up to date on their tetanus vaccinations, Sinkkonen advises.

The study was published in Science Advances.

Source: Daycares in Finland Built a ‘Forest Floor’, And It Changed Children’s Immune Systems

Brave browser first to nix CNAME deception, the sneaky DNS trick used by marketers to duck privacy controls

The Brave web browser will soon block CNAME cloaking, a technique used by online marketers to defy privacy controls designed to prevent the use of third-party cookies.

The browser security model makes a distinction between first-party domains – those being visited – and third-party domains – from the suppliers of things like image assets or tracking code, to the visited site. Many of the online privacy abuses over the years have come from third-party resources like scripts and cookies, which is why third-party cookies are now blocked by default in Brave, Firefox, Safari, and Tor Browser.

Microsoft Edge, meanwhile, has a tiered scheme that defaults to a “Balanced” setting, which blocks some third-party cookies. Google Chrome has implemented its SameSite cookie scheme as a prelude to its planned 2022 phase-out of third-party cookies, maybe.

While Google tries to win support for its various Privacy Sandbox proposals, which aim to provide marketers with ostensibly privacy-preserving alternatives to increasingly shunned third-party cookies, marketers have been relying on CNAME shenanigans to pass their third-party trackers off as first-party resources.

The developers behind open-source content blocking extension uBlock Origin implemented a defense against CNAME-based tracking in November and now Brave has done so as well.

CNAME by name, cookie by nature

In a blog post on Tuesday, Anton Lazarev, research engineer at Brave Software, and senior privacy researcher Peter Snyder, explain that online tracking scripts may use canonical name DNS records, known as CNAMEs, to make associated third-party tracking domains look like they’re part of the first-party websites actually being visited.

They point to the site https://mathon.fr as an example, noting that without CNAME uncloaking, Brave blocks six requests for tracking scripts served by ad companies like Google, Facebook, Criteo, Sirdan, and Trustpilot.

But the page also makes four requests via a script hosted at a randomized path under the first-party subdomain 16ao.mathon.fr.

“Inspection outside of the browser reveals that 16ao.mathon.fr actually has a canonical name of et5.eulerian.net, meaning it’s a third-party script served by Eulerian,” observe Lazarev and Snyder.
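The uncloaking check Lazarev and Snyder describe can be sketched as follows. This is a minimal illustration, not Brave’s implementation: the CNAME answer is stubbed with the article’s mathon.fr example rather than resolved live, and the registrable-domain check is a naive last-two-labels heuristic where real code would consult the Public Suffix List.

```python
# Sketch of CNAME uncloaking: a request that looks first-party is flagged
# when its canonical name resolves under a different registrable domain.
# The DNS answer is stubbed from the article's example; a real blocker
# would resolve CNAMEs live and use the Public Suffix List.

CNAME_RECORDS = {
    "16ao.mathon.fr": "et5.eulerian.net",  # example cited in the article
}

def registrable_domain(host: str) -> str:
    # Naive eTLD+1: last two labels. Breaks on suffixes like .co.uk;
    # real code must use the Public Suffix List.
    return ".".join(host.rstrip(".").split(".")[-2:])

def is_cloaked_third_party(request_host: str, page_host: str) -> bool:
    """True if request_host appears first-party relative to page_host
    but its CNAME target belongs to a different registrable domain."""
    if registrable_domain(request_host) != registrable_domain(page_host):
        return False  # plainly third-party; ordinary blocking applies
    target = CNAME_RECORDS.get(request_host)
    return (target is not None
            and registrable_domain(target) != registrable_domain(page_host))

print(is_cloaked_third_party("16ao.mathon.fr", "mathon.fr"))  # True: Eulerian tracker
print(is_cloaked_third_party("www.mathon.fr", "mathon.fr"))   # False: genuine first-party
```

With uncloaking in place, a request to 16ao.mathon.fr can be matched against filter lists as if it were made to eulerian.net directly.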

When Brave 1.17 ships next month (currently available as a developer build), it will be able to uncloak the CNAME deception and block the Eulerian script.

Other browser vendors are planning related defenses. Mozilla has been working on a fix in Firefox since last November. And in August, Apple’s Safari WebKit team proposed a way to prevent CNAME cloaking from being used to bypass the seven-day cookie lifetime imposed by WebKit’s Intelligent Tracking Protection system.

Source: Brave browser first to nix CNAME deception, the sneaky DNS trick used by marketers to duck privacy controls • The Register

Physical Security Blueprints of Many Companies Leaked in Hack of Swedish Firm Gunnebo

In March 2020, KrebsOnSecurity alerted Swedish security giant Gunnebo Group that hackers had broken into its network and sold the access to a criminal group which specializes in deploying ransomware. In August, Gunnebo said it had successfully thwarted a ransomware attack, but this week it emerged that the intruders stole and published online tens of thousands of sensitive documents — including schematics of client bank vaults and surveillance systems.

The Gunnebo Group is a Swedish multinational company that provides physical security to a variety of customers globally, including banks, government agencies, airports, casinos, jewelry stores, tax agencies and even nuclear power plants. The company has operations in 25 countries, more than 4,000 employees, and billions in revenue annually.

Acting on a tip from Milwaukee, Wis.-based cyber intelligence firm Hold Security, KrebsOnSecurity in March told Gunnebo about a financial transaction between a malicious hacker and a cybercriminal group which specializes in deploying ransomware. That transaction included credentials to a Remote Desktop Protocol (RDP) account apparently set up by a Gunnebo Group employee who wished to access the company’s internal network remotely.

[…]

Larsson quotes Gunnebo CEO Stefan Syrén saying the company never considered paying the ransom the attackers demanded in exchange for not publishing its internal documents. What’s more, Syrén seemed to downplay the severity of the exposure.

“I understand that you can see drawings as sensitive, but we do not consider them as sensitive automatically,” the CEO reportedly said. “When it comes to cameras in a public environment, for example, half the point is that they should be visible, therefore a drawing with camera placements in itself is not very sensitive.”

It remains unclear whether the stolen RDP credentials were a factor in this incident. But the password to the Gunnebo RDP account — “password01” — suggests the security of its IT systems may have been lacking in other areas as well.

[…]

Source: Security Blueprints of Many Companies Leaked in Hack of Swedish Firm Gunnebo — Krebs on Security

In a first, researchers extract secret key used to encrypt Intel CPU code

Researchers have extracted the secret key that encrypts updates to an assortment of Intel CPUs, a feat that could have wide-ranging consequences for the way the chips are used and, possibly, the way they’re secured.

The key makes it possible to decrypt the microcode updates Intel provides to fix security vulnerabilities and other types of bugs. Having a decrypted copy of an update may allow hackers to reverse engineer it and learn precisely how to exploit the hole it’s patching. The key may also allow parties other than Intel—say a malicious hacker or a hobbyist—to update chips with their own microcode, although that customized version wouldn’t survive a reboot.

“At the moment, it is quite difficult to assess the security impact,” independent researcher Maxim Goryachy said in a direct message. “But in any case, this is the first time in the history of Intel processors when you can execute your microcode inside and analyze the updates.” Goryachy and two other researchers—Dmitry Sklyarov and Mark Ermolov, both with security firm Positive Technologies—worked jointly on the project.

The key can be extracted for any chip—be it a Celeron, Pentium, or Atom—that’s based on Intel’s Goldmont architecture.

[…]

attackers can’t use Chip Red Pill and the decryption key it exposes to remotely hack vulnerable CPUs, at least not without chaining it to other vulnerabilities that are currently unknown. Similarly, attackers can’t use these techniques to infect the supply chain of Goldmont-based devices.

[…]

In theory, it might also be possible to use Chip Red Pill in an evil maid attack, in which someone with fleeting access to a device hacks it. But in either of these cases, the hack would be tethered, meaning it would last only as long as the device was turned on. Once restarted, the chip would return to its normal state. In some cases, the ability to execute arbitrary microcode inside the CPU may also be useful for attacks on cryptography keys, such as those used in trusted platform modules.

“For now, there’s only one but very important consequence: independent analysis of a microcode patch that was impossible until now,” Positive Technologies researcher Mark Ermolov said. “Now, researchers can see how Intel fixes one or another bug/vulnerability. And this is great. The encryption of microcode patches is a kind of security through obscurity.”

Source: In a first, researchers extract secret key used to encrypt Intel CPU code | Ars Technica

Another eBay exec pleads guilty after couple stalked, harassed for daring to criticize the internet tat bazaar – pig corpses involved

Philip Cooke, 55, oversaw eBay’s security operations in Europe and Asia and was a former police captain in Santa Clara, California. He pleaded guilty this week to conspiracy to commit cyberstalking and conspiracy to tamper with witnesses.

Cooke, based in San Jose, was just one of seven employees, including one manager, accused of targeting a married couple living on the other side of the United States, in Massachusetts, because they didn’t like the couple’s criticisms of eBay in their newsletter.

It’s said the team would post aggressive anonymous comments on the couple’s newsletter website, and at some point planned a concerted campaign against the pair including cyberstalking and harassment. Among other things, prosecutors noted, “several of the defendants ordered anonymous and disturbing deliveries to the victims’ home, including a preserved fetal pig, a bloody pig Halloween mask and a book on surviving the loss of a spouse.”

[…]

But it was when the couple noticed they were under surveillance in their own home they finally went to the cops in Natick, where they lived, and officers opened an investigation.

It was Cooke’s behavior at that point that led to the subsequent charge of conspiracy to tamper with a witness: he formulated a plan to give the Natick police a false lead in an effort to prevent them from discovering proof that his team had sent the pig’s head and other items. The eBay employees also deleted digital evidence that showed their involvement, prosecutors said, obstructing an investigation and breaking another law.

[…]

Source: Another eBay exec pleads guilty after couple stalked, harassed for daring to criticize the internet tat bazaar • The Register

NASA Discovers a Rare Metal Asteroid Worth $10,000 Quadrillion

NASA’s Hubble Space Telescope has discovered a rare, heavy and immensely valuable asteroid called “16 Psyche” in the Solar System’s main asteroid belt between Mars and Jupiter.

Asteroid Psyche is located at roughly 230 million miles (370 million kilometers) from Earth and measures 140 miles (226 kilometers) across, about the size of West Virginia. What makes it special is that, unlike most asteroids that are either rocky or icy, Psyche is made almost entirely of metals, just like the core of Earth, according to a study published in the Planetary Science Journal on Monday.

[…]

Given the asteroid’s size, its metal content could be worth $10,000 quadrillion ($10,000,000,000,000,000,000), or about 10,000 times the global economy as of 2019.

[…]

Psyche is the target of the NASA Discovery Mission Psyche, expected to launch in 2022 atop a SpaceX Falcon Heavy rocket. Further facts about the asteroid, including its exact metal content, will hopefully be uncovered when an orbiting probe arrives in early 2026.

[…]

The asteroid is believed to be the dead core left by a planet that failed during its formation early in the Solar System’s life or the result of many violent collisions in its distant past.

“Short of it being the Death Star… one other possibility is that it’s material that formed very near the Sun early in the Solar System,” Elkins-Tanton told Forbes in a May 2017 interview. “I figure we’re either going to go see something that’s really improbable and unique, or something that is completely astonishing.”

Source: NASA Discovers a Rare Metal Asteroid Worth $10,000 Quadrillion | Observer

I’d invest in the NASA mission, but it’s being launched on a SpaceX vehicle, which means that Musk will either send it the wrong direction (like his car) or more likely, it will blow up.

NSA: foreign spies used one of our crypto backdoors – we learnt some lessons but we lost them

It’s said the NSA drew up a report on what it learned after a foreign government exploited a weak encryption scheme, championed by the US spying agency, in Juniper firewall software.

However, curiously enough, the NSA has been unable to find a copy of that report.

On Wednesday, Reuters reporter Joseph Menn published an account of US Senator Ron Wyden’s efforts to determine whether the NSA is still in the business of placing backdoors in US technology products.

Wyden (D-OR) opposes such efforts because, as the Juniper incident demonstrates, they can backfire, thereby harming national security, and because they diminish the appeal of American-made tech products.

But Wyden’s inquiries, as a member of the Senate Intelligence Committee, have been stymied by lack of cooperation from the spy agency and the private sector. In June, Wyden and various colleagues sent a letter to Juniper CEO Rami Rahim asking about “several likely backdoors in its NetScreen line of firewalls.”

Juniper acknowledged in 2015 that “unauthorized code” had been found in ScreenOS, which powers its NetScreen firewalls. It’s been suggested that the code was in place since around 2008.

The Reuters report, citing a previously undisclosed statement to Congress from Juniper, claims that the networking biz acknowledged that “an unnamed national government had converted the mechanism first created by the NSA.”

Wyden staffers in 2018 were told by the NSA that a “lessons learned” report about the incident had been written. But Wyden spokesperson Keith Chu told Reuters that the NSA now claims it can’t find the file. Wyden’s office did not immediately respond to a request for comment.

The reason this malicious code was able to decrypt ScreenOS VPN connections has been attributed to Juniper’s “decision to use the NSA-designed Dual EC Pseudorandom Number Generator.”

[…]

After Snowden’s disclosures about the extent of US surveillance operations in 2013, the NSA is said to have revised its policies for compromising commercial products. Wyden and other lawmakers have tried to learn more about these policies but they’ve been stonewalled, according to Reuters.

[…]

Source: NSA: We’ve learned our lesson after foreign spies used one of our crypto backdoors – but we can’t say how exactly • The Register

And this is why you don’t put out insecure security products, which is exactly what products with a backdoor are. Here’s looking at you, UK and Australia and all the other countries trying to force insecure products on us.

Researchers develop new atomic layer deposition process

A new way to deposit thin layers of atoms as a coating onto a substrate material at near room temperatures has been invented at The University of Alabama in Huntsville (UAH), a part of the University of Alabama System.

UAH postdoctoral research associate Dr. Moonhyung Jang got the idea to use an ultrasonic atomization technology to evaporate chemicals used in atomic layer deposition (ALD) while shopping for a home humidifier.

Dr. Jang works in the laboratory of Dr. Yu Lei, an associate professor in the Department of Chemical Engineering. The pair have published a paper on their invention that has been selected as an editor’s pick in the Journal of Vacuum Science & Technology A.

“ALD is a three-dimensional thin film deposition technique that plays an important role in microelectronics manufacturing, in producing items such as central processing units, memory and hard drives,” says Dr. Lei.

Each ALD cycle deposits a layer a few atoms deep. An ALD process repeats the deposition cycle hundreds or thousands of times. The uniformity of the thin films relies on a surface self-limiting reaction between the chemical vapor and the substrates.
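The cycle arithmetic implied here is simple to sketch: because each self-limiting cycle adds a roughly fixed increment, total film thickness scales linearly with the cycle count. The 0.1 nm growth-per-cycle figure below is an illustrative order-of-magnitude assumption (“a few atoms deep”), not a value reported by the UAH group.

```python
# Back-of-the-envelope ALD film thickness. Growth-per-cycle (GPC) of
# 0.1 nm is an assumed, typical value for illustration only.

GROWTH_PER_CYCLE_NM = 0.1  # assumed GPC, roughly one atomic layer

def film_thickness_nm(cycles: int, gpc_nm: float = GROWTH_PER_CYCLE_NM) -> float:
    """Total thickness after `cycles` self-limiting deposition cycles."""
    return cycles * gpc_nm

print(film_thickness_nm(500))   # ~50 nm after 500 cycles
print(film_thickness_nm(2000))  # ~200 nm after 2,000 cycles
```

The linearity is the point: hitting a target thickness is a matter of counting cycles, which is why ALD gives such fine control over nanometer features.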

“ALD offers exceptional control of nanometer features while depositing materials uniformly on large silicon wafers for high volume manufacturing,” Dr. Lei says. “It is a key technique to produce powerful and small smart devices.”

[…]

“In the past, many reactive chemicals were considered not suitable for ALD because of their low vapor pressure and because they are thermally unstable,” says Dr. Lei. “Our research found that the ultrasonic atomizer technique enabled evaporating the reactive chemicals at as low as room temperature.”

The UAH scientists’ ultrasound invention makes it possible to use a wide range of reactive chemicals that are thermally unstable and not suitable for direct heating.

“Ultrasonic atomization, as developed by our research group, supplies low vapor pressure precursors because the evaporation of precursors was made through ultrasonic vibrating of the module,” Dr. Lei says.

“Like the household humidifier, ultrasonic atomization generates a mist consisting of saturated vapor and micro-sized droplets,” he says. “The micro-sized droplets continuously evaporate when the mist is delivered to the substrates by a carrier gas.”

The process uses a piezo-electric ultrasonic transducer placed in a liquid chemical precursor. Once started, the transducer starts to vibrate a few hundred thousand times per second and generates a mist of the chemical precursor. The small liquid droplets in the mist are quickly evaporated in the gas manifold under vacuum and mild heat treatment, leaving behind an even coat of the deposition material.

Source: Researchers develop new atomic layer deposition process

Water on the Moon: Research unveils its type and abundance – boosting exploration plans

“Water” has since been detected inside the minerals in lunar rocks. Water ice has also been discovered to be mixed in with lunar dust grains in cold, permanently shadowed regions near the lunar poles.

But scientists haven’t been sure how much of this water is present as “molecular water”—made up of two parts hydrogen and one part oxygen (H2O). Now two new studies published in Nature Astronomy provide an answer, while also giving an idea of how and where to extract it.

Source: Water on the Moon: Research unveils its type and abundance – boosting exploration plans

Palo Alto Networks threatens to sue security startup for comparison review, says it breaks software EULA. 1 EULA? 2 WTF?

Palo Alto Networks has threatened a startup with legal action after the smaller biz published a comparison review of one of its products.

Israel-based Orca Security received a cease-and-desist letter from a lawyer representing Palo Alto after Orca uploaded a series of online videos reviewing one of Palo Alto’s products and comparing it to its own. Orca sees itself as a competitor of Palo Alto Networks (PAN).

“What we expected is that others will also create such materials … but instead we received a letter from Palo Alto’s lawyers claiming we were not allowed to do that,” Orca chief exec Avi Shua told The Register this week. “We believe these are empty legal threats.”

In a note on its website, Orca lamented at length the “outrageous” behavior of PAN, as well as posting a copy of the lawyer’s letter for world-plus-dog to read. That letter claimed Orca infringed PAN’s trademarks by using its name and logo in the review as well as breaching non-review clauses in the End-User License Agreement (EULA) of PAN’s product.

As such, the lawyer demanded the removal of the comparison material, and that the startup stop using PAN’s logo and name. We note the videos are still online, hosted by YouTube.

“It’s outrageous that the world’s largest cybersecurity vendor, its products being used by over 65,000 organizations according to its website, believes that its users aren’t entitled to share any benchmark or performance comparison of its products,” said Orca.

The lawyer’s letter [PDF] claimed Orca violated PAN’s EULA fine-print, something deputy general counsel Melinda Thompson described in her missive as “a clear breach” of terms “prohibiting an end user from disclosing, publishing or otherwise making publicly available any benchmark, performance or comparison tests… run on Palo Alto Networks products, in whole or in part.”

Shua told The Register Orca tried to give its rival a fair crack of the whip: “Even if we tried to be objective, we would have some biases. But we did try to do it as objectively as possible, by showing it to users: creating labs, screenshots, and showing how it looks like.” The fairness of the review, we note, is not what is at issue here: PAN forbids any kind of benchmarking and comparison of its gear.

Palo Alto Networks declined to comment when contacted by The Register.

Source: Palo Alto Networks threatens to sue security startup for comparison review, says it breaks software EULA • The Register

1 Who reads EULAs anyway? Are they in any way, shape or form defensible apart from maybe some ant fucker friendless lawyers?

2 Is PAN so very worried about the poor quality of their product that they feel they want to kill any and all benchmarks / comparisons?

Twitch Suddenly Mass-Deletes Thousands of Videos, Citing Music Copyright Claims – yes, copyright really doesn’t provide for innovation at all

“It’s finally happening: Twitch is taking action against copyrighted music — long a norm among streamers — in response to music industry pressure,” reports Kotaku.

But the Verge reports “there’s some funny stuff going on here.” First, Twitch is telling streamers that some of their content has been identified as violating copyright and that instead of letting streamers file counterclaims, it’s deleting the content; second, the company is telling streamers it’s giving them warnings, as opposed to outright copyright strikes…

Weirdly Twitch decided to bulk delete infringing material instead of allowing streamers to archive their content or submit counterclaims. To me, that suggests that there are tons of infringements, and that Twitch needed to act very quickly and/or face a lawsuit it wouldn’t be able to win over its adherence to the safe harbor provision of the DMCA.
The email Twitch sent to its users “encourages them to delete additional content — up to and including using a new tool to unilaterally delete all previous clips,” reports Kotaku. One business streamer complains that it’s “insane” that Twitch basically informs them “that there is more content in violation despite having no identification system to find out what it is. Their solution to DMCA is for creators to delete their life’s work. This is pure, gross negligence.”

Or, as esports consultant Rod “Slasher” Breslau puts it, “It is absolutely insane that record labels have put Twitch in a position to force streamers to delete their entire life’s work, for some 10+ years of memories, and that Twitch has been incapable of preventing or aiding streamers for this situation. a total failure all around.”

Twitch’s response? “It is crucial that we protect the rights of songwriters, artists and other music industry partners. We continue to develop tools and resources to further educate our creators and empower them with more control over their content while partnering with industry-recognized vendors in the copyright space to help us achieve these goals.”

Source: Twitch Suddenly Mass-Deletes Thousands of Videos, Citing Music Copyright Claims – Slashdot

Of course, the money raised by these music companies doesn’t really go to the artists much – it’s basically swallowed up by the music companies themselves.

Samsung, Stanford make a 10,000PPI display that could lead to ‘flawless’ VR

Ask VR fans about their gripes and they’ll likely mention the “screen door” effect, or the gaps between pixels that you notice when looking at a display so close to your eyes. That annoyance might disappear entirely if Samsung and Stanford University have their way. They’ve developed (via IEEE Spectrum) OLED technology that supports resolutions up to 10,000 pixels per inch — well above what you see in virtually any existing display, let alone what you’d find in a modern VR headset like the Oculus Quest 2.

The new OLED tech uses films to emit white light between reflective layers, one silver and another made of reflective metal with nano-sized corrugations. This “optical metasurface” changes the reflective properties and allows specific colors to resonate through pixels. The design allows for much higher pixel densities than you see in the RGB OLEDs on phones, but doesn’t hurt brightness to the degree you see with white OLEDs in some TVs.

This would be ideal for VR and AR, creating a virtually ‘flawless’ image where you can’t see the screen door effect or even individual pixels. This might take years to arrive when it would require much more computing power, but OLED tech would no longer be an obstacle.

It’s also more practical than you might think. Samsung is already working on a “full-size” display using the 10,000PPI tech, and the design of the corrugations makes large-scale manufacturing viable. It may just be a question of when and where you see this OLED rather than “if.”

Source: Samsung, Stanford make a 10,000PPI display that could lead to ‘flawless’ VR | Engadget

About 3% of Starlink satellites have failed so far – that’s 360 potential collisions now and 1,260 once SL is up

To date, the company has launched over 800 satellites and (as of this summer) is producing them at a rate of about 120 a month. There are even plans to have a constellation of 42,000 satellites in orbit before the decade is out.

However, there have been some problems along the way, as well. Aside from the usual concerns about light pollution and radio frequency interference (RFI), there is also the rate of failure these satellites have experienced. Specifically, about 3% of its satellites have proven to be unresponsive and are no longer maneuvering in orbit, which could prove hazardous to other satellites and spacecraft.

In order to prevent collisions in orbit, SpaceX equips its satellites with krypton Hall-effect thrusters (ion engines) to raise their orbit, maneuver in space and deorbit at the end of their lives. However, according to two recent notices SpaceX issued to the Federal Communications Commission (FCC) over the summer (mid-May and late June), several of their satellites have lost maneuvering capability since they were deployed.

Unfortunately, the company did not provide enough information to indicate which of their satellites were affected. For this reason, astrophysicist Jonathan McDowell of the Harvard-Smithsonian Center for Astrophysics (CfA) and the Chandra X-ray Center presented his own analysis of the satellites’ orbital behavior to suggest which satellites have failed.

The analysis was posted on McDowell’s website (Jonathan’s Space Report), where he combined SpaceX’s own data with U.S. government sources. From this, he determined that about 3% of satellites in the constellation have failed because they are no longer responding to commands. Naturally, some level of attrition is inevitable, and 3% is relatively low as failure rates go.

But every satellite that is incapable of maneuvering due to problems with its communications or its propulsion system creates a collision hazard for other satellites and spacecraft. As McDowell told Business Insider:

Artist’s impression of the orbital debris problem. Credit: UC3M

“I would say their failure rate is not egregious. It’s not worse than anybody else’s failure rates. The concern is that even a normal failure rate in such a huge constellation is going to end up with a lot of bad space junk.”

Kessler syndrome

Named after NASA scientist Donald J. Kessler, who first proposed it in 1978, Kessler syndrome refers to the threat posed by collisions in orbit. These lead to catastrophic breakups that create more debris that will lead to further collisions and breakups, and so on. When one takes into account rates of failure and SpaceX’s long-term plans for a “megaconstellation,” this syndrome naturally rears its ugly head.

Not long ago, SpaceX secured permission from the Federal Communications Commission (FCC) to deploy about 12,000 Starlink satellites to orbits ranging from 328 km to 580 km (200 to 360 mi). However, more recent filings with the International Telecommunications Union (ITU) show that the company hopes to create a megaconstellation of as many as 42,000 satellites.

In this case, a 3% failure rate works out to 360 and 1,260 satellites (respectively), each around 250 kg (550 lbs), becoming defunct over time. As of February 2020, according to the ESA’s Space Debris Office (SDO), there are currently 5,500 satellites in orbit around Earth—around 2,300 of which are still operational. That means (employing naked math) that a full Starlink megaconstellation would increase the number of non-functioning satellites in orbit by 11% to 40%.
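That naked math is easy to reproduce. The ~3,200 defunct-satellite baseline below is simply the 5,500 tracked satellites minus the roughly 2,300 operational ones quoted from ESA’s Space Debris Office.

```python
# Reproduce the article's arithmetic: a 3% failure rate applied to the
# 12,000-satellite FCC authorization and the 42,000-satellite ITU filing,
# compared against satellites already defunct in orbit.

FAILURE_RATE = 0.03
DEFUNCT_NOW = 5500 - 2300  # tracked minus operational, per ESA's SDO

for constellation in (12_000, 42_000):
    failed = constellation * FAILURE_RATE
    increase = failed / DEFUNCT_NOW
    print(f"{constellation} satellites: ~{failed:.0f} failed, "
          f"+{increase:.0%} more dead hardware in orbit")
```

That yields roughly 360 failed satellites (an ~11% increase in dead hardware) for the authorized constellation and roughly 1,260 (~39%, i.e. close to the 40% quoted) for the full megaconstellation.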

The problem of debris and collisions looks even more threatening when you consider the amount of debris in orbit. Beyond non-functioning satellites, the SDO also estimates that there are currently 34,000 objects in orbit measuring more than 10 cm (~4 inches) in diameter, 900,000 objects between 1 cm to 10 cm (0.4 to 4 in), and 128 million objects between 1 mm to 1 cm.

Source: About 3% of Starlink satellites have failed so far

Well done yet again, Mr Elon Musk.

Oculus owners forced onto Facebook accounts, will have purchases wiped, device bricked, if they ever leave FB. Who would have guessed?

Oculus users, already fuming at Facebook chaining their VR headsets to their Facebook accounts, have been warned they could lose all their Oculus purchases and account information in future if they ever delete their profile on the social network.

The rule further binds the gaming company that Facebook bought in 2014 to the mothership, and comes just two months after Facebook decided that all new Oculus users require Facebook accounts to use their VR gizmos, and that all current Oculus users will need a Facebook account by 2023. Failure to do so may cause apps installed on the headsets to no longer work as expected.

The decision to cement together what many users see as largely unrelated activities – playing video games and social media posts – has led to a wave of anger among Oculus users, and a renewed online effort to jailbreak new Oculus headgear to bypass Facebook’s growing restrictions.

That outrage was fueled when Facebook initially said that if people attempted to connect more than one Oculus headset to a single Facebook account, something families in particular want to do as it avoids having to install the same app over and over, it would ban them from the service.

Facebook has since dropped that threat, and said it is working on allowing multiple devices and accounts to connect. But the control-freak instincts of the internet giant were yet again on full display, something noted by the man who first drew attention to Oculus’s new terms and conditions: Cix Liv, CEO of fitness gaming company Yur.

“My favorite line is ‘While I do completely understand your concerns, we do need to have you comply with the Facebook terms of service’ like Facebook thinks they are some authoritarian government,” he tweeted.

[…]

Source: Oculus owners told not only to get Facebook accounts, purchases will be wiped if they ever leave social network • The Register

LG’s rollable OLED TV goes on sale for $87,000

After years of teasing, LG is finally selling a rollable OLED TV. The RX-branded Signature OLED R launched in South Korea today, offering a 65-inch 4K display that tucks away into its base at the press of a button. Besides being able to hide completely, as LG has promised in CES previews, the TV has different settings (Full View, Line View and Zero View) for different situations.

Source: LG’s rollable OLED TV goes on sale for $87,000 | Engadget

The Department of Justice sues Google over antitrust concerns | Engadget

We all knew it was coming. Today, the US government’s Department of Justice filed an antitrust lawsuit against Google. The company, which is a part of Alphabet, is accused of having an unfair monopoly over search and search-related advertising. In addition, the department objects to the terms around Android, the most widely used mobile operating system, which force phone manufacturers to pre-load Google applications and set Google as the default search engine. Those terms stop rival search providers from gaining traction and, as a consequence, ensure that Google continues to make enormous amounts of cash via search-related advertising.

“Google pays billions of dollars each year to distributors—including popular-device manufacturers such as Apple, LG, Motorola, and Samsung; major U.S. wireless carriers such as AT&T, T-Mobile, and Verizon; and browser developers such as Mozilla, Opera, and UCWeb— to secure default status for its general search engine and, in many cases, to specifically prohibit Google’s counterparties from dealing with Google’s competitors,” the lawsuit filing reads.

[…]

Walker also argued that Google competes with platforms such as Twitter, Expedia and OpenTable, which let you search for news, flights and restaurant reservations respectively. “Every day, Americans choose to use all these services and thousands more,” he said.

Some of Google’s rivals feel differently. “We’re pleased the DOJ has taken this key step in holding Google accountable for the ways it has blocked competition, locked people into using its products, and achieved a market position so dominant they refuse to even talk about it out loud,” Gabriel Weinberg, CEO of search engine provider DuckDuckGo said in a Twitter thread. “While Google’s anti-competitive practices hurt companies like us, the negative impact on society and democracy wrought by their surveillance business model is far worse. People should be able to opt out in one click.”

As the Wall Street Journal explains, the Justice Department has been preparing to launch this case for over a year. “Over the course of the last 16 months, the Antitrust Division collected convincing evidence that Google no longer competes only on the merits but instead uses its monopoly power – and billions in monopoly profits – to lock up key pathways to search on mobile phones, browsers, and next generation devices, depriving rivals of distribution and scale,” the Department said in a statement today.

[…]

Today’s lawsuit is arguably the biggest antitrust move since the government’s case against Microsoft in 1998. Back then, the technology company was accused of using its Windows monopoly to push Microsoft-made software such as Internet Explorer. A judge eventually ordered Microsoft to break up into two separate companies. The technology giant appealed, however, and by the end of 2001 it had reached a settlement with the department. “Back then, Google claimed Microsoft’s practices were anticompetitive, and yet, now, Google deploys the same playbook to sustain its own monopolies,” the Justice Department argues in today’s lawsuit filing.

Source: The Department of Justice sues Google over antitrust concerns | Engadget

The timing of this is not coincidental: the DoJ was apparently pushed into filing before it was ready in order to look good ahead of the elections.

I have been talking about this since early 2019, and it’s great to see how much traction it has gained since then.


eBay makes a dedicated portal for officially refurbished gear

eBay is taking on Amazon Warehouse with a new destination called Certified Refurbished, selling used goods from brands like Lenovo, Microsoft and Makita. The idea is that you can buy second-hand products at significant discounts over new, but still get a two-year warranty (from Allstate), a money-back guarantee and 30-day “hassle-free” returns, along with new accessories, manuals and manufacturer-sealed packaging.

eBay’s Certified Refurbished has five priority categories: laptops, portable audio, power tools, small kitchen appliances and vacuums. It offers several brand exclusives, including De’Longhi, Dirt Devil, Hoover, Makita and Philips, along with inventory exclusives from Dewalt, iRobot and Skullcandy. It’s also selling products from participating brands including Dell, Acer, Bissell, Black & Decker, Cuisinart, KitchenAid, Lenovo, Microsoft, Miele and Sennheiser.

To make the cut, manufacturers must offer items in “pristine, like-new condition that has been professionally inspected, cleaned, and refurbished by the manufacturer, or a manufacturer-approved vendor,” according to eBay. Items must also come in new packaging with original or new accessories.

Source: eBay makes a dedicated portal for officially refurbished gear | Engadget

Climate change and flying: what share of global CO2 emissions come from aviation?

Flying is a highly controversial topic in climate debates. There are a few reasons for this.

The first is the disconnect between its role in our personal and collective carbon emissions. Air travel dominates frequent travellers’ individual contributions to climate change. Yet aviation overall accounts for only 2.5% of global carbon dioxide (CO2) emissions. This is because there are large inequalities in how much people fly – many do not, or cannot afford to, fly at all [best estimates put this figure at around 80% of the world population – we will look at this in more detail in an upcoming article].

The second is how aviation emissions are attributed to countries. CO2 emissions from domestic flights are counted in a country’s emission accounts. International flights are not – instead they are counted as their own category: ‘bunker fuels’. The fact that they don’t count towards the emissions of any country means there are few incentives for countries to reduce them.

It’s also important to note that unlike the most common greenhouse gases – carbon dioxide, methane or nitrous oxide – non-CO2 forcings from aviation are not included in the Paris Agreement. This means they could be easily overlooked – especially since international aviation is not counted within any country’s emissions inventories or targets.

How much of a role does aviation play in global emissions and climate change? In this article we take a look at the key numbers that are useful to know.

Global aviation (including domestic and international; passenger and freight) accounts for:

  • 1.9% of greenhouse gas emissions (which includes all greenhouse gases, not only CO2)
  • 2.5% of CO2 emissions
  • 3.5% of ‘radiative forcing’. Radiative forcing measures the difference between incoming energy and the energy radiated back to space. If more energy is absorbed than radiated, the atmosphere becomes warmer.

The latter two figures refer to 2018; the first refers to 2016, the latest year for which such data are available.


Aviation accounts for 2.5% of global CO2 emissions

As we will see later in this article, there are a number of processes by which aviation contributes to climate change. But the one that gets the most attention is its contribution via CO2 emissions. Most flights are powered by jet kerosene – although some partially run on biofuels – which is converted to CO2 when burned.

In a recent paper, researchers – David Lee and colleagues – reconstructed annual CO2 emissions from global aviation dating back to 1940. This was calculated based on fuel consumption data from the International Energy Agency (IEA), and earlier estimates from Robert Sausen and Ulrich Schumann (2000).

The time series of global emissions from aviation since 1940 is shown in the accompanying chart. In 2018, it’s estimated that global aviation – which includes both passenger and freight – emitted 1.04 billion tonnes of CO2.

This represented 2.5% of total CO2 emissions in 2018.

Aviation emissions have doubled since the mid-1980s. But they’ve been growing at a similar rate to total CO2 emissions, which means aviation’s share of global emissions has been relatively stable: in the range of 2% to 2.5%.
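The two figures quoted above are mutually consistent, which a quick calculation can confirm (a sanity check on the article’s numbers, not new data):

```python
# Back out the implied global CO2 total from the two figures quoted above.
aviation_2018 = 1.04e9   # tonnes of CO2 from global aviation in 2018
aviation_share = 0.025   # aviation's 2.5% share of total CO2 emissions

implied_global_total = aviation_2018 / aviation_share  # ~41.6 billion tonnes

# "Doubled since the mid-1980s" implies roughly this much back then:
mid_1980s_estimate = aviation_2018 / 2                 # ~0.52 billion tonnes
```

The implied global total of roughly 42 billion tonnes is in line with estimates of total CO2 emissions that include land-use change.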

Chart: global CO2 emissions from aviation since 1940.

Non-CO2 climate impacts mean aviation accounts for 3.5% of global warming

Aviation accounts for around 2.5% of global CO2 emissions, but its overall contribution to climate change is higher. This is because air travel does not only emit CO2: it affects the climate in a number of more complex ways.

As well as emitting CO2 from burning fuel, planes affect the concentration of other gases and pollutants in the atmosphere. They result in a short-term increase, but long-term decrease, in ozone (O3); a decrease in methane (CH4); and emissions of water vapour, soot, sulfur aerosols, and water contrails. Some of these impacts result in warming, while others induce a cooling effect. Overall, the warming effect is stronger.

David Lee et al. (2020) quantified the overall effect of aviation on global warming when all of these impacts were included. To do this they calculated the so-called ‘Radiative Forcing’. Radiative forcing measures the difference between incoming energy and the energy radiated back to space. If more energy is absorbed than radiated, the atmosphere becomes warmer.

In this chart we see their estimates for the radiative forcing of the different elements. When we combine them, aviation accounts for approximately 3.5% of net radiative forcing: that is, 3.5% of warming.

Although CO2 gets most of the attention, it accounts for less than half of this warming: two-thirds (66%) comes from non-CO2 forcings. Contrails – water vapor trails from aircraft exhausts – account for the largest share.

We don’t yet have the technologies to decarbonize air travel

Aviation’s contribution to climate change – 3.5% of warming, or 2.5% of CO2 emissions – is often less than people think. It’s currently a relatively small chunk of emissions compared to other sectors.

The key challenge is that it is particularly hard to decarbonize. We have solutions to reduce emissions for many of the largest emitters – such as power or road transport – and it’s now a matter of scaling them. We can deploy renewable and nuclear energy technologies, and transition to electric cars. But we don’t have proven solutions to tackle aviation yet.

There are some design concepts emerging – Airbus, for example, has announced plans to have the first zero-emission aircraft by 2035, using hydrogen fuel cells. Electric planes may be a viable concept, but they are likely to be limited to very small aircraft due to the weight and capacity limits of current battery technology.

Innovative solutions may be on the horizon, but they’re likely to be far in the distance.

Appendix: Efficiency improvements mean air traffic has increased more rapidly than emissions

Global emissions from aviation have increased a lot over the past half-century. However, air travel volumes increased even more rapidly.

Since 1950, aviation emissions have increased almost seven-fold; since 1960 they’ve tripled. Air traffic volume – here defined as revenue passenger kilometers (RPK) traveled – increased by orders of magnitude more: almost 300-fold since 1950, and 75-fold since 1960.

The much slower growth in emissions means aviation efficiency has seen massive improvements. In the chart we show both the increase in global airline traffic since 1950, and aviation efficiency, measured as the quantity of CO2 emitted per revenue passenger kilometer traveled. In 2018, approximately 125 grams of CO2 were emitted per RPK. In 1960, this was eleven-fold higher; in 1950 it was twenty-fold higher. Aviation has seen massive efficiency improvements over the past 50 years.

These improvements have come from several sources: improvements in the design and technology of aircraft; larger aircraft sizes (allowing for more passengers per flight); and an increase in how ‘full’ passenger flights are. This last metric is termed the ‘passenger load factor’. The passenger load factor measures the actual number of kilometers traveled by paying customers (RPK) as a percentage of the available seat kilometers (ASK) – the kilometers traveled if every plane was full. If every plane was full the passenger load factor would be 100%. If only three-quarters of the seats were filled, it would be 75%.

The global passenger load factor increased from 61% in 1950 to 82% in 2018.
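The two metrics above can be combined in a short worked example. The flight itself is hypothetical (the seat count and distance are made up for illustration); only the 82% load factor and the ~125 g CO2/RPK intensity come from the article:

```python
def passenger_load_factor(rpk: float, ask: float) -> float:
    """Revenue passenger kilometers as a share of available seat kilometers."""
    return rpk / ask

# Hypothetical flight: a 200-seat aircraft flying 1,000 km with 164 paying passengers.
ask = 200 * 1_000   # available seat kilometers (every seat, full distance)
rpk = 164 * 1_000   # revenue passenger kilometers (paying passengers only)

load = passenger_load_factor(rpk, ask)        # 0.82, the 2018 global average

# CO2 per passenger-kilometer, using the article's 2018 figure of ~125 g/RPK:
grams_per_rpk = 125
flight_co2_kg = grams_per_rpk * rpk / 1_000   # total CO2 for this example flight, in kg
```

At the 1950 intensity (twenty-fold higher), the same flight would have emitted twenty times as much CO2.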

Source: Climate change and flying: what share of global CO2 emissions come from aviation? – Our World in Data

When you tell Chrome to wipe private data about you, it spares two websites from the purge: Google.com, YouTube

Google exempts its own websites from Chrome’s automatic data-scrubbing feature, allowing the ads giant to potentially track you even when you’ve told it not to.

Programmer Jeff Johnson noticed the unusual behavior, and this month documented the issue with screenshots. In his assessment of the situation, he noted that if you set up Chrome, on desktop at least, to automatically delete all cookies and so-called site data when you quit the browser, it deletes it all as expected – except your site data for Google.com and YouTube.com.

While cookies are typically used to identify you and store some of your online preferences when visiting websites, site data is on another level: it includes, among other things, a storage database in which a site can store personal information about you, on your computer, that can be accessed again by the site the next time you visit. Thus, while your Google and YouTube cookies may be wiped by Chrome, their site data remains on your computer, and it could, in future, be used to identify you.

Johnson noted that after he configured Chrome to wipe all cookies and site data when the application closed, everything was cleared as expected for sites like apple.com. Yet, the main Google search site and video service YouTube were allowed to keep their site data, though the cookies were gone. If Google chooses at some point to stash the equivalent of your Google cookies in the Google.com site data storage, they could be retrieved next time you visit Google, and identify you, even though you thought you’d told Chrome not to let that happen.

Ultimately, it potentially allows Google, and only Google, to continue tracking Chrome users who opted for some more privacy; something that is enormously valuable to the internet goliath in delivering ads. Many users set Chrome to automatically delete cookies-and-site-data on exit for that reason – to prevent being stalked around the web – even though it often requires them to log back into websites the next time they visit due to their per-session cookies being wiped.

Yet Google appears to have granted itself an exception. The situation recalls a similar issue over location tracking, where Google continued to track people’s location through their apps even when users actively selected the option to prevent that. Google had put the real option to start location tracking under a different setting that didn’t even include the word “location.”

In this case, “Clear cookies and site data when you quit Chrome” doesn’t actually mean what it says, at least not for Google.

There is a workaround: you can manually add “Google.com” and “YouTube.com” within the browser to a list of “Sites that can never use cookies.” In that case, no information, not even site data, is saved from those sites, which is all in all a little confusing.

[…]

 

Source: When you tell Chrome to wipe private data about you, it spares two websites from the purge: Google.com, YouTube • The Register

Announcing: Graph-Native Machine Learning in Neo4j!

We’re delighted to announce you can now take advantage of graph-native machine learning (ML) inside of Neo4j! We’ve just released a preview of Neo4j’s Graph Data Science™ Library version 1.4, which includes graph embeddings and an ML model catalog.

Together, these enable you to create representations of your graph and make graph predictions – all within Neo4j.

[…]

Graph Embeddings

The graph embedding algorithms are the star of the show in this release.

These algorithms are used to transform the topology and features of your graph into fixed-length vectors (or embeddings) that uniquely represent each node.

Graph embeddings are powerful, because they preserve the key features of the graph while reducing dimensionality in a way that can be decoded. This means you can capture the complexity and structure of your graph and transform it for use in various ML predictions.
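The intuition can be sketched in a few lines of Python with NumPy. This is a toy illustration of the random-projection idea in the spirit of FastRP, not the GDS library’s actual implementation; the graph and dimensions are made up for illustration:

```python
import numpy as np

# Toy sketch of a random-projection node embedding (the intuition behind
# FastRP); this is NOT the Neo4j GDS implementation.
rng = np.random.default_rng(42)

# A small undirected graph of 5 nodes as an adjacency matrix.
A = np.array([
    [0, 1, 1, 0, 0],
    [1, 0, 1, 0, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 1, 0, 1],
    [0, 0, 0, 1, 0],
], dtype=float)

n_nodes, dim = A.shape[0], 8

# 1. Start each node from a random fixed-length vector (the "projection").
embedding = rng.normal(size=(n_nodes, dim))

# 2. Repeatedly average each node's vector with its neighbors', so nodes
#    that share graph structure end up with similar embeddings.
degrees = A.sum(axis=1, keepdims=True)
for _ in range(2):
    embedding = (A @ embedding) / degrees

# Each node is now represented by a fixed-length vector of `dim` floats
# that encodes its position in the graph's topology.
```

In Neo4j itself these embeddings are produced by calling the library’s procedures on a named graph; the sketch above only shows why the output is one fixed-length vector per node.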

 

Graph embeddings capture the nuances of graphs in a way that can be used to make predictions or lower-dimensional visualizations.

In this release, we are offering three embedding options that learn the graph topology and, in some cases, node properties to calculate more accurate representations:

Node2Vec:

    • This is probably the most well-known graph embedding algorithm. It uses random walks to sample a graph, and a neural network to learn the best representation of each node.

FastRP:

    • A more recent graph embedding algorithm that uses linear algebra to project a graph into lower dimensional space. In GDS 1.4, we’ve extended the original implementation to support node features and directionality as well.
    • FastRP is up to 75,000 times faster than Node2Vec, while providing equivalent accuracy!

GraphSAGE:

    • This is an embedding technique using inductive representation learning on graphs, via graph convolutional neural networks, where the graph is sampled to learn a function that can predict embeddings (rather than learning embeddings directly). This means you can learn on a subset of your graph and use that representative function for new data and make continuous predictions as your graph updates. (Wow!)
    • If you’d like a deeper dive into how it works, check out the GraphSAGE session from the NODES event.

 

 

Graph embeddings available in the Neo4j Graph Data Science Library v1.4. The caution marks indicate that, while directions are supported, our internal benchmarks don’t show performance improvements.

Graph ML Model Catalog

GraphSAGE trains a model to predict node embeddings for unseen parts of the graph, or new data as mentioned above.

To really capitalize on what GraphSAGE can do, we needed to add a catalog to be able to store and reference these predictive models. This model catalog lives in the Neo4j analytics workspace and contains versioning information (what data was this trained on?), time stamps and, of course, the model names.

When you want to use a model, you can provide the name of the model to GraphSAGE, along with the named graph you want to apply it to.

 

GraphSAGE ML Models are stored in the Neo4j analytics workspace.

[…]

Source: Announcing: Graph-Native Machine Learning in Neo4j!

Amazon Stops Pretending and Launches Anticompetitive New Panel Program

After spending years promising Congress that the data it collected from third-party sellers wasn’t used to beef up its private-label products, today Amazon decided to roll out a product meant to do exactly that. The Amazon Shopper Panel, as it’s called, promises to pay Amazon customers who offer intel to the ecommerce giant about where they shop when they’re not shopping on Amazon dot com.

Here’s how the Shopper Panel works: After getting an IRL or e-receipt from any business that isn’t owned by Amazon (so Whole Foods or Four Star locations are not eligible), panelists can either submit a picture of that receipt through the app, or in the case of digital copies, forward their emailed details to a panel-specific email address. According to the Panel website, folks that upload “at least” 10 receipts per month can either cash that in for $10 in Amazon credit or $10 donated to their charity of choice. Along with that baseline payout, the app will also dole out additional earnings to panelists who answer the occasional survey about certain brands or products within the app.

Not every receipt counts toward this program. Per Amazon, receipts from grocery stores, drug stores, restaurants, and movie theaters—along with just about any other “retailer” or “entertainment outlet”—are fair game. Receipts from casinos, gun stores, transit fare, tuition or apartment rentals aren’t.

While the program is invite-only for now, any curious Amazon customer based in the U.S. can download the Panel app from the iOS App Store or the Google Play Store if they want to put their name on the waitlist.

[…]

Under the Amazon Panel site’s “Privacy” tab, the company notes that any receipts that you share will go toward “[helping] brands offer better products and [making] ads more relevant on Amazon.” The company also notes any data gleaned from these receipts or surveys might also be used to “improve the product selection on Amazon.com and affiliate stores such as Whole Foods Market,” and to “improve the content offered through Amazon services such as Prime Video.”

That’s why this rollout is a particularly gutsy move for Amazon to take right now. Recent months have seen the company come under an increasing barrage of regulatory fire from authorities both in the U.S. and in Europe over a scandal that largely revolved around tracking consumers’ purchase data—not unlike the data pulled from the average receipt—on its platform. This past spring, an investigation from the Wall Street Journal revealed that Amazon had spent years surveilling the sales made by the platform’s third-party sellers specifically to create its own competing products under the Amazon private label. This story came out barely a year after Amazon’s associate general counsel, Nate Sutton, told Congress that the company didn’t use “individual seller data” to do just that.

[…]

Source: Amazon’s New Panel Program Is An Anticompetitive Nightmare

Thought the FBI were the only ones able to unlock encrypted phones? Pretty much every US cop can get the job done – and does

Never mind the Feds. American police forces routinely “circumvent most security features” in smartphones to extract mountains of personal information, according to a report that details the massive, ubiquitous cracking of devices by cops.

Two years of public records requests by Upturn, a Washington DC non-profit, has revealed that every one of the United States’ largest 50 police departments, as well as half of the largest sheriff’s offices and two-thirds of the largest prosecuting attorney’s offices, regularly use specialist hardware and software to access the contents of suspects’ handhelds. There isn’t a state in the Union that hasn’t got some advanced phone-cracking capabilities.

The report concludes that, far from modern phones being a bastion of privacy and security, they are in fact routinely rifled through for trivial crimes without a warrant in sight. In one case, the cops confiscated and searched the phones of two men who were caught arguing over a $70 debt in a McDonald’s.

In another, officers witnessed “suspicious behavior” in a Whole Foods grocery store parking lot and claimed to have smelt “the odor of marijuana” coming from a car. The car was stopped and searched, and the driver’s phone was seized and searched for “further evidence of the nature of the suspected controlled substance exchange.”

A third example given saw police officers shoot and kill a man after he “ran from the driver’s side of the vehicle” during a traffic stop. They apparently discovered a small orange prescription pill container next to the victim, and tested the pills, which contained acetaminophen and fentanyl. They also discovered a phone in the empty car, and searched it for evidence related to “counterfeit Oxycodone” and “evidence relating to… motives for fleeing from the police.”

The report gives numerous other examples of phones taken from their owners and searched for evidence without a warrant, many in cases where the value of the information was negligible: cases involving graffiti, shoplifting, marijuana possession, prostitution, vandalism, car crashes, parole violations, petty theft, and public intoxication.

Not what you imagined

That is a completely different picture to the one, we imagine, most Americans assumed, particularly given the high legal protections afforded smartphones in recent high-profile court cases.

In 2018, the Supreme Court ruled that the government needs a warrant to access its citizens’ cellphone location data and talked extensively about a citizen’s expectation of privacy limiting “official intrusion” when it comes to smartphones.

In 2014, the court decided a warrant was required to search a mobile phone, and that the “reasonable expectation of privacy” that people have in their “physical movements” should extend to records stored by third parties. But the reality on the ground is that those grand words mean nothing if the cops decide they want to look through your phone.

The report was based on reports from 44 law enforcement agencies across the US and covered 50,000 extractions of data from cellphones between 2015 and 2019, a figure that Upturn notes “represents a severe undercount” of the actual number of cellphone extractions.

[…]

They include banning the use of “consent searches,” where the police ask the owner if they can search their phone and then require no further approval to go through a device. “Courts pretend that ‘consent searches’ are voluntary, when they are effectively coerced,” the report argues, noting that most people are probably unaware that, by agreeing, they can have their phone’s entire contents downloaded and perused at will later on.

It also reckons that the “plain view” argument is legally untenable for phones. That doctrine allows the police to search items visible at the scene of a crime, but people carry their phones with them as a rule, and while the device itself may be in plain view, its contents are not.

The report also argues for more extensive audit logs of phone searches so there is a degree of accountability, particularly if evidence turned up is later used in court. And it argues for better and clearer data deletion rules, as well as more reporting requirements around phone searches by law enforcement.

It concludes: “For too long, public debate and discussion regarding these tools has been abstracted to the rarest and most sensational cases in which law enforcement cannot gain access to cellphone data. We hope that this report will help recenter the conversation regarding law enforcement’s use of mobile device forensic tools to the on-the-ground reality of cellphone searches today in the United States.”

Source: Thought the FBI were the only ones able to unlock encrypted phones? Pretty much every US cop can get the job done • The Register

Do algorithms make us even more radical? Filter bubbles and echo chambers

‘Technology ensures that we’re all served our own personalised news cycle. As a result, we only get to hear the opinions that correspond to our own. The result is polarisation’. Or so the oft-heard theory goes. But in practice, it seems this isn’t really true, or at least not for the average Dutch person. However, according to communication scientist Judith Möller, the influence of filter bubbles, as they are known, could indeed be stronger when it comes to groups with radical opinions.

Judith Möller: ‘My theory is that filter bubbles do indeed exist, but that we’re looking for them in the wrong place.’

First of all, we need to differentiate between the so-called echo chamber and the filter bubble. As an individual, you voluntarily take your place in an echo chamber (such as in the form of a forum, or a Facebook or WhatsApp group), meaning you surround yourself with people who tend towards the same opinion as yourself. ‘Call it the modern form of compartmentalisation’, says communication scientist Judith Möller, who recently received a Veni grant for her research. ‘People have always had the tendency to surround themselves with like-minded people, and that’s no different on social media.’

Various news sources in parallel prevent a filter bubble

In the filter bubble, you are presented only with news and opinions that match you as an individual, on the basis of algorithms and without you being aware of this process. It’s said that this bubble is leading to the polarisation of society. Everyone is constantly exposed to ‘their own truth’, while other news gets filtered out. But Möller says that there is no evidence to support this, at least in the Netherlands. ‘We use various news sources in parallel – meaning not only Facebook and Twitter, but also radio, television and newspapers, so we run little risk of ending up in a filter bubble. Besides that: the amount of “news” on an average Facebook timeline is less than 5%. Moreover, it turns out that many people on social media are actually more likely to encounter news that they normally wouldn’t read or search out, so that’s almost a bubble in reverse.’

Bubbles at the fringes of the opinion spectrum

Nonetheless, a great deal of money is being invested in the use of algorithms and artificial intelligence, such as during election periods. Möller: ‘So there must be something in it. My theory is that filter bubbles do indeed exist, but that we’re looking for them in the wrong place. We shouldn’t look at the mainstream, but at groups with radical and/or divergent opinions who don’t fit into the “centre”. This is where we see the formation of ‘fringe bubbles’, as I call them – filters at the edges of the opinion spectrum.’

People with fringe opinions can suddenly become very visible

From spiral of silence to spiral of noise

As one example, the researcher cites the anti-vaccination movement. ‘Previously, this group was confronted with the “spiral of silence”: if you said in public, for instance to friends or family, that you were sceptical about vaccination, you wouldn’t get a positive response. And so, you’d keep quiet about it. But this group found each other on social media, and as a consequence of filter technology, the proponents of this view encountered a “spiral of noise”: suddenly it seems as if a huge number of people agree with you.’

The news value of radical and divergent opinions

And so, it can happen that people with fringe, radical or divergent opinions suddenly become very vocal and visible. ‘Then they become newsworthy, they appear in normal news media and hence are able to address a wider public. The fringe bubble shifts towards the centre. This has been the case with the anti-vaccination movement, the climate sceptics and the yellow vests, but it also happened with the group who opposed the Dutch Intelligence and Security Services Act – no-one was interested initially, but in the end, it became major news and it even resulted in a referendum.’

Consequences can be both positive and negative

‘In my research I aim to go in search of divergent opinions like these, and then I’ll try to determine how algorithms influence radical groups, to what extent filter bubbles exist and why groups with radical opinions ultimately manage, or don’t manage, to appear in news media.’
The consequences of these processes can be both positive and negative, believes Möller. ‘Some people claim that this attention leads people from the “centre” to feel attracted to the fringe areas of society, in turn leading to more extreme opinions and a reduction in social cohesion, which is certainly possible. On the other hand, this process also brings advantages: after all, in a democracy we also need to listen to minority opinions.’

Source: Do algorithms make us even more radical? – University of Amsterdam

To find out how researchers track the filter bubble, read about fbtrex here (pdf)

Personalisation algorithms and elections: Breaking free of the filter bubble

In recent years, we have been witnessing a fundamental shift in how news and current affairs are disseminated and mediated. Due to the exponential increase in available content online and technological developments in the field of recommendation systems, more and more citizens are informing themselves through customised and curated sources, while turning away from mass-mediated information sources like TV news and newspapers. Algorithmic recommendation systems give news users tools to navigate the information overload and identify important and relevant information. They do so by performing a task that was once a key part of the journalistic profession: keeping the gates. In a way, news recommendation algorithms can create highly individualised gates, through which only the information and news that serve the user best will fit. In theory, this is a great achievement that can make news exposure more efficient and interesting. In practice, there are many pitfalls when the power to select what we hear from the news shifts from professional editorial boards, which select the news according to professional standards, to opaque algorithms governed by their own logic, the logic of advertisers, or consumers’ personal preferences.

Beyond the filter bubble: Concepts, myths, evidence and issues for future debates

Filter bubbles in the Netherlands?

Some fear that personalised communication can lead to information cocoons or filter bubbles. For instance, a personalised news website could give more prominence to conservative or liberal media items, based on the (assumed) political interests of the user. As a result, users may encounter only a limited range of political ideas. We synthesise empirical research on the extent and effects of self-selected personalisation, where people actively choose which content they receive, and pre-selected personalisation, where algorithms personalise content for users without any deliberate user choice. We conclude that at present there is little empirical evidence that warrants any worries about filter bubbles.

Should We Worry about Filter Bubbles?

Pop the filter bubble: Exposure Diversity as a Design Principle for Search and Social Media

Michael Bang Peterson and a few others from the US have some interesting counterpoints to this.

Source: New Research Shows Social Media Doesn’t Turn People Into Assholes (They Already Were), And Everyone’s Wrong About Echo Chambers

Trying to change the Dutch home copy tax

The representatives of producers and importers of consumer electronics (NLdigital, NLconnect, TechniekNederland, FIAR CE and STOBI) have asked Minister Dekker to take a closer look at the current home copying levy system. The trade associations believe that the protection of copyright through this regulation is increasingly out of step with technical and economic reality.

NLdigital asks Minister Dekker to evaluate the current regulation for private copying compensation as soon as possible and to revise it in time for the decision-making on tariffs for the period 2023-2026. The evaluation should address whether the European legislation still matches consumers’ current media use, whether the total revenues from the private copying levy remain in proportion to the actual damage suffered, and what the impact of new technologies and forms of distribution is, now and in the near future.

[…]
Technology has eliminated much of the need for home copying. Making music or series available offline through online music and video services is not the same as an old-fashioned copy: the offline music or video is part of the contract with the provider.

In the worst case, you pay three times: once for the subscription to your streaming service, once through the levy on the smartphone you stream with, and again for the extra storage because that smartphone automatically makes a backup in the cloud. That has nothing to do with damage suffered. The scheme appears to be a way of continuing to collect money while the damage is no longer as great as it used to be, the organisations say.

[…]
Under the leadership of NLconnect, the parties obliged to pay have turned against this proposal. After all, legally this function does not involve a private copy: the package provider makes a master copy that is streamed to various subscribers. Rights holders can already exercise their prohibition right over this functionality, so a private copying levy would mean the consumer pays twice. The parties obliged to pay conclude that SONT has rightly excluded the nPVR functionality from the decision.


Source: Renewed uproar over the private copying levy (Weer ophef over Thuiskopievergoeding) – Emerce