RAWGraphs releases version 2

RAW Graphs is an open source data visualization framework built with the goal of making the visual representation of complex data easy for everyone.

Primarily conceived as a tool for designers and vis geeks, RAW Graphs aims at providing a missing link between spreadsheet applications (e.g. Microsoft Excel, Apple Numbers, OpenRefine) and vector graphics editors (e.g. Adobe Illustrator, Inkscape, Sketch).

The project, led and maintained by the DensityDesign Research Lab (Politecnico di Milano), was released publicly in 2013 and is regarded by many as one of the most important tools in the field of data visualization.

Source: About | RAWGraphs

Hackers exploit websites to give them excellent SEO before deploying malware

According to Sophos, the so-called search engine “deoptimization” method includes both SEO tricks and the abuse of human psychology to push websites that have been compromised up Google’s rankings.

[…]

In a blog post on Monday, the cybersecurity team said the technique, dubbed “Gootloader,” involves deployment of the infection framework for the Gootkit Remote Access Trojan (RAT) which also delivers a variety of other malware payloads.

The use of SEO as a technique to deploy Gootkit RAT is not a small operation. The researchers estimate that a network of servers — 400, if not more — must be maintained at any given time for success.

[…]

Websites compromised by Gootloader are manipulated to answer specific search queries. Fake message boards are a constant theme in hacked websites observed by Sophos, in which “subtle” modifications are made to “rewrite how the contents of the website are presented to certain visitors.”

“If the right conditions are met (and there have been no previous visits to the website from the visitor’s IP address), the malicious code running server-side redraws the page to give the visitor the appearance that they have stumbled into a message board or blog comments area in which people are discussing precisely the same topic,” Sophos says.

If the attackers’ criteria aren’t met, the browser displays a seemingly normal web page that eventually dissolves into garbage text.
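
As a rough illustration of what that conditional rewriting implies, here is a minimal sketch (Python, using the requests library) of how a scanner might probe a page for this kind of cloaking by fetching it twice with different headers. The URL and threshold are made up, and the IP-novelty condition Sophos describes can’t be reproduced this way, so this only catches referrer-based switching:

    # Hypothetical probe for referrer-based cloaking; not Sophos' tooling.
    import requests

    def probe_for_cloaking(url: str) -> bool:
        """Fetch the same URL twice and compare what the server returns."""
        plain = requests.get(url, timeout=10)
        as_searcher = requests.get(
            url,
            timeout=10,
            # Pretend we clicked through from a Google search result.
            headers={"Referer": "https://www.google.com/"},
        )
        # A large difference between the two bodies suggests the server is
        # rewriting content per-visitor, as described above.
        difference = abs(len(plain.text) - len(as_searcher.text))
        return difference > 0.5 * max(len(plain.text), 1)

    print(probe_for_cloaking("https://example.com/some-forum-thread"))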

[…]

Victims who click on the direct download links will receive a .zip archive file, named in relation to the search term, that contains a .js file.

The .js file executes, runs in memory, and obfuscated code is then decrypted to call other payloads.

According to Sophos, the technique is being used to spread the Gootkit banking Trojan, Kronos, Cobalt Strike, and REvil ransomware, among other malware variants, in South Korea, Germany, France, and the United States.

“At several points, it’s possible for end-users to avoid the infection, if they recognize the signs,” the researchers say. “The problem is that even trained people can easily be fooled by the chain of social engineering tricks Gootloader’s creators use. Script blockers like NoScript for Firefox could help a cautious web surfer remain safe by preventing the initial replacement of the hacked web page from happening, but not everyone uses those tools.”

[…]

Source: Hackers exploit websites to give them excellent SEO before deploying malware | ZDNet

ICANN Refuses to Accredit Pirate Bay Founder Peter Sunde Due to His ‘Background’

Peter Sunde was one of the key people behind The Pirate Bay in the early years, a role for which he was eventually convicted in Sweden.

While Sunde cut his ties with the notorious torrent site many years ago, he remains an active and vocal personality on the Internet.

[…]

Sunde is also involved with the domain registrar Sarek, which caters to technology enthusiasts and people who are interested in a fair and balanced Internet, promising low prices for domain registrations.

As a business, everything was going well for Sarek. The company made several deals with domain registries to offer cheap domains, but one element was missing: to resell the most popular domains, including .com and .org, it has to be accredited by ICANN.

ICANN is the main oversight body for the Internet’s global domain name system. Among other things, it develops policies for accredited registrars to prevent abuse and illegal use of domain names. Without this accreditation, reselling several popular domains simply isn’t an option.

ICANN Denies Accreditation

Sunde and the Sarek team hoped to overcome this hurdle and started the ICANN accreditation process in 2019. After a long period of waiting, the organization recently informed Sunde that his application was denied.

[…]

“After the background check I get a reply that I’ve checked the wrong boxes,” Sunde wrote. “Not only that, but they’re also upset I was wanted by Interpol.”

The Twitter thread didn’t go unnoticed by ICANN, which contacted Sunde over the phone to offer clarification. As it turns out, the ‘wrong box’ issue isn’t the main problem, as he explains in a follow-up Twitter thread.

“I got some sort of semi-excuse regarding their claim that I lied on my application. They also said that they agreed it wasn’t fraud or similar really. So both of the points they made regarding the denial were not really the reason,” Sunde clarifies.

ICANN is Not Comfortable With Sunde

Over the phone, ICANN explained that the matter was discussed internally. This unnamed group of people concluded that the organization is ‘not comfortable’ doing business with him.

“They basically admitted that they don’t like me. They’ve banned me for nothing else than my political views. This is typical discrimination. Considering I have no one to appeal to except them, it’s concerning, since they control the actual fucking center of the internet.”

[…]

Making matters worse, ICANN will also keep the registration fee, so this whole ordeal is costing money as well.

Source: ICANN Refuses to Accredit Pirate Bay Founder Peter Sunde Due to His ‘Background’ * TorrentFreak

Yup. ICANN. It’s an autocracy run by no-one but themselves. This is clearly visible in their processes, which almost led to the whole .org TLD being sold off for massive profit (.org is not for profit!) to an ex-board member.

SpaceX Mars prototype rocket nails landing for the first time – then explodes

A SpaceX rocket prototype, known as SN10, soared over South Texas during a test flight Wednesday before swooping down to a pinpoint landing near its launch site. Approximately three minutes after landing, however, multiple independent video feeds showed the rocket exploding on its landing pad.

SpaceX’s SN10, an early prototype of the company’s Starship Mars rocket, took off around 5:15 pm CT and climbed about six miles over the coastal landscape, mimicking two previous test flights SpaceX has conducted that ended in an explosive crash. Wednesday marked the first successful landing for a Starship prototype.

“We’ve had a successful soft touch down on the landing pad,” SpaceX engineer John Insprucker said during a livestream of the event. “That’s capping a beautiful test flight of Starship 10.”

It was unclear what caused the rocket to explode after landing, and the SpaceX livestream cut out before the conflagration.

[…]

Source: SpaceX Mars prototype rocket nails landing for the first time – then explodes – CNN

No wonder that Japanese businessman is trying to give away his tickets to space on Musk’s explody rides.

How I cut GTA Online loading times by 70% (GTA fix JSON handler pls)

[…]

tl;dr

  • There’s a single thread CPU bottleneck while starting up GTA Online
  • It turns out GTA struggles to parse a 10MB JSON file
  • The JSON parser itself is poorly built / naive and
  • After parsing there’s a slow item de-duplication routine

R* please fix

If this somehow reaches Rockstar: the problems shouldn’t take more than a day for a single dev to solve. Please do something about it :<

You could either switch to a hashmap for the de-duplication or completely skip it on startup as a faster fix. For the JSON parser – just swap out the library for a more performant one. I don’t think there’s any easier way out.
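
To make the hashmap suggestion concrete, here is a minimal sketch (in Python rather than the game’s presumable C++, and not Rockstar’s actual code) of the difference between the quadratic scan-the-whole-list de-duplication described above and a hash-based one:

    # De-duplication the slow way: for every item, scan everything seen so far.
    def dedupe_quadratic(items):
        seen = []
        for item in items:
            if item not in seen:   # O(n) list scan per item -> O(n^2) overall
                seen.append(item)
        return seen

    # De-duplication with a hashmap/set: one O(1) average lookup per item.
    def dedupe_hashed(items):
        seen = set()
        out = []
        for item in items:
            if item not in seen:
                seen.add(item)
                out.append(item)
        return out

On a catalog with tens of thousands of entries, the first version performs billions of comparisons while the second does a single hash lookup per entry.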

Source: How I cut GTA Online loading times by 70%

Ticketcounter leaks data for millions of people, didn’t delete sensitive data and was outed

Data of visitors to Diergaarde Blijdorp, Apenheul, Dierenpark Amersfoort and dozens of other theme parks is out in the open. Ticket seller Ticketcounter is also being extorted for €300,000.

An employee accidentally placed the data online somewhere it didn’t belong. As a result, the data could be found there for months (from 5 August 2020 to 22 February 2021). The data was subsequently offered for sale on the dark web.

This mainly concerns data of people who have purchased day tickets via the website.

Source: Groot datalek bij Ticketcounter, ook hack bij InHolland – Emerce

It turns out they kept all this data they shouldn’t have.

The database contained the data of 1.5 million people who had purchased a ticket through Ticketcounter. These include their names, email addresses, telephone numbers, dates of birth and address details. People who paid for their entrance ticket with iDEAL also had their bank account number (IBAN) fall into the wrong hands.

Source: Datalek Ticketcounter treft ook bezoekers musea en attracties

Why did they keep all this data? And why wasn’t it encrypted?

It was leaked when someone made a backup which a) wasn’t encrypted and b) was placed somewhere stunningly easy to find. Now they are being extorted to the tune of 7 BTC, which they are not planning to pay.

Ticketcounter makes it sound like they are some kind of victim in this but their security practices are abysmal and hopefully they will be fined a serious amount.

First Fully Weaponized Spectre Exploit Discovered Online

A fully weaponized exploit for the Spectre CPU vulnerability was uploaded on the malware-scanning website VirusTotal last month, marking the first time a working exploit capable of doing actual damage has entered the public domain.

The exploit was discovered by French security researcher Julien Voisin. It targets Spectre, a major vulnerability that was disclosed in January 2018.

According to its website, the Spectre bug is a hardware design flaw in the architectures of Intel, AMD, and ARM processors that allows code running inside bad apps to break the isolation between different applications at the CPU level and then steal sensitive data from other apps running on the same system.

The vulnerability, which won a Pwnie Award in 2018 for one of the best security bug discoveries of the year, was considered a milestone moment in the evolution and history of the modern CPU.

Its discovery, along with the Meltdown bug, effectively forced CPU vendors to rethink their approach to designing processors, making it clear that they cannot focus on performance alone, to the detriment of data security.

[…]

But today, Voisin said he discovered new Spectre exploits—one for Windows and one for Linux—different from the ones before. In particular, Voisin said he found a Linux Spectre exploit capable of dumping the contents of /etc/shadow, a Linux file that stores details on OS user accounts.

Such behavior is clearly malicious; however, there is no evidence that the exploit was used in the wild, as it could also have been uploaded on VirusTotal by a penetration tester.

[…]

the most interesting part of Voisin’s discovery is in the last paragraph of his blog, where he hints that he may have discovered who is behind this new Spectre exploit.

“Attribution is trivial and left as an exercise to the reader,” the French security researcher said in a mysterious ending.

But while Voisin did not want to name the exploit author, several people were not as shy. Security experts on both Twitter and news aggregation service HackerNews were quick to spot that the new Spectre exploit might be a module for CANVAS, a penetration testing tool developed by Immunity Inc.

[…]

Source: First Fully Weaponized Spectre Exploit Discovered Online | The Record by Recorded Future

EU law requires companies to fix electronic goods for up to 10 years

Companies that sell refrigerators, washers, hairdryers, or TVs in the European Union will need to ensure those appliances can be repaired for up to 10 years, to help reduce the vast mountain of electrical waste that piles up each year on the continent.

The “right to repair,” as it is sometimes called, comes into force across the 27-nation bloc on Monday. It is part of a broader effort to cut the environmental footprint of manufactured goods by making them more durable and energy-efficient.

[…]

“This is a really big step in the right direction,” said Daniel Affelt of the environmental group BUND-Berlin, which runs several “repair cafes” where people can bring in their broken appliances and get help fixing them up again.

Modern appliances are often glued or riveted together, he said. “If you need special tools or have to break open the device, then you can’t repair it.”

Lack of spare parts is another problem, campaigners say. Sometimes a single broken tooth on a tiny plastic sprocket can throw a proverbial wrench in the works.

“People want to repair their appliances,” Affelt said. “When you tell them that there are no spare parts for a device that’s only a couple of years old then they are obviously really frustrated by that.”

Under the new EU rules, manufacturers will have to ensure parts are available for up to a decade, though some will only be provided to professional repair companies to ensure they are installed correctly.

Source: EU law requires companies to fix electronic goods for up to 10 years | Euronews

Far-Right Platform Gab Has Been Hacked, Private Data and all – not encrypted in the backend

When Twitter banned Donald Trump and a slew of other far-right users in January, many of them became digital refugees, migrating to sites like Parler and Gab to find a home that wouldn’t moderate their hate speech and disinformation. Days later, Parler was hacked, and then it was dropped by Amazon web hosting, knocking the site offline. Now Gab, which inherited some of Parler’s displaced users, has been badly hacked too. An enormous trove of its contents has been stolen—including what appears to be passwords and private communications.

On Sunday night the WikiLeaks-style group Distributed Denial of Secrets is revealing what it calls GabLeaks, a collection of more than 70 gigabytes of Gab data representing more than 40 million posts. DDoSecrets says a hacktivist who self-identifies as “JaXpArO and My Little Anonymous Revival Project” siphoned that data out of Gab’s backend databases in an effort to expose the platform’s largely right-wing users. Those Gab patrons, whose numbers have swelled after Parler went offline, include large numbers of QAnon conspiracy theorists, white nationalists, and promoters of former president Donald Trump’s election-stealing conspiracies that resulted in the January 6 riot on Capitol Hill.

DDoSecrets cofounder Emma Best says that the hacked data includes not only all of Gab’s public posts and profiles—with the exception of any photos or videos uploaded to the site—but also private group and private individual account posts and messages, as well as user passwords and group passwords. “It contains pretty much everything on Gab, including user data and private posts, everything someone needs to run a nearly complete analysis on Gab users and content,” Best wrote in a text message interview with WIRED. “It’s another gold mine of research for people looking at militias, neo-Nazis, the far right, QAnon, and everything surrounding January 6.”

DDoSecrets says it’s not publicly releasing the data due to its sensitivity and the vast amounts of private information it contains. Instead the group says it will selectively share it with journalists, social scientists, and researchers. WIRED viewed a sample of the data, and it does appear to contain Gab users’ individual and group profiles—their descriptions and privacy settings—public and private posts, and passwords. Gab CEO Andrew Torba acknowledged the breach in a brief statement Sunday.

Passwords for private groups are unencrypted, which Torba says the platform discloses to users when they create one. Individual user account passwords appear to be cryptographically hashed—a safeguard that may help prevent them from being compromised—but the level of security depends on the hashing scheme used and the strength of the underlying password.
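
For readers wondering what “cryptographically hashed” buys you in practice, here is a minimal sketch using only Python’s standard library. The scheme and iteration count are illustrative, not what Gab actually uses:

    import hashlib
    import hmac
    import os

    def hash_password(password: str) -> tuple[bytes, bytes]:
        salt = os.urandom(16)
        # A deliberately slow, salted hash: an attacker who steals the
        # database must repeat this work for every password guess.
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
        return salt, digest

    def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
        return hmac.compare_digest(candidate, digest)

    salt, digest = hash_password("hunter2")
    print(verify_password("hunter2", salt, digest))   # True
    print(verify_password("wrong", salt, digest))     # False

Even then, a weak password falls quickly to offline guessing, which is exactly the caveat in the paragraph above.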

[…]

According to DDoSecrets’ Best, the hacker says that they pulled out Gab’s data via a SQL injection vulnerability in the site—a common web bug in which a text field on a site doesn’t differentiate between a user’s input and commands in the site’s code, allowing a hacker to reach in and meddle with its backend SQL database.
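
The bug class is easy to show in miniature. This sketch uses Python’s built-in sqlite3 module as a stand-in; nothing here is Gab’s actual stack:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, password_hash TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'x')")

    user_input = "' OR '1'='1"  # attacker-controlled text field

    # VULNERABLE: the input is pasted into the SQL string, so its quote
    # characters are parsed as SQL and the WHERE clause matches every row.
    rows = conn.execute(
        "SELECT * FROM users WHERE name = '" + user_input + "'"
    ).fetchall()
    print(len(rows))  # 1 -- every row in the table came back

    # SAFE: a parameterized query keeps the input as pure data.
    rows = conn.execute(
        "SELECT * FROM users WHERE name = ?", (user_input,)
    ).fetchall()
    print(len(rows))  # 0 -- nobody is literally named "' OR '1'='1"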

[…]

Source: Far-Right Platform Gab Has Been Hacked—Including Private Data | WIRED

This is a comedy of bad security on the part of Gab.

Rocket Lab Unveils Plans for New 8-Ton Class Reusable Rocket for Mega-Constellation Deployment. Probably won’t explode as much as SpaceX. Also to become publicly traded.

Rocket Lab today unveiled plans for its Neutron rocket, an advanced 8-ton payload class launch vehicle tailored for mega-constellation deployment, interplanetary missions and human spaceflight.

Neutron will build on Rocket Lab’s proven experience developing the reliable workhorse Electron launch vehicle, the second most frequently launched U.S. rocket annually since 2019. Where Electron provides dedicated access to orbit for small satellites of up to 300 kg (660 lb), Neutron will transform space access for satellite constellations and provide a dependable, high-flight-rate dedicated launch solution for larger commercial and government payloads.

“Rocket Lab solved small launch with Electron. Now we’re unlocking a new category with Neutron,” said Peter Beck, Rocket Lab founder and CEO.

[…]

The medium-lift Neutron rocket will be a two-stage launch vehicle that stands 40 meters (131 feet) tall with a 4.5-meter (14.7 ft) diameter fairing and a lift capacity of up to 8,000 kg (8 metric tons) to low-Earth orbit, 2,000 kg to the Moon (2 metric tons), and 1,500 kg to Mars and Venus (1.5 metric tons). Neutron will feature a reusable first stage designed to land on an ocean platform, enabling a high launch cadence and decreased launch costs for customers. Initially designed for satellite payloads, Neutron will also be capable of International Space Station (ISS) resupply and human spaceflight missions.

Neutron launches will take place from Virginia’s Mid-Atlantic Regional Spaceport located at the NASA Wallops Flight Facility. By leveraging the existing launch pad and integration infrastructure at the Mid-Atlantic Regional Spaceport, Rocket Lab eliminates the need to build a new pad, accelerating the timeline to first launch, expected in 2024.

Source: Rocket Lab Unveils Plans for New 8-Ton Class Reusable Rocket for Mega-Constellation Deployment | Rocket Lab

Rocket Lab, an End-to-End Space Company and Global Leader in Launch, to Become Publicly Traded Through Merger with Vector Acquisition Corporation

End-to-end space company with an established track record, uniquely positioned to extend its lead across a launch, space systems and space applications market forecast to grow to $1.4 trillion by 2030

One of only two U.S. commercial companies delivering regular access to orbit: 97 satellites deployed for governments and private companies across 16 missions

Second most frequently launched U.S. orbital rocket, with proven Photon spacecraft platform already operating on orbit and missions booked to the Moon, Mars and Venus

Transaction will provide capital to fund development of reusable Neutron launch vehicle with an 8-ton payload lift capacity tailored for mega constellations, deep space missions and human spaceflight

[…]

Transaction is expected to close in Q2 2021, upon which Rocket Lab will be publicly listed on the Nasdaq under the ticker RKLB

Current Rocket Lab shareholders will own 82% of the pro forma equity of combined company

Source: Rocket Lab, an End-to-End Space Company and Global Leader in Launch, to Become Publicly Traded Through Merger with Vector Acquisition Corporation

SmartThings bricks all hardware (2013 – 2021) wtf?

If you own a 2013 SmartThings hub (that’s the original) or a SmartThings Link for the Nvidia Shield TV, your hardware will stop working on June 30 of this year. The device deprecation is part of the exodus from manufacturing and supporting its own hardware and the Groovy IDE that Samsung SmartThings announced last summer. SmartThings has set up a support page for customers still using those devices to help those users transition to newer hubs.

[…]

Those who purchased one of these products in the last three years (Kevin just missed the window with his March 2018 purchase of the SmartThings Link for the Nvidia Shield) can share their proof-of-purchase at Samsung’s Refund Portal to find out if they are eligible for a refund. And in a win for those of us worried about e-waste, Samsung is also planning to recycle the older gear (or it will at least send you a prepaid shipping label so you can send back the devices for theoretical recycling).

[…]

Source: SmartThings starts saying goodbye to its hardware – Stacey on IoT | Internet of Things news and analysis

At least they are willing to recycle some of the stuff but this is why you don’t buy stuff that is dependent on the cloud.

Same Energy: Visual search engine for pictures

This search engine finds other pictures with the same “energy” as a picture you select on the homepage, upload yourself, or link to by pasting its URL.

We believe that image search should be visual, using only a minimum of words. And we believe it should integrate a rich visual understanding, capturing the artistic style and overall mood of an image, not just the objects in it.

We hope Same Energy will help you discover new styles, and perhaps use them as inspiration. Try it with one of these images:

This website is in beta and will be regularly updated in response to your feedback.

[…]

Same Energy’s core search uses deep learning. The most similar published work is CLIP by OpenAI.

The default feeds available on the home page are algorithmically curated: a seed of 5-20 images are selected by hand, then our system builds the feed by scanning millions of images in our index to find good matches for the seed images. You can create feeds in just the same way: save images to create a collection of seed images, then look at the recommended images.
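
Nothing public spells out Same Energy’s internals beyond the CLIP comparison, but the general recipe for building a feed from seed images looks something like this sketch, where random vectors stand in for real model embeddings:

    import numpy as np

    rng = np.random.default_rng(0)

    # Pretend index: 100,000 image embeddings, L2-normalized so that a dot
    # product equals cosine similarity. A real system would get these
    # vectors from a model such as CLIP.
    index = rng.normal(size=(100_000, 64))
    index /= np.linalg.norm(index, axis=1, keepdims=True)

    seeds = index[[3, 42, 77]]            # a hand-picked seed of images
    query = seeds.mean(axis=0)            # blend the seeds into one query
    query /= np.linalg.norm(query)

    scores = index @ query                # similarity of every image to the seed
    feed = np.argsort(scores)[::-1][:50]  # top 50 matches become the feed
    print(feed[:10])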

Source: About | Same Energy

India’s New Cyber Law Goes Live: Subtracts Safe Harbor Protections, Adds Compelled Assistance Demands For Intermediaries, Massive surveillance infrastructure

New rules for social media companies and other hosts of third-party content have just gone into effect in India. The proposed changes to India’s 2018 Intermediary Guidelines are now live, allowing the government to insert itself into content moderation efforts and make demands of tech companies some simply won’t be able to comply with.

Now, under the threat of fines and jail time, platforms like Twitter (itself a recent combatant of the Indian government over its attempts to silence people protesting yet another bad law) can be held directly responsible for any “illegal” content it hosts, even as the government attempts to pay lip service to honoring long-standing intermediary protections that immunized them from the actions of their users.

[…]

turns a whole lot of online discourse into potentially illegal content.

[…]

The new mandates demand platforms operating in India proactively scan all uploaded content to ensure it complies with India’s laws.

The Intermediary shall deploy technology based automated tools or appropriate mechanisms, with appropriate controls, for proactively identifying and removing or disabling public access to unlawful information or content.

This obligation is not only impossible to comply with (and is prohibitively expensive for smaller platforms and sites/online forums that don’t have access to AI tools), it opens up platforms to prosecution simply for being unable to do the impossible. And complying with this directive undercuts the Safe Harbour protections granted to intermediaries by the Indian government.

If you’re moderating all content prior to it going “live,” it’s no longer possible to claim you’re not acting as an editor or curator. The Indian government grants Safe Harbour to “passive” conduits of information. The new law pretty much abolishes those because complying with the law turns intermediaries from “passive” to “active.”

Broader and broader it gets, with the Indian government rewriting its “national security only” demands to cover “investigation or detection or prosecution or prevention of offence(s).” In other words, the Indian government can force platforms and services to provide information and assistance within 72 hours of notification to almost any government agency for almost any reason.

This assistance includes “tracing the origin” of illegal content — something that may be impossible to comply with since some platforms don’t collect enough personal information to make identification possible. Any information dug up by intermediaries in support of government action must be retained for 180 days whether or not the government makes use of it.

More burdens: any intermediary with more than 5 million users must establish permanent residence in India and provide on-call service 24/7. Takedown compliance has been accelerated from 36 hours of notification to 24 hours.

Very few companies will be able to comply with most of these directives. No company will be able to comply with them completely. And with the government insisting on adding more “eye of the beholder” content to the illegal list, the law encourages pre-censorship of any questionable content and invites regulators and other government agencies to get into the moderation business.

[…]

Source: India’s New Cyber Law Goes Live: Subtracts Safe Harbor Protections, Adds Compelled Assistance Demands For Intermediaries | Techdirt

Sub-diffraction optical writing enables data storage at the nanoscale – on disk

The demand to store ever-increasing volumes of information has resulted in the widespread implementation of data centers for Big Data. These centers consume massive amounts of energy (about 3% of global electricity supply) and rely on magnetization-based hard disk drives with limited storage capacity (up to 2 TB per disk) and lifespan (three to five years). Laser-enabled optical data storage is a promising and cost-effective alternative for meeting this unprecedented demand. However, the diffractive nature of light has limited the size to which bits can be scaled, and as a result, the storage capacity of optical disks.

Researchers at USST, RMIT and NUS have now overcome this limitation by using earth-rich lanthanide-doped upconversion nanoparticles and graphene oxide flakes. This unique material platform enables low-power optical writing of nanoscale information bits.

A much-improved data density can be achieved for an estimated storage capacity of 700 TB on a 12-cm optical disk, comparable to a storage capacity of 28,000 Blu-ray disks. Furthermore, the technology uses inexpensive continuous-wave lasers, reducing operating costs compared to traditional optical writing techniques using expensive and bulky pulsed lasers.

This technology also offers the potential for optical lithography of nanostructures in carbon-based chips under development for next-generation nanophotonic devices.
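
The Blu-ray comparison checks out if you assume single-layer 25 GB discs:

    # 700 TB per disc vs. 25 GB per single-layer Blu-ray disc
    print(700e12 / 25e9)  # 28000.0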

Source: Sub-diffraction optical writing enables data storage at the nanoscale

Using deep-sea fiber optic cables to detect earthquakes

Seismologists at Caltech working with optics experts at Google have developed a method to use existing underwater telecommunication cables to detect earthquakes. The technique could lead to improved earthquake and tsunami warning systems around the world.

[…]

Previous efforts to use optical fibers to study seismicity have relied on the addition of sophisticated scientific instruments and/or the use of so-called “dark fibers,” fiber optic cables that are not actively being used.

Now Zhongwen Zhan (Ph.D. ’13), assistant professor of geophysics at Caltech, and his colleagues have come up with a way to analyze the light traveling through “lit” fibers—in other words, existing and functioning submarine cables—to detect earthquakes and ocean waves without the need for any additional equipment. They describe the new method in the February 26 issue of the journal Science.

[…]

The cable networks work through the use of lasers that send pulses of information through glass fibers bundled within the cables to deliver data to receivers at the other end, the light traveling at speeds faster than 200,000 kilometers per second. To make optimal use of the cables—that is, to transfer as much information as possible across them—one of the things operators monitor is the polarization of the light that travels within the fibers. Like other light that passes through a polarizing filter, laser light is polarized—meaning, its electric field oscillates in just one direction rather than any which way. Controlling the direction of the electric field can allow multiple signals to travel through the same fiber simultaneously. At the receiving end, devices check the state of polarization of each signal to see how it has changed along the path of the cable to make sure that the signals are not getting mixed.

[…]

On land, all sorts of disturbances, such as changes in temperature and even lightning strikes, can change the polarization of light traveling through fiber optic cables. Because the temperature in the deep ocean remains nearly constant and because there are so few disturbances there, the change in polarization from one end of the Curie Cable to the other remains quite stable over time, Zhan and his colleagues found.

However, during earthquakes and when storms produce large ocean waves, the polarization changes suddenly and dramatically, allowing the researchers to easily identify such events in the data.

Currently, when earthquakes occur miles offshore, it can take minutes for the seismic waves to reach land-based seismometers and even longer for any tsunami waves to be verified. Using the new technique, the entire length of a submarine cable acts as a single sensor in a hard-to-monitor location. Polarization can be measured as often as 20 times per second. That means that if an earthquake strikes close to a particular area, a warning could be delivered to the potentially affected areas within a matter of seconds.
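
As a toy illustration of why 20 polarization samples per second is enough: on a quiet deep-sea cable the signal barely moves, so an event shows up as a huge deviation from a rolling baseline. Everything below apart from the sampling rate is made up for illustration, and is not Caltech’s actual algorithm:

    import numpy as np

    rate_hz = 20                                     # samples per second
    rng = np.random.default_rng(1)
    signal = 0.001 * rng.normal(size=600 * rate_hz)  # ten quiet minutes
    signal[7000:7200] += 0.5                         # sudden polarization jump

    window = 60 * rate_hz                            # one-minute rolling baseline
    for i in range(window, signal.size):
        baseline = signal[i - window:i]
        z = (signal[i] - baseline.mean()) / (baseline.std() + 1e-9)
        if abs(z) > 8:                               # far outside normal wobble
            print(f"possible event at t = {i / rate_hz:.1f} s")
            break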

During the nine months of testing reported in the new study (between December 2019 and September 2020), the researchers detected about 20 moderate-to-large earthquakes along the Curie Cable, including the magnitude-7.7 earthquake that took place off Jamaica on January 28, 2020.

Although no tsunamis were detected during the study, the researchers were able to detect changes in polarization produced by ocean swells that originated in the Southern Ocean. They believe the changes in polarization observed during those events were caused by pressure changes along the seafloor as powerful waves traveled past the cable. “This means we can detect ocean waves, so it is plausible that one day we will be able to detect tsunami waves,” says Zhan.

Zhan and his colleagues at Caltech are now developing a machine learning algorithm that would be able to determine whether detected changes in polarization are produced by earthquakes rather than by some other change to the system, such as a ship or a crab moving the cable. They expect that the entire detection and notification process could be automated to provide critical information in addition to the data already collected by the network of land-based seismometers and the buoys in the Deep-ocean Assessment and Reporting of Tsunamis (DART) system, operated by the National Oceanic and Atmospheric Administration’s National Data Buoy Center.

[…]

Source: Using deep-sea fiber optic cables to detect earthquakes

Extension shows the monopoly big tech has on your browsing – you always route your traffic through them

A new extension for Google Chrome has made explicit how most popular sites on the internet load resources from one or more of Google, Facebook, Microsoft and Amazon.

The extension, Big Tech Detective, reports the extent to which websites exchange data with these four companies. It can also optionally block sites that request such data. Any such request is also effectively a tracker, since the provider sees the IP number and other request data for the user’s web browser.

The extension was built by investigative data reporter Dhruv Mehrotra in association with the Anti-Monopoly Fund at the Economic Security Project, a non-profit research group financed by the US-based Hopewell Fund in Washington DC.

Cara Rose Defabio, editor at the Economic Security Project, said: “Big Tech Detective is a tool that pulls the curtain back on exactly how much control these corporations have over the internet. Our browser extension lets you ‘lock out’ Google, Amazon, Facebook and Microsoft, alerting you when a website you’re using pings any one of these companies… you can’t do much online without your data being routed through one of these giants.”

[…]

That, perhaps, is an exaggeration. Big Tech Detective will spot sites that use Google Analytics to report on web traffic, or host Google ads, or use a service hosted on Amazon Web Services such as Chartbeat analytics – which embeds a script that pings its service every 15 seconds according to this post – but that is not the same as routing your data through the services.

In terms of actual data collection and analysis, we would guess that Google and Facebook are ahead of AWS and Microsoft, and munging together infrastructure services with analytics and tracking is perhaps unhelpful.

Another point to note is that a third-party service hosted on a public cloud server at AWS, Microsoft or Google is distinct from services run directly by those companies. Public cloud is an infrastructure choice and the infrastructure provider does not get that data other than being able to see that there is traffic.

[Note: This is untrue. They also get to see where the traffic is from, where it goes to, how it is routed, how many connections there are, and the size of the traffic being sent. This metadata is often more valuable than the actual data being sent.]

Dependencies

Defabio made the point, though, that the companies behind public cloud have huge power, referencing Amazon’s decision to “refuse hosting service to the right wing social app Parler, effectively shutting it down.” While there was substantial popular approval of the action, it was Amazon’s decision, rather than one based on law and regulation.

She argued that these giant corporations should be broken up, so that Amazon the retailer is separate from AWS, for example. The release of the new extension is timed to coincide with US government hearings on digital competition, drawing on research from last year.

[…]

Source: Ever felt that a few big tech companies are following you around the internet? That’s because … they are • The Register

Apple, forced to rate product repair potential in France, gives itself modest marks – still lying, they should be worse

Apple, on its French website, is now publishing repairability scores for its notoriously difficult to repair products, in accordance with a Gallic environmental law enacted a year ago.

Cook & Co score themselves on repairability, however, and Cupertino kit sometimes fares better under internal interpretation of the criteria [PDF] than it does under ratings awarded by independent organizations.

For example, Apple gave its 2019 model year 16-inch MacBook Pro (A2141) a repairability score of 6.3 out of 10. According to iFixit, a repair community website, that MacBook Pro model deserves a score of 1 out of 10.

Apple’s evaluation of its products aligns more closely with independent assessment when it comes to phones. Apple gives its iPhone 12 Pro a repairability score of six, which matches the middling score bestowed by iFixit.

“It’s self-reporting right now,” said Gay Gordon-Byrne, executive director of The Repair Association, a repair advocacy group, in an email to The Register. “No audit, no validation, yet. I think there is another year before there are any penalties for lying.”

[…]

Source: Apple, forced to rate product repair potential in France, gives itself modest marks • The Register

1Password has none, KeePass has none… So why are there seven embedded trackers in the LastPass Android app?

A security researcher has recommended against using the LastPass password manager Android app after noting seven embedded trackers. The software’s maker says users can opt out if they want.

[…]

The Exodus report on LastPass shows seven trackers in the Android app, including four from Google for the purpose of analytics and crash reporting, as well as others from AppsFlyer, MixPanel, and Segment. Segment, for instance, gathers data for marketing teams, and claims to offer a “single view of the customer”, profiling users and connecting their activity across different platforms, presumably for tailored adverts.

LastPass has many free users – is it a problem if its owner seeks to monetise them in some way? Kuketz said it is. Typically, the way trackers like this work is that the developer compiles code from the tracking provider into their application. The gathered information can be used to build up a profile of the user’s interests from their activities, and target them with ads.

Even the app developers do not know what data is collected and transmitted to the third-party providers, said Kuketz, and the integration of proprietary code could introduce security risks and unexpected behaviour, as well as being a privacy risk. These things do not belong in password managers, which are security-critical, he said.

Kuketz also investigated what data is transmitted by inspecting the network traffic. He found that this included details about the device being used, the mobile operator, the type of LastPass account, and the Google Advertising ID (which can connect data about the user across different apps). During use, the data also shows when new passwords are created and what type they are. Kuketz did not suggest that actual passwords or usernames are transmitted, but did note the absence of any opt-out dialogs, or information for the user about the data being sent to third parties. In his view, the presence of the trackers demonstrates a suboptimal attitude to security. Kuketz recommended changing to a different password manager, such as the open-source KeePass.

Do all password apps contain such trackers? Not according to Exodus. 1Password has none. KeePass has none. The open-source Bitwarden has two for Google Firebase analytics and Microsoft Visual Studio crash reporting. Dashlane has four. LastPass does appear to have more than its rivals. And yes, lots of smartphone apps have trackers: today, we’re talking about LastPass.

[…]

“All LastPass users, regardless of browser or device, are given the option to opt-out of these analytics in their LastPass Privacy Settings, located in their account here: Account Settings > Show Advanced Settings > Privacy.”

Source: 1Password has none, KeePass has none… So why are there seven embedded trackers in the LastPass Android app? • The Register

This option was definitely not easy to find.

I just bought a year’s subscription as I thought the $2.11 / month price point was OK. They added on a few cents and then told me this price was excl. VAT. Not doing very well on the trustworthiness scale here.

Half a million stolen French medical records, lab results, feeble excuses

[…]

Here in France, we’ve just experienced the country’s biggest ever data breach of customer records, involving some half a million medical patients. Worse, the data wasn’t even sold or held to ransom by dark web criminals: it was just given away so that anyone could download it.

Up to 60 fields of personal data per patient are now blowing around in the internet winds. Full name, address, email, mobile phone number, date of birth, social security number, blood group, prescribing doctor, reason for consultation (such as “pregnancy”, “brain tumour”, “deaf”, “HIV positive”) and so on – it’s all there, detailed across 491,840 lines of plain text.

Data journalism couldn’t be easier, and indeed the newspaper hacks have been on the beat, contacting the doctors listed in the file and phoning up some of the patients on their mobile numbers to ask how they feel about the data breach. The doctors knew nothing about it, and of course the patients whose personal info had been stolen – including Hervé Morin, ex-Minister of Defence, as it turns out – hadn’t the faintest idea.

According to an investigation by daily newspaper Libération, warning signs that something was afoot were first reported on 12 February in a blog by Damien Bancal at security outfit Zataz. Some dark web spivs in Turkish-language channels on Telegram began discussing how to sell some medical records stolen from a French hospital. Some of them then tried independently to put the data on the market and got into an argument that spilled over into Russian-language channels.

One of them, it seems, got pissed off and decided to take revenge by posting an extract of the data publicly. This was rapidly spread around Telegram’s other lesser spivlet channels and soon afterwards ended up being shared on conventional social media.

A closer look at the file reveals that it didn’t come from a hospital after all. It turns out the various dates on the patient records refer not to doctors’ appointments but to when patients had to submit a test specimen: in other words, the data is likely to have been stolen from French bio-medical laboratories conducting the specimen analysis.

Further probing by Libé revealed that the hack may relate to data stored using a system called Mega-Bus from Medasys, a company since absorbed into Dedalus France. Dating back to 2009, Mega-Bus hasn’t been updated and laboratories have been abandoning it for other solutions over the last couple of years. No patient records entered into these newer systems can be found in the stolen file, only pre-upgrade stuff entered into Mega-Bus, apparently.

[…]

Source: Half a million stolen French medical records, drowned in feeble excuses • The Register

GameStop short-sellers have lost $1.9 billion in just 2 days amid the stock’s latest spike

Short sellers lost $664 million on Wednesday as GameStop shares spiked 104% in the final 30 minutes of trading, S3 Partners said. The stock’s 84% intraday gain on Thursday fueled another $1.19 billion in mark-to-market losses.

Source: GameStop short-sellers have lost $1.9 billion in just 2 days amid the stock’s latest spike | Markets Insider

Use AdNauseam to Block Ads and Confuse Google’s Advertising

In an online world in which countless systems are trying to figure out what exactly you enjoy so they can serve you up advertising about it, it really fucks up their profiling mechanisms when they think you like everything. And to help you out with this approach, I recommend checking out the Chrome/Firefox extension AdNauseam. You won’t find it on the Chrome Web Store, however, as Google frowns on extensions that screw up Google’s efforts to show you advertising for some totally inexplicable reason. You’ll have to install it manually, but it’s worth it.

[…]

AdNauseum works on a different principle. As Lee McGuigan writes over at the MIT Technology Review:

“AdNauseam is like conventional ad-blocking software, but with an extra layer. Instead of just removing ads when the user browses a website, it also automatically clicks on them. By making it appear as if the user is interested in everything, AdNauseam makes it hard for observers to construct a profile of that person. It’s like jamming radar by flooding it with false signals. And it’s adjustable. Users can choose to trust privacy-respecting advertisers while jamming others. They can also choose whether to automatically click on all the ads on a given website or only some percentage of them.”

McGuigan goes on to describe the various experiments he worked on with AdNauseam founder Helen Nissenbaum, allegedly proving that the extension can make it past Google’s various checks for fraudulent or otherwise illegitimate clicks on advertising. Google, as you might expect, denies the experiments actually prove anything, and maintains that a “vast majority” of these kinds of clicks are detected and ignored.

[…]

Once you’ve installed AdNauseam, you’ll be presented with three simple options:

[Screenshot: David Murphy]

Feel free to enable all three, but heed AdNauseam’s warning: You probably don’t want to use the extension alongside another adblocker, as the two will conflict and you probably won’t see any added benefit.

As with most adblockers, there are plenty of options you can play with if you dig deeper into AdNauseam’s settings.

[…]

note that AdNauseam still (theoretically) generates revenue for the sites tracking you. That in itself might cause you to adopt a nuclear approach vs. an obfuscation-by-noise approach. Your call.

Source: Use AdNauseam to Block Ads and Confuse Google’s Advertising

Porsche says synthetic fuel can be as clean as EVs

In a recent interview with Evo magazine, Porsche VP of Motorsport and GT cars, Dr. Frank Walliser, says that synthetic fuels, also called eFuels, can reduce the carbon dioxide emissions of existing ICE cars by as much as 85 percent. And, he says, when you account for the well-to-wheel impact of manufacturing the EV, it’s a wash.

Synthetic fuels are made by using renewable energy to extract hydrogen, which is then combined with captured carbon dioxide to form a liquid fuel. Compared to pump fuel, eFuels emit fewer particulates and less nitrogen oxide as well. That’s because, as Walliser explains, they are composed of eight to 10 ingredients, while the dead plants we mine contain 30 to 40, many of which are simply burned and emitted as pollution in the process.

While Porsche is continuing to develop EVs like the Taycan, it says that ICEs will continue to exist in the market for many years to come. Synthetic fuels, along with electrified cars, would be part of a multi-pronged approach to reducing emissions as quickly as possible. Mazda gave a similar statement a couple weeks earlier when it became the first car company to join Europe’s eFuel Alliance.

[…]

Source: Porsche says synthetic fuel can be as clean as EVs | Autoblog

How “ugly” labels on imperfect food can increase purchase of unattractive produce

[…]

According to a recent report by the National Academies of Science, Engineering and Medicine (2020), each year in the U.S. farmers throw away up to 30% of their crops, equal to 66.5 million tons of edible produce, due to cosmetic imperfections.

[…]

They discover that consumers expect unattractive produce to be less tasty and, to a smaller extent, less healthy than attractive produce, which leads to its rejection. They also find that emphasizing aesthetic flaws via ‘ugly’ labeling (e.g., “Ugly Cucumbers”) can increase the purchase of unattractive produce. This is because ‘ugly’ labeling points out the aesthetic flaw in the produce, making it clear to consumers that there are no other deficiencies in the produce other than attractiveness. Consumers may also reevaluate their reliance on visual appearance as a basis for judging the tastiness and healthiness of produce; ‘ugly’ labeling makes them aware of the limited nature of their spontaneous objection to unattractive produce.

[…]

“We sold both unattractive and attractive produce at a farmer’s market and find that consumers were more likely to purchase unattractive produce over attractive produce when the unattractive produce was labeled ‘ugly’ compared to when unattractive produce was not labeled in any specific way. ‘Ugly’ labeling also generated greater profit margins relative to when unattractive produce was not labeled in any specific way—a great solution for sellers to make a profit while reducing food waste.” In the second study, participants were told that they could win a lottery worth $30, and could keep all the cash or allocate some of the lottery earnings to purchase either a box of attractive produce or unattractive produce. ‘Ugly’ labeling increased the likelihood that consumers would use their lottery earnings to purchase a box of unattractive rather than attractive produce.

In Studies 3 and 4, ‘ugly’ labeling positively impacts taste and health expectations, which led to higher choice likelihood of unattractive produce over attractive produce. Study 5 considers how ‘ugly’ labeling might alter the effectiveness of price discounts. Typically, when retailers sell unattractive produce, they offer a discount of 20%-50%. Cornil says that “We show that ‘ugly’ labeling works best for moderate price discounts (i.e., 20%) rather than steep price discounts (i.e., 60%) because a large discount signals low quality, which nullifies the positive effect of the ‘ugly’ label.” This suggests that by simply adding the ‘ugly’ label, retailers selling unattractive produce can reduce those discounts and increase profitability.

The last two studies demonstrate that ‘ugly’ labeling is more effective than another popular label, ‘imperfect.’

[…]

Importantly, these findings largely contrast with managers’ beliefs. “While grocery store managers believed in either not labeling unattractive produce in any specific way or using ‘imperfect’ labeling, we show that ‘ugly’ labeling is far more effective,” says Hoegg.

[…]

Source: How “ugly” labels can increase purchase of unattractive produce

CNAME DNS-based tracking defies your browser privacy defenses

Boffins based in Belgium have found that a DNS-based technique for bypassing defenses against online tracking has become increasingly common and represents a growing threat to both privacy and security.

In a research paper to be presented in July at the 21st Privacy Enhancing Technologies Symposium (PETS 2021), KU Leuven-affiliated researchers Yana Dimova, Gunes Acar, Lukasz Olejnik, Wouter Joosen, and Tom Van Goethem delve into the increasing adoption of CNAME-based tracking, which abuses DNS records to erase the distinction between first-party and third-party contexts.

“This tracking scheme takes advantage of a CNAME record on a subdomain such that it is same-site to the including web site,” the paper explains. “As such, defenses that block third-party cookies are rendered ineffective.”

[…]

A technique known as DNS delegation or DNS aliasing has been known since at least 2007 and showed up in privacy-focused research papers in 2010 [PDF] and 2014 [PDF]. Based on the use of CNAME DNS records, the counter anti-tracking mechanism drew attention two years ago when open source developer Raymond Hill implemented a defense in the Firefox version of his uBlock Origin content blocking extension.

CNAME cloaking involves having a web publisher put a subdomain – e.g. trackyou.example.com – under the control of a third-party through the use of a CNAME DNS record. This makes a third-party tracker associated with the subdomain look like it belongs to the first-party domain, example.com.
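
This is also why a countermeasure needs DNS access: to unmask the tracker you have to resolve the subdomain’s CNAME and see where it actually points. Here is a minimal sketch using the third-party dnspython package and the article’s example domain; a real blocker like uBlock Origin does this through the browser’s DNS-resolving API rather than like this:

    import dns.resolver

    def cname_target(hostname: str):
        try:
            answer = dns.resolver.resolve(hostname, "CNAME")
            return str(answer[0].target).rstrip(".")
        except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
            return None

    def is_cloaked_tracker(subdomain: str, first_party: str) -> bool:
        target = cname_target(subdomain)
        # If trackyou.example.com aliases to a host outside example.com,
        # the "first-party" subdomain is really a third party in disguise.
        return target is not None and not target.endswith(first_party)

    print(is_cloaked_tracker("trackyou.example.com", "example.com"))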

The boffins from Belgium studied the CNAME-based tracking ecosystem and found 13 different companies using the technique. They claim that the usage of such trackers is growing, up 21 per cent over the past 22 months, and that CNAME trackers can be found on almost 10 per cent of the top 10,000 websites.

What’s more, sites with CNAME trackers have an average of about 28 other tracking scripts. They also leak data due to the way web architecture works. The researchers found cookie data leaks on 7,377 sites (95%) out of the 7,797 sites that used CNAME tracking. Most of these were the result of third-party analytics scripts setting cookies on the first-party domain.

Not all of these leaks exposed sensitive data but some did. Out of 103 websites with login functionality tested, the researchers found 13 that leaked sensitive info, including the user’s full name, location, email address, and authentication cookie.

“This suggests that this scheme is actively dangerous,” wrote Dr Lukasz Olejnik, one of the paper’s co-authors, an independent privacy researcher, and consultant, in a blog post. “It is harmful to web security and privacy.”

[…]

In addition, the researchers report that ad tech biz Criteo switches specifically to CNAME tracking – putting its cookies into a first-party context – when its trackers encounter users of Safari, which has strong third-party cookie defenses.

According to Olejnik, CNAME tracking can defeat most anti-tracking techniques and there are few defenses against it.

Firefox running the add-on uBlock Origin 1.25+ can see through CNAME deception. So too can Brave, which recently had to repair its CNAME defenses due to problems it created with Tor.

Chrome falls short because it does not have a suitable DNS-resolving API for uBlock Origin to hook into. Safari will limit the lifespan of cookies set via CNAME cloaking but doesn’t provide a way to undo the domain disguise to determine whether the subdomain should be blocked outright.

[…]

Source: What’s CNAME of your game? This DNS-based tracking defies your browser privacy defenses • The Register