A Methodology for Quantifying the Value of Cybersecurity Investments in the Navy

RAND Corporation researchers developed and supported the implementation of a methodology to assess the value of resource options for U.S. Navy cybersecurity investments. The proposed methodology features 12 scales in two categories (impact and exploitability) that allow the Navy to score potential cybersecurity investments in the Program Objective Memorandum (POM) process. The authors include a test implementation using publicly available historical U.S. Navy data to demonstrate how the methodology facilitates valuable comparisons of potential cybersecurity investments.

When compared with existing methods used by the Navy, this methodology could improve the consistency of ratings and provide a more defined structure for thinking through the risk reduction and prioritization of different investments.

[…]

A major advantage of this methodology is its simplicity

  • No complex modeling is required. The risk matrixes align with U.S. Department of Defense processes, making the methodology more approachable for analysts. The level of effort required is further reduced by the need to assess only the risk factors that are relevant to an investment.

Information security economic approaches are not directly applicable to the Navy context

  • Existing models have multiple issues that make it very challenging to apply them in the context of the Navy—not the least of which is their dependency on the monetization of loss. Ultimately, the lack of information that the Navy has at its fingertips regarding the cybersecurity state of systems and the potential impact of future and ongoing investments is a key limiting factor.
  • Although complex models offer greater potential for precision and accuracy, that potential comes at the cost of greater computational, data, and comprehension demands, all of which are key challenge areas for the Navy.

[…]

Source: A Methodology for Quantifying the Value of Cybersecurity Investments in the Navy | RAND

This is a risk assessment methodology specific to the domain the Navy works in, which differs from the domains of most commercial companies.

plant machete

This installation enables a live plant to control a machete. plant machete has a control system that reads and utilizes the electrical noise found in a live philodendron. The system uses an open-source microcontroller connected to the plant to read varying resistance signals across the plant’s leaves. Using custom software, these signals are mapped in real time to the movements of the joints of the industrial robot holding a machete. In this way, the movements of the machete are determined by input from the plant. Essentially, the plant is the brain of the robot, controlling the machete and determining how it swings, jabs, slices, and interacts in space.
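
For the technically curious, the control loop described above is simple in outline: sample a noisy analog signal from the plant, smooth it, and map it onto each joint’s range of motion. Here is a minimal Python sketch of that idea; the joint limits, update rate, and the random stand-in for the resistance readings are all assumptions, since Bowen’s actual hardware and software are not published.

```python
# Toy sketch of a plant-driven control loop. Everything here is hypothetical:
# a real setup would read an ADC on the microcontroller instead of random().
import random
import time

JOINT_LIMITS = [(-90, 90), (-45, 120), (0, 160)]  # degrees per joint (assumed)

def read_plant_signal() -> float:
    """Stand-in for reading leaf resistance, normalized to 0..1."""
    return random.random()

def smooth(prev: float, new: float, alpha: float = 0.2) -> float:
    """Exponential smoothing so sensor jitter doesn't whip the arm around."""
    return (1 - alpha) * prev + alpha * new

def signal_to_joints(s: float) -> list:
    """Map one normalized signal onto each joint's range of motion."""
    return [lo + s * (hi - lo) for lo, hi in JOINT_LIMITS]

level = read_plant_signal()
for _ in range(10):                        # stand-in for the real-time loop
    level = smooth(level, read_plant_signal())
    targets = signal_to_joints(level)
    print([round(t, 1) for t in targets])  # would be sent to the robot arm
    time.sleep(0.05)
```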

Source: plant machete — David Bowen

Why Reddit Is Losing It Over Samsung’s New Privacy Policy – it’s an incredible data grab

Samsung recently updated its privacy policy for all users with a Samsung account, effective Oct. 1. One Redditor read the policy, did not like what they saw, and shared it on r/android, highlighting what they consider to be the doc’s worst policy points. The thread blew up, with Android users aplenty decrying Samsung’s new policy. But why is everyone so pissed off, and is any of it worth worrying about? Let’s explore.

Samsung’s privacy policy is a bit creepy

From the jump, the new policy doesn’t look good. In fact, it appears downright invasive. There are the standard data giveaways we’ve come to expect: When you create a Samsung account, you must hand over personal information like your name, age, address, email address, gender, etc. Par for the course.

However, Samsung also notes it will collect data such as credit card information, usernames and passwords for third-party services, photos, contacts, text logs, recordings of your voice generated during voice commands, and location data, including precise location data as well as nearby wifi access points and cell towers. It might come as a surprise to know a company like Samsung can keep your chat transcripts, contacts, and voice recordings, but there’s precedent: Apple found itself in hot water when third-party contractors revealed they were able to listen in on audio recordings from Siri requests, which included all kinds of personal conversations and activities.

Samsung also tracks your general activity via cookies, pixels, web beacons, and other means. The company claims this tracking is done for a variety of reasons, including remembering your information to avoid you having to retype it in the future, and to better learn how you use their services. To achieve these goals, it collects just about everything there is to know about your device, including your IP address, device model, device settings, websites you visit, and apps you download, among many others. The policy does remind you to adjust your privacy settings if you’re uncomfortable with this default tracking (as if anyone wouldn’t be).

The company says it has a lot of uses for this information, including ad delivery, communication with customers, enhancing their services, improving their business, identifying and preventing fraud and criminal activity, and to comply with “applicable legal requirements.” Further, they reserve the right to share your information with “subsidiaries and affiliates,” “business partners and third-parties,” as well as law enforcement and other authorities. In short, depending on the circumstances, your Samsung data could end up in the hands of a lot of third parties.

But that’s not everything. The “Notice to California Residents” section is where the juiciest policies emerge. Most of the info is the same, just broken down in a different way, but there is one additional note about data Samsung collects: biometric information. The company doesn’t elaborate, but this entry implies Samsung obtains data from face and fingerprint scans, when traditionally this information is stored on-device. Apple, for example, doesn’t have access to your face scans on your iPhone. Obviously, this is potentially concerning.

In addition, the California Residents section also discusses what data Samsung sells to third parties. Samsung says in the 12 months before this new policy went into effect, it may have sold data of yours, including device identifiers (cookies, pixel tags, etc.), purchase histories or tendencies, and network activity, including how you interact with websites.

[…]

If you’re eyeing your Galaxy Z Flip with newfound skepticism, I don’t blame you. Unfortunately, if you dive into the privacy policies for most of your other tech, you’ll be similarly disturbed. Samsung is hardly the only company collecting, sharing, and selling your data.

One Redditor does make a great point about the redundancy of privacy violations here. Sure, Google might have similar policies in place, but since Samsung phones run Google’s Android, you’re really dealing with two meddling companies, not one:

Considering the prices for their hardware, the un-removable bloatware that is generally inferior to the Google software, and anti-Right-to-Repair campaigns (and reflections in their hardware), I see no reason to buy their phones over Google’s. I’ll have just one company with intrusive insight into my personal device at a time, thank you.

[…]

Source: Why Reddit Is Losing It Over Samsung’s New Privacy Policy

The Onion defends right to parody in very real supreme court brief supporting local satirist vs Police who were made fun of

The Onion, the long-running satirical publication, has filed a very real legal document with the US supreme court, urging it to take on a case centered on the right to parody. And in order to make a serious legal point, the filing does what the Onion does best, offering a big helping of total nonsense.

Claiming global Onion readership of 4.3 trillion, the filing describes the publication as “the single most powerful and influential organization in human history”. It’s the source of 350,000 jobs at its offices and “manual labor camps”, and it “owns and operates the majority of the world’s transoceanic shipping lanes, stands on the nation’s leading edge on matters of deforestation and strip mining, and proudly conducts tests on millions of animals daily”.

With such power, why does the Onion feel the need to weigh in on a mundane court case? “To protect its continued ability to create fiction that may ultimately merge into reality,” the filing asserts. “The Onion’s writers also have a self-serving interest in preventing political authorities from imprisoning humorists. This brief is submitted in the interest of at least mitigating their future punishment.”

The outlet is concerned about the outcome of a case it describes in a headline: “Ohio Police Officers Arrest, Prosecute Man Who Made Fun of Them on Facebook”. It sounds like an Onion headline, the filing points out, but it’s not.

In 2016, Anthony Novak was arrested for making a Facebook page that parodied the local police page. He was charged with disrupting a public service but was acquitted. The next year, he sued the department, arguing it was retaliating against him for using his right to free speech, as Cleveland.com reported.

In May, a US appeals court backed the police in the case, a finding Novak’s lawyer said “sets dangerous precedent undermining free speech”. Last week, Novak appealed the case to the supreme court, leading to the Onion’s filing – what’s known as an amicus brief, a filing by an outside party seeking to influence the court.

In one of its less amusing sections, the brief argues that the appeals court ruling “imperils an ancient form of discourse. The court’s decision suggests that parodists are in the clear only if they pop the balloon in advance by warning their audience that their parody is not true. But some forms of comedy don’t work unless the comedian is able to tell the joke with a straight face.”

The filing highlights the history of parody and its social function: “It adopts a particular form in order to critique it from within”. To demonstrate, the Onion cites one of its own greatest headlines: “Supreme court rules supreme court rules”.

The document serves as a rare glimpse behind the comedy curtain – an explanation of how jokes work – even as it serves as a more traditional legal document, pointing to relevant court cases and using words like “dispositive”.

The city of Parma has until 28 October to provide a response in a case that would be heard next year if the high court opts to consider it.

In the meantime, “the Onion cannot stand idly by in the face of a ruling that threatens to disembowel a form of rhetoric that has existed for millennia, that is particularly potent in the realm of political debate, and that, purely incidentally, forms the basis of The Onion’s writers’ paychecks”.

Source: The Onion defends right to parody in very real supreme court brief supporting local satirist | US supreme court | The Guardian

Publishers Lose Their Shit After Authors Push Back On Their Attack On Libraries, start fake newsing

On Friday, we wrote about hundreds of authors signing a letter calling out the big publishers’ attacks on libraries (in many, many different ways). The publishers pretend to represent the best interests of the authors, but history has shown over and over again that they do not. They represent themselves, and use the names of authors they exploit to claim the moral high ground they do not hold.

It’s no surprise, then, that the publishers absolutely fucking lost their shit after the letter came out. The Association of American Publishers put out a statement falsely claiming that the letter, put out by Fight for the Future (FftF), and signed by tons of authors from the super famous to the less well known, was actually “disinformation in the Internet Archive case.” And, look, if you’re at the point you’re blaming the Internet Archive for something another group actually did, you know you’ve lost, and you’re just lashing out.

Perhaps much more telling is that the Authors Guild actually put out an even more aggressive statement against Fight for the Future. Now, as bestselling author Barry Eisler (who signed onto Fight for the Future’s letter) wrote right here on Techdirt years ago, it’s been clear for a while that the Authors Guild is not actually representing the best interests of authors. It has long been a front group for the publishers themselves.

The Authors Guild’s response to the FftF letter simply confirms this.

First, it claims that authors were misled into signing the letter by an earlier, different draft of the letter. This is simply false. The Authors Guild is making shit up because they just can’t believe that maybe authors actually support this.

They do name one author, Daniel Handler (aka Lemony Snicket), who had signed on but removed his name before the letter was even published. But… I’m guessing the real reason that happened was that the publishers (who learned about the letter before it was published, as shown by this email that was sent around prior to the release) FLIPPED OUT when they saw Handler’s name on the letter. That’s because their lawsuit against the Internet Archive’s Open Library project relies heavily on the claim that Lemony Snicket’s books are available there.

It seems reasonable to speculate that the publishers saw his name was on the letter, realized it undermined basically the crux of their case, and came down like a ton of bricks on him to pressure him into un-signing the letter. That story, at the very least, makes more sense than someone like Handler somehow being “tricked” into signing a letter that very clearly says what it says.

The Authors Guild’s other claims are equally sketchy.

The lawsuit against Open Library is completely unrelated to the traditional rights of libraries to own and preserve books. It is about Open Library’s attempt to stretch fair use to the breaking point – where any website that calls itself a library could scan books and make them publicly available – a practice engaged in by ebook pirates, not libraries.

This completely misrepresents what the Open Library does and its direct parallel to any physical library: it buys a copy of a book and then lends out that copy. The courts have already established that scanning books is legal fair use — thanks to a series of cases the Authors Guild brought and lost (embarrassingly so) — and the Open Library only allows one-to-one lending of ebooks against physical books. It is functionally equivalent to any other library in every way.

And this actually matters: we live at a time when these very same publishers are trying to use twisted interpretations of copyright law to insist that they can limit how libraries buy and lend ebooks in ways that simply are not possible under the law with regular books.

Also, there’s this bit of nonsense:

The lawsuit is being brought only against IA’s Open Library; it will not impact in any way the Wayback Machine or any other services IA offers.

This is laughable. The lawsuit is asking for millions and millions of dollars from the Internet Archive. If it loses the case, there’s a very strong likelihood that the entire Internet Archive will need to shut down, because it will be unable to pay. And even if the Internet Archive could survive, the idea that forcing this non-profit to fork over tens of millions of dollars wouldn’t have any impact on other parts of its offerings is laughable.

Fight for the Future has hit back at these accusations:

As expected, corporate publishing industry lobbyists have responded by attempting to undermine the demands of these authors by circulating false and condescending talking points, a frequent tactic lobbyists use to divert attention from the principled actions of activists.

The statement from the Authors Guild specifically asserts, without evidence, that “multiple authors” who signed this letter feel they were “misled”. This assertion is false and we challenge these lobbyists to either provide evidence for their claim or retract it. 

It’s repugnant for industry lobbying associations who claim to represent authors to dismiss the activism of author-signatories like Neil Gaiman, Chuck Wendig, Naomi Klein, Robert McNamee, Baratunde Thurston, Lawrence Lessig, Cory Doctorow, Annalee Newitz, and Douglas Rushkoff, or claim that these authors were somehow misled into signing a brief and clear letter issuing specific demands for the good of all libraries. Corporate publishing lobbyists are free to disagree with the views stated in our letter, but it’s unacceptable for them to make false claims about our organization or the authors who signed.

They also highlight how many authors who signed onto the letter talked about how proud they are that their books are available at the Internet Archive, which is not at all what you would expect if the Open Library was actually about “piracy.”

Author Elizabeth Kate Switaj said when signing: “My most recently published book is on the Internet Archive—and that delights me.” Dan Gillmor said: “Big Publishing would outlaw public libraries if it could—or at least make it impossible for libraries to buy and lend books as they have traditionally done, to enormous public benefit—and its campaign against the Internet Archive is a step toward that goal.” Sasha Costanza-Chock called the publishers’ actions against the Internet Archive “absolutely shameful” and Laura Gibbs said “it’s the library I use most, and I am proud to see my books there.”

They also rightly push back on the nonsense claims that FftF is “not independent” and is somehow a front for the Internet Archive. I know people at both organizations, and this assertion is laughable. The two organizations agree on many things, but are absolutely and totally independent. This is nothing but a smear from an Authors Guild that can’t even fathom that most authors don’t like the publishers, or the way the Authors Guild has become an organization that doesn’t look out for the best interests of all authors but rather just a few of the biggest names.

Source: Publishers Lose Their Shit After Authors Push Back On Their Attack On Libraries | Techdirt

EA Announces New Anti-Cheat Tech That Operates At The Kernel Level ie takes over your PC, can read and write everything on it

It seems anti-cheat technology is the new DRM. By that I mean that, with the gaming industry diving headfirst into the competitive online gaming scene, the concern over piracy has shifted into a concern over cheating making those online games less attractive to gamers. And the anti-cheat tech that companies are using is starting to make the gaming public every bit as itchy as DRM once did.

Consider that Denuvo’s own anti-cheat tech has already started down the same path as its DRM, getting ripped out of games shortly after release, after one game got review-bombed over just how intrusive it was. And then consider that Valve had to reassure gamers that its own anti-cheat technology wasn’t watching users’ browsing habits, given that the VAC platform was designed to sniff out kernel-level cheats. One notable Reddit thread had gamers comparing Valve to Electronic Arts as a result.

Which makes it perhaps more interesting that EA recently announced new anti-cheat technology that, yup, operates at the kernel level.

The new kernel-level EA Anti-Cheat (EAAC) tools will roll out with the PC version of FIFA 23 this month, EA announced, and will eventually be added to all of its multiplayer games (including those with ranked online leaderboards). But strictly single-player titles “may implement other anti-cheat technology, such as user-mode protections, or even forgo leveraging anti-cheat technology altogether,” EA Senior Director of Game Security & Anti-Cheat Elise Murphy wrote in a Tuesday blog post.

Unlike anti-cheat methods operating in an OS’s normal “user mode,” kernel-level anti-cheat tools provide a low-level, system-wide view of how cheat tools might mess with a game’s memory or code from the outside. That allows anti-cheat developers to detect a wider variety of cheating threats, as Murphy explained in an extensive FAQ.

The concern from gamers came quickly. You have to keep in mind that none of this occurs without the context of history. There’s a reason why, even today, a good chunk of the gaming public knows all about the Sony rootkit fiasco. They’re aware of the claims that DRM like Denuvo’s affects PC performance. They’ve heard plenty of horror stories about gaming companies, or other software companies, coopting security tools like this in order to slurp up all kinds of PII or user activity for non-gaming purposes. Hell, one of the more prolific antivirus companies recently announced a plan to also use customer machines for crypto-mining.

So it’s in that context that gamers are hearing that EA would please like access to the most base-level and sensitive parts of a customer’s PC, just to make sure that fewer people can cheat online in FIFA.

Privacy aside, some users might also worry that a new kernel-level driver could destabilize or hamper their system (à la Sony’s infamous music DRM rootkits). But Murphy promised that EAAC is designed to be “as performant and lightweight as possible. EAAC will have negligible impact on your gameplay.”

Kernel-level tools can also provide an appealing new attack surface for low-level security exploits on a user’s system. To account for that, Murphy said her team has “worked with independent, 3rd-party security and privacy assessors to validate EAAC does not degrade the security posture of your PC and to ensure strict data privacy boundaries.” She also promised daily testing and constant report monitoring to address any potential issues that pop up.

Gamers have heard these promises before. Those promises have been broken before. Chiding the public for being concerned at granting kernel-level access to their machines just to keep online gaming less ridden with cheaters is a tough sell.

Source: EA Announces New Anti-Cheat Tech That Operates At The Kernel Level | Techdirt

Firefly Aerospace reaches orbit with new Alpha rocket

A new aerospace company reached orbit with its second rocket launch and deployed multiple small satellites on Saturday.

Firefly Aerospace’s Alpha rocket lifted off from Vandenberg Space Force Base, California, in early morning darkness and arced over the Pacific.

“100% mission success,” Firefly tweeted later.

A day earlier, an attempt to launch abruptly ended when the countdown reached zero. The first-stage engines ignited but the rocket automatically aborted the liftoff.

The rocket’s payload included multiple small satellites designed for a variety of technology experiments and demonstrations, as well as educational purposes.

The mission, dubbed “To The Black,” was the company’s second demonstration flight of its entry into the market for small satellite launchers.

The first Alpha was launched from Vandenberg on Sept. 2, 2021, but did not reach orbit.

One of the four first-stage engines shut down prematurely but the rocket continued upward on three engines into the supersonic realm where it tumbled out of control.

The rocket was then intentionally destroyed by an explosive flight termination system.

Firefly Aerospace said the premature shutdown was traced to an electrical issue, but that the rocket had otherwise performed well and useful data was obtained during the nearly 2 1/2 minutes of flight.

Alpha is designed to carry payloads weighing as much as 2,579 pounds (1,170 kilograms) to low Earth orbit.

Other competitors in the burgeoning small-launch market include Rocket Lab and Virgin Orbit, both headquartered in Long Beach, California.

Firefly Aerospace, based in Cedar Park, Texas, is also planning a larger rocket, a vehicle for in-space operations, and a lander for carrying NASA and commercial payloads to the surface of the moon.

Source: Firefly Aerospace reaches orbit with new Alpha rocket

Australian Optus telco data debacle gets worse and worse – non-existent security and no govt regulation

[…]

The alleged hacker – who threatened to sell the data unless a ransom was paid – took names, birth dates, phone numbers, addresses, and passport, healthcare and drivers’ license details from Optus, the country’s second-largest telecommunications company.

Of the 10 million people whose data was exposed, almost 3 million had crucial identity documents accessed.

Across the country, current and former customers have been rushing to change their official documents as the US Federal Bureau of Investigation joined Australia’s police, cybersecurity, and spy agencies to investigate the breach.

The Australian government is looking at overhauling privacy laws after it emerged that Optus – a subsidiary of global telecommunications firm Singtel – had kept private information for years, even after customers had cancelled their contracts.

It is also considering a European Union-style system of financial penalties for companies that fail to protect their customers.

An error-riddled message from someone claiming to be the culprit and calling themselves “Optusdata” demanded a relatively modest US$1m ransom for the data.

[…]

That demand was followed by a threat to release the records of 10,000 people per day until the money was paid. A batch of 10,000 files was later published online.

As Optus and the federal government dealt with the fallout, the alleged hacker had a change of mind and offered their “deepest apology”.

“Too many eyes,” they said. “We will not sale data to anyone. We cant if we even want to: personally deleted data.”

Optus chief Kelly Bayer Rosmarin initially claimed the company had fallen prey to a sophisticated attack and said the associated IP address was “out of Europe”. She said police were “all over” the apparent release of information and told ABC radio that the security breach was “not as being portrayed”.

Experts have said Optus had an application programming interface (API) online that did not need authorisation or authentication to access customer data. “Any user could have requested any other user’s information,” Corey J Ball, senior manager of cyber security consulting for Moss Adams, said.
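
The flaw class being described is what security folks call an insecure direct object reference (IDOR): the endpoint identifies the record being requested but never checks who is asking. A minimal hypothetical sketch in Python/Flask of the vulnerable pattern and the obvious fix (the routes and data are invented, not Optus’s actual API):

```python
# Hypothetical illustration of an unauthenticated customer-data endpoint.
from flask import Flask, abort, jsonify, session

app = Flask(__name__)
app.secret_key = "change-me"  # needed for session support

CUSTOMERS = {
    1: {"name": "A. Example", "passport": "PA0000001"},
    2: {"name": "B. Example", "passport": "PB0000002"},
}

@app.route("/v1/customer/<int:cid>")        # vulnerable pattern (IDOR)
def get_customer(cid):
    return jsonify(CUSTOMERS.get(cid, {}))  # no authentication or authorization

@app.route("/v2/customer/<int:cid>")        # minimal fix
def get_customer_checked(cid):
    if session.get("customer_id") != cid:   # callers may only read their own record
        abort(403)
    return jsonify(CUSTOMERS.get(cid, {}))
```

With sequential integer IDs and no login required, pulling every record is just a loop over IDs, which would be consistent with how quickly millions of records were reportedly taken.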

[…]

Optus ‘left the window open’

The cyber security minister, Clare O’Neil, has questioned why Optus had held on to that much personal information for so long.

She also scoffed at the idea the hack was sophisticated.

“What is of concern for us is how what is quite a basic hack was undertaken on Optus,” she told the ABC. “We should not have a telecommunications provider in this country which has effectively left the window open for data of this nature to be stolen.”

[…]

Asked about Rosmarin’s comments that the attack was sophisticated, O’Neil said: “Well, it wasn’t.”

On Friday, prime minister Anthony Albanese said what had happened was “unacceptable”. He said Optus had agreed to pay for replacement passports for those affected.

“Australian companies should do everything they can to protect your data,” Albanese said.

“That’s why we’re also reviewing the Privacy Act – and we’re committed to making privacy laws stronger.”

[…]

Australia currently has a $2.2m limit on corporate penalties, and there are calls for harsher penalties to encourage companies to do everything they can to protect consumers.

In the EU, under the General Data Protection Regulation, companies can be fined up to 4% of their annual global turnover. Optus’s revenue last financial year was more than $7bn.

[…]

Source: The biggest hack in history: Australians scramble to change passports and driver licences after Optus telco data debacle | Optus | The Guardian

If the government provides no legal incentive to tighten security and privacy, then companies won’t invest in them.

Blizzard really really wants your phone number to play its games – personal data grab and security risk

When Overwatch 2 replaces the original Overwatch on Oct. 4, players will be required to link a phone number to their Battle.net accounts. If you don’t, you won’t be able to play Overwatch 2 — even if you’ve already purchased Overwatch. The same two-factor step, called SMS Protect, will also be used on all Call of Duty: Modern Warfare 2 accounts when that game launches, and new Call of Duty: Modern Warfare accounts.

Blizzard Entertainment announced SMS Protect and other safety measures ahead of Overwatch 2’s release. Blizzard said it implemented these controls because it wanted to “protect the integrity of gameplay and promote positive behavior in Overwatch 2.”

[…]

SMS Protect is a security feature that has two purposes: to keep players accountable for what Blizzard calls “disruptive behavior,” and to protect accounts if they’re hacked. It requires all Overwatch 2 players to attach a unique phone number to their account. Blizzard said SMS Protect will target cheaters and harassers; if an account is banned, it’ll be harder for them to return to Overwatch 2. You can’t just enter any old phone number — you actually have to have access to a phone receiving texts to that number to get into your account.

[…]

Blizzard said these phone notifications will be used to approve password resets — meaning someone else won’t be able to change your password without the notification code it’ll send to your mobile phone. Blizzard said it will also send you a text message if your account is locked out after a “a suspicious login attempt,” or if your password or security features are changed.
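
For context, this is roughly how SMS one-time codes work server-side in general; a generic sketch, not Blizzard’s implementation (code length, expiry window, and storage choices are all assumptions):

```python
# Generic one-time SMS code flow: issue a short-lived code, store only a hash,
# compare on entry, and invalidate after one successful use.
import hashlib
import hmac
import secrets
import time

CODES = {}  # account_id -> (sha256 hex of code, expiry unix time)

def issue_code(account_id: str) -> str:
    code = f"{secrets.randbelow(10**6):06d}"  # e.g. "042917"
    CODES[account_id] = (hashlib.sha256(code.encode()).hexdigest(),
                         time.time() + 300)   # valid for 5 minutes
    return code  # handed to the SMS gateway; never stored in plaintext

def verify_code(account_id: str, attempt: str) -> bool:
    digest, expiry = CODES.get(account_id, ("", 0.0))
    ok = time.time() < expiry and hmac.compare_digest(
        digest, hashlib.sha256(attempt.encode()).hexdigest())
    if ok:
        del CODES[account_id]  # single use
    return ok
```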

Source: Overwatch 2 SMS Protect: What is it? Why does Blizzard require my phone number? – Polygon

So this is a piece of ‘real’ information you have to give them – but what if you move country and change mobile numbers? What if you lose your mobile? What if Blizzard gets hacked (again) and your number is taken? A phone number is either something that does change on you or something very hard to change. It shows you that basically Blizzard sees your data as something it can grab for free – you are their product. Even though the games are technically free to play, in practice they make a killing off the items you buy in-game in order to be cool.

They will probably get away with it though, just as they got away with installing spyware on your PC, and with what they do when you attend their events, under pretty flimsy pretenses.

FCC rules Satellites must be deorbited within five years of completing missions instead of 25 years

The US Federal Communications Commission (FCC) has adopted new rules to address the growing risk of “space junk” – abandoned satellites, rockets and other debris. The new “5-year rule” will require operators of satellites in low Earth orbit to deorbit them within five years of completing their missions. That’s significantly less time than the previous guideline of 25 years.

“But 25 years is a long time,” FCC Chairwoman Jessica Rosenworcel said in a statement. “There is no reason to wait that long anymore, especially in low-earth orbit. The second space age is here. For it to continue to grow, we need to do more to clean up after ourselves so space innovation can continue to respond.”

Rosenworcel noted that around 10,000 satellites weighing “thousands of metric tons” have been launched since 1957, with over half of those now defunct. The new rule “will mean more accountability and less risk of collisions that increase orbital debris and the likelihood of space communication failures.”

[…]

Source: Satellites must be deorbited within five years of completing missions, FCC rules | Engadget

Why five years? It’s still too long!

Researchers detect the first definitive proof of elusive sea level fingerprints

When ice sheets melt, something strange and highly counterintuitive happens to sea levels.

It works basically like a seesaw. In the areas close to where these masses of glacial ice melt, sea levels fall. Yet thousands of miles away, they actually rise. This happens largely because a melting ice sheet loses mass, and with it gravitational pull on the nearby ocean, causing the water to disperse away. The patterns have come to be known as fingerprints, since each melting glacier or ice sheet uniquely impacts sea level. Elements of the concept—which lies at the heart of the understanding that sea levels don’t rise uniformly—have been around for over a century, and modern sea level science has been built around it. But there has long been a hitch to the widely accepted theory: a sea level fingerprint had never been definitively detected by researchers.

A team of scientists—led by Harvard alumna Sophie Coulson and featuring Harvard geophysicist Jerry X. Mitrovica—believe they have detected the first. The findings are described in a new study published Thursday in Science. The work validates almost a century of sea level science and helps solidify confidence in models predicting future sea level rise.

[…]

Sea level fingerprints have been notoriously difficult to detect because of the major fluctuations in ocean levels brought on by changing tides, currents, and winds. What makes it such a conundrum is that researchers are trying to detect millimeter level motions of the water and link them to melting glaciers thousands of miles away.

[…]

The new study uses newly released satellite data from a European marine monitoring agency that captures over 30 years of observations in the vicinity of the Greenland Ice Sheet and much of the surrounding ocean, allowing the team to pick out the seesaw in ocean levels produced by the fingerprint.

The satellite data caught the eye of Mitrovica and colleague David Sandwell of the Scripps Institution of Oceanography. Typically, satellite records from this region had extended only up to the southern tip of Greenland, but in this new release the data reached ten degrees higher in latitude, allowing them to eyeball a potential hint of the seesaw caused by the fingerprint.

[…]

Coulson quickly collected three decades’ worth of the best observations she could find on ice height change within the Greenland Ice Sheet, as well as reconstructions of glacier height change across the Canadian Arctic and Iceland. She combined these different datasets to create predictions of sea level change in the region from 1993 to 2019, which she then compared with the new satellite data. The fit was perfect: a one-to-one match that showed with more than 99.9% confidence that the pattern of sea level change revealed by the satellites is a fingerprint of the melting ice sheet.
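
For intuition about what a “one-to-one match” means in practice, here is a toy version of the comparison step with synthetic numbers (not the study’s data or method): regress the observed signal on the prediction and look for a slope near 1 with high correlation.

```python
# Toy comparison of a predicted sea-level-change field with noisy observations.
import numpy as np

rng = np.random.default_rng(0)
predicted = rng.normal(0.0, 1.0, 500)             # mm/yr from ice models (made up)
observed = predicted + rng.normal(0.0, 0.1, 500)  # "satellite altimetry" plus noise

slope, intercept = np.polyfit(predicted, observed, 1)
r = np.corrcoef(predicted, observed)[0, 1]
print(f"slope={slope:.3f} intercept={intercept:.3f} r={r:.3f}")
# slope near 1 and r near 1 would indicate a one-to-one match
```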

[…]

Source: Researchers detect the first definitive proof of elusive sea level fingerprints

EU proposes rules making it easier to sue AI systems

BRUSSELS, Sept 28 (Reuters) – The European Commission on Wednesday proposed rules making it easier for individuals and companies to sue makers of drones, robots and other products equipped with artificial intelligence software for compensation for harm caused by them.

The AI Liability Directive aims to address the increasing use of AI-enabled products and services and the patchwork of national rules across the 27-country European Union.

Under the draft rules, victims can seek compensation for harm to their life, property, health and privacy due to the fault or omission of a provider, developer or user of AI technology, or for discrimination in a recruitment process using AI.

You can find the EU publication here: New liability rules on products and AI to protect consumers and foster innovation

“We want the same level of protection for victims of damage caused by AI as for victims of old technologies,” Justice Commissioner Didier Reynders told a news conference.

The rules lighten the burden of proof on victims with a “presumption of causality”, which means victims only need to show that a manufacturer or user’s failure to comply with certain requirements caused the harm and then link this to the AI technology in their lawsuit.

Under a “right of access to evidence”, victims can ask a court to order companies and suppliers to provide information about high-risk AI systems so that they can identify the liable person and the fault that caused the damage.

The Commission also announced an update to the Product Liability Directive that means manufacturers will be liable for all unsafe products, tangible and intangible, including software and digital services, and also after the products are sold.

Users can sue for compensation when software updates render their smart-home products unsafe or when manufacturers fail to fix cybersecurity gaps. Those with unsafe non-EU products will be able to sue the manufacturer’s EU representative for compensation.

The AI Liability Directive will need to be agreed with EU countries and EU lawmakers before it can become law.

Source: EU proposes rules making it easier to sue drone makers, AI systems | Reuters

This is quite interesting, especially from the perspective of people who think that AIs should get more far-reaching rights, e.g. the possibility of owning their own copyrights.

Hackers Are Hypervisor Hijacking in the wild now

For decades, virtualization software has offered a way to vastly multiply computers’ efficiency, hosting entire collections of computers as “virtual machines” on just one physical machine. And for almost as long, security researchers have warned about the potential dark side of that technology: theoretical “hyperjacking” and “Blue Pill” attacks, where hackers hijack virtualization to spy on and manipulate virtual machines, with potentially no way for a targeted computer to detect the intrusion. That insidious spying has finally jumped from research papers to reality with warnings that one mysterious team of hackers has carried out a spree of “hyperjacking” attacks in the wild.

Today, Google-owned security firm Mandiant and virtualization firm VMware jointly published warnings that a sophisticated hacker group has been installing backdoors in VMware’s virtualization software on multiple targets’ networks as part of an apparent espionage campaign. By planting their own code in victims’ so-called hypervisors—VMware software that runs on a physical computer to manage all the virtual machines it hosts—the hackers were able to invisibly watch and run commands on the computers those hypervisors oversee. And because the malicious code targets the hypervisor on the physical machine rather than the victim’s virtual machines, the hackers’ trick multiplies their access and evades nearly all traditional security measures designed to monitor those target machines for signs of foul play.

“The idea that you can compromise one machine and from there have the ability to control virtual machines en masse is huge,” says Mandiant consultant Alex Marvi. And even closely watching the processes of a target virtual machine, he says, an observer would in many cases see only “side effects” of the intrusion, given that the malware carrying out that spying had infected a part of the system entirely outside its operating system.

[…]

In a technical writeup, Mandiant describes how the hackers corrupted victims’ virtualization setups by installing a malicious version of VMware’s software installation bundle to replace the legitimate version. That allowed them to hide two different backdoors, which Mandiant calls VirtualPita and VirtualPie, in VMware’s hypervisor program known as ESXi. Those backdoors let the hackers surveil and run their own commands on virtual machines managed by the infected hypervisor. Mandiant notes that the hackers didn’t actually exploit any patchable vulnerability in VMware’s software, but instead used administrator-level access to the ESXi hypervisors to plant their spy tools. That admin access suggests that their virtualization hacking served as a persistence technique, allowing them to hide their espionage more effectively long-term after gaining initial access to the victims’ network through other means.
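
Mandiant’s writeup points defenders at the list of installed VIBs (ESXi’s software packages). A rough triage sketch along those lines, assuming it runs in an ESXi shell where esxcli is available; exact CSV column names can vary by ESXi version:

```python
# Flag installed VIBs that are not at a trusted acceptance level.
import csv
import io
import subprocess

TRUSTED = {"VMwareCertified", "VMwareAccepted", "PartnerSupported"}

out = subprocess.run(
    ["esxcli", "--formatter=csv", "software", "vib", "list"],
    capture_output=True, text=True, check=True).stdout

for row in csv.DictReader(io.StringIO(out)):
    fields = {k.replace(" ", ""): v for k, v in row.items() if k}  # normalize keys
    if fields.get("AcceptanceLevel") not in TRUSTED:
        print("review this VIB:", fields.get("Name"), fields.get("Vendor"))
```

Since the attackers held admin access to the hypervisors, a clean VIB list is not proof of a clean host; it is only a starting point for triage.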

[…]

Source: Mystery Hackers Are ‘Hyperjacking’ Targets for Insidious Spying | WIRED

CIA betrayed informants with shoddy covert comms websites

For almost a decade, the US Central Intelligence Agency communicated with informants abroad using a network of websites with hidden communications capabilities.

The idea being: informants could use secret features within innocent-looking sites to quietly pass back information to American agents. So poorly were these 885 front websites designed, though, according to security research group Citizen Lab and Reuters, that they betrayed those using them to spy for the CIA.

Citing a year-long investigation into the CIA’s handling of its informants, Reuters on Thursday reported that Iranian engineer Gholamreza Hosseini had been identified as a spy by Iranian intelligence, thanks to CIA negligence.

“A faulty CIA covert communications system made it easy for Iranian intelligence to identify and capture him,” the Reuters report stated.

Word of a catastrophic failure in CIA operational security initially surfaced in 2018, when Yahoo! News reporters Zach Dorfman and Jenna McLaughlin revealed “a compromise of the agency’s internet-based covert communications system used to interact with its informants.”

The duo’s report indicated that the system involved a website and claimed “more than two dozen sources died in China in 2011 and 2012” as a result of the compromise. Another 30 operatives in Iran were said to have been identified by Iranian intelligence, though fewer of them were killed as a consequence of discovery than in China.

Reuters found one of the CIA websites, iraniangoals[.]com, in the Internet Archive and told Citizen Lab about the site earlier this year. Bill Marczak, from Citizen Lab, and Zach Edwards, from analytics consultancy Victory Medium, subsequently examined the website and deduced that it had been part of a CIA-run network of nearly 900 websites, localized in at least 29 languages, and intended for viewing in at least 36 countries.

These websites, said to have operated between 2004 and 2013, presented themselves as harmless sources of news, weather, sports, healthcare, or other information. But they are alleged to have facilitated covert communications, and to have done serious harm to the US intelligence community and to those risking their lives to help the United States.

“The websites included similar Java, JavaScript, Adobe Flash, and CGI artifacts that implemented or apparently loaded covert communications apps,” Citizen Lab explains in its report. “In addition, blocks of sequential IP addresses registered to apparently fictitious US companies were used to host some of the websites. All of these flaws would have facilitated discovery by hostile parties.”

The websites were designed to look like common commercial publications but included secret triggering mechanisms to open a covert communication channel. For example, the supposed search box on iraniangoals[.]com is actually a password input field for accessing the site’s hidden comms functionality – which you’d never guess unless you inspected the website code and saw the input field identified as type="password", or unless the conversion of typed text into hidden • characters gave it away.

Entering the appropriate password opened a messaging interface that spies could use to communicate.
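
That particular disguise is mechanically easy to spot once you know to look for it: flag any page whose visible “search” box is really a password input. A minimal scanner using only the Python standard library (the sample HTML below is invented):

```python
# Find <input type="password"> fields, whatever they are named or labeled as.
from html.parser import HTMLParser

class PasswordFieldFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.hits = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "input" and a.get("type") == "password":
            self.hits.append(a)

html = '<form><input type="password" name="q" placeholder="Search"></form>'
finder = PasswordFieldFinder()
finder.feed(html)
for attrs in finder.hits:
    # On a page posing as a plain news site, any hit here is suspicious.
    print("password input posing as:", attrs.get("name"), attrs.get("placeholder"))
```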

Citizen Lab says it has limited the details contained in its report because some of the websites point to former and possibly still active intelligence agents. It says it intends to disclose some details to US government oversight bodies. The security group blames the CIA’s “reckless infrastructure” for the alleged agent deaths. Zach Edwards put it more bluntly on Twitter.

“Sloppy ass website widget architecture plus ridiculous hosting/DNS decisions by CIA/CIA contractors likely resulted in dozens of CIA spies being killed,” he said.

What makes the infrastructure ridiculous or reckless is that many of the websites had similarities with others in the network and that their hosting infrastructure appears to have been purchased in bulk from the same internet providers and to have often shared the same server space.

“The result was that numerical identifiers, or IP addresses, for many of these websites were sequential, much like houses on the same street,” Reuters explained.
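
That kind of clustering is trivial to automate, which is exactly the problem. A small sketch using the stdlib ipaddress module; the hostnames and addresses below are illustrative, not the real network:

```python
# Group hosts whose IPs form consecutive runs ("houses on the same street").
import ipaddress

SITES = {
    "site-a.example": "203.0.113.10",
    "site-b.example": "203.0.113.11",
    "site-c.example": "203.0.113.12",
    "unrelated.example": "198.51.100.7",
}

def as_int(ip: str) -> int:
    return int(ipaddress.ip_address(ip))

by_ip = sorted(SITES.items(), key=lambda kv: as_int(kv[1]))
run = [by_ip[0]]
for host, ip in by_ip[1:]:
    if as_int(ip) - as_int(run[-1][1]) == 1:
        run.append((host, ip))      # extend the consecutive run
    else:
        if len(run) > 1:
            print("sequential block:", run)
        run = [(host, ip)]
if len(run) > 1:
    print("sequential block:", run)
```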

Such basic errors continue to trip up spy agencies. Investigative research group Bellingcat, for example, has used the sequential numbering of passports to help identify the fake personas of Russian GRU agents. It described this blunder as “terrible spycraft.”

[…]

Source: CIA betrayed informants with shoddy covert comms websites • The Register

Neil Gaiman, Cory Doctorow And Other Authors Publish Letter Protesting Lawsuit Against Internet Library

A group of authors and other creative professionals are lending their names to an open letter protesting publishers’ lawsuit against the Internet Archive Library, characterizing it as one of a number of efforts to curb libraries’ lending of ebooks.

Authors including Neil Gaiman, Naomi Klein, and Cory Doctorow lent their names to the letter, which was organized by the public interest group Fight for the Future.

“Libraries are a fundamental collective good. We, the undersigned authors, are disheartened by the recent attacks against libraries being made in our name by trade associations such as the American Association of Publishers and the Publishers Association: undermining the traditional rights of libraries to own and preserve books, intimidating libraries with lawsuits, and smearing librarians,” the letter states.

A group of publishers sued the Internet Archive in 2020, claiming that its open library violates copyright by producing “mirror image copies of millions of unaltered in-copyright works for which it has no rights” and then distributes them “in their entirety for reading purposes to the public for free, including voluminous numbers of books that are commercially available.” They also contend that the archive’s scanning undercuts the market for e-books.

The Internet Archive says that its lending of the scanned books is akin to a traditional library. In its response to the publishers’ lawsuit, it warns of the ramifications of the litigation and claims that publishers “would like to force libraries and their patrons into a world in which books can only be accessed, never owned, and in which availability is subject to the rightsholders’ whim.”

The letter also calls for enshrining “the right of libraries to permanently own and preserve books, and to purchase these permanent copies on reasonable terms, regardless of format,” and condemns the characterization of library advocates as “mouthpieces” for big tech.

“We fear a future where libraries are reduced to a sort of Netflix or Spotify for books, from which publishers demand exorbitant licensing fees in perpetuity while unaccountable vendors force the spread of disinformation and hate for profit,” the letter states.

The litigation is in the summary judgment stage in U.S. District Court in New York.

Hachette Book Group, HarperCollins Publishers, John Wiley & Sons Inc and Penguin Random House are plaintiffs in the lawsuit.

[…]

Source: Authors Publish Letter Protesting Lawsuit Against Internet Library – Deadline

Open internet at stake in UN ITU secretary-general election

[…]  this year’s event has become a geopolitical football – and possibly a turning point for internet governance – thanks to the two candidates running in an election for the position of ITU secretary-general.

[…]

The USA has put forward Doreen Bogdan-Martin for the gig.

[…]

Russia has nominated Rashid Ismailov for the job. A former deputy minister at Russia’s Ministry of Telecom and Mass Communication, Ismailov has also worked for Huawei.

Speaking of Huawei, in 2019 it, along with China Mobile, China Unicom, and China’s Ministry of Industry and Information Technology (MIIT), did something unexpected: they submitted a proposal to the ITU for a standard called New IP to supersede the Internet Protocol. The entities behind New IP claimed it is needed because existing protocols don’t include sufficient quality-of-service guarantees, so netizens will struggle with latency-sensitive future applications, and also because current standards lack intrinsic security.

New IP is controversial for two reasons.

One is that the ITU does not oversee IP (as in, Internet Protocol, the standard that helps glue our modern communications together). That’s the IETF’s job. The IETF is a multi-stakeholder organization that accepts ideas from anywhere – the QUIC protocol that’s potentially on the way to replacing TCP originated at Google but was developed into a standard by the IETF. The ITU is a United Nations body so represents nation-states.

The other is that New IP proposes a Many Networks – or ManyNets – approach to global internetworking, with distinct, individual networks allowed to set their own rules on access to systems and content. Some of the rules envisioned under New IP could require individuals to register for network access, and allow central control – even shutdowns – of traffic on a national network.

New IP is of interest to those who like the idea of a “sovereign internet” such as China’s, on which the government conducts pervasive surveillance and extensive censorship.

China argues it can do as it pleases within its borders. But New IP has the potential to make some of the controls China uses on its local internet part of global protocols.

Another nation increasingly interested in a sovereign internet is Russia, which was not particularly tolerant of free speech before its illegal invasion of Ukraine and has since implemented sweeping censorship across its patch of the internet.

The possibility of Rashid Ismailov being elected ITU boss, and potentially driving adoption of censorship-enabling New IP around the world, therefore has plenty of people worried – not least because in 2021 Russia and China issued a joint statement that called for “all States [to] have equal rights to participate in global-network governance, increasing their role in this process and preserving the sovereign right of States to regulate the national segment of the Internet.”

[…]

In an email to The Register sent in a personal capacity, Lars Eggert, chair of the IETF, stated: “I personally would wish for the ITU to reaffirm its commitment to the consensus-based multi-stakeholder model that has been the foundation for the success of the Internet, and is at the heart of the open standards development model the IETF and other standards developing organizations follow when improving the overall Internet architecture and its protocol components.”

He added, “I personally would like to see an ITU leadership emerge that strengthens the ITU’s commitment to the above-mentioned approach to Internet evolution.”

Eggert pointed out an official IETF response to New IP that criticizes its potential for central control and argues that existing IETF processes and projects already address the issues the China-derived proposal seeks to address.

The Internet Society, the non-profit that promotes open internet development, is also concerned about the proceedings at the ITU event.

“Plenipotentiary-22 could be a turning point for the Internet,” the organization stated in a mail to The Register. “The multi-stakeholder Internet governance model and principles are being called into question by some ITU Member States and there are multilateral processes aiming to position governments as the main decision-makers regarding Internet governance.”

The society told The Register: “Internet technical standards must remain within the domain of the appropriate standards bodies, such as the IETF, where work that intends to update, amend, or develop Internet technical standards must be presented.”

[…]

Source: Open internet at stake in UN ITU secretary-general election

Subreddit Discriminates Against Anyone Who Doesn’t Call Texas Governor Greg Abbott ‘A Little Piss Baby’ To Highlight Absurdity Of Content Moderation Law Designed for White Supremacists

Last year, I tried to create a “test suite” of websites that any new internet regulation ought to be “tested” against. The idea was that regulators were so obsessively focused on the biggest of the big guys (i.e., Google, Meta) that they never bothered to realize how it might impact other decently large websites that involved totally different setups and processes. For example, it’s often quite impossible to figure out how a regulation about Google and Facebook content moderation would work on sites like Wikipedia, Github, Discord, or Reddit.

Last week, we called out that Texas’s HB 20 social media content moderation law almost certainly applies to sites like Wikipedia and Reddit, yet I couldn’t see any fathomable way in which those sites could comply, given that so much of the moderation on each is driven by users rather than the company. It’s been funny watching supporters of the law try to insist that this is somehow easy for Wikipedia (probably the most transparent larger site on the internet) to comply with by being “more transparent and open access.”

If you somehow can’t see that tweet or screenshot, it’s a Trumpist defender of the law responding to someone asking how Wikipedia can comply with the law, saying:

Wikipedia would have to offer more transparent and open access to their platform, which would allow truth to flourish over propaganda there? Is that what you’re worried about, or what is it?

To which a reasonably perplexed Wikipedia founder Jimmy Wales rightly responds:

What on earth are you talking about? It’s like you are writing from a different dimension.

Anyway… it seems some folks on Reddit are realizing the absurdity of the law and trying to demonstrate it in the most internety way possible. Michael Vario alerts us that the r/PoliticalHumor subreddit is “messing with Texas” by requiring every comment to include the phrase “Greg Abbott is a little piss baby” or be deleted in a fit of content moderation discrimination in violation of the HB20 law against social media “censorship.”

Until further notice, all comments posted to this subreddit must contain the phrase “Greg Abbott is a little piss baby”

There is a reason we’re doing this: the state of Texas has passed H.B. 20 (full text here), which is a ridiculous attempt to control social media. Just this week, an appeals court reinstated the law after a different court had declared it unconstitutional. Vox has a pretty easy-to-understand writeup, but the crux of the matter is that the law attempts to force social media companies to host content they do not want to host. The law also requires moderators to not censor any specific point of view, and the language is so vague that you must allow discussion about human cannibalization if you have users saying cannibalization is wrong. Obviously, there are all sorts of real world problems with it, the obvious ones being forced to host white nationalist ideology or insurrectionist ideation. At the risk of editorializing, that might be a feature, not a bug for them.

Anyway, Reddit falls into a weird category with this law. The actual employees of the company Reddit do, maybe, one percent of the moderation on the site. The rest is handled by ~~disgusting jannies~~ volunteer moderators, who, Reddit has made quite clear over the years, aren’t agents of Reddit (mainly so they don’t lose millions of dollars every time a mod approves something vaguely related to Disney and violates their copyright). It’s unclear whether we count as users or moderators in relation to this law, and none of us live in Texas anyway. They can come after all 43 dollars in my bank account if they really want to, but Virginia has no obligation to extradite or anything.

We realized what a ripe situation this is, so we’re going to flagrantly break this law. Partially to raise awareness of the bullshit of it all, but mainly because we find it funny. Also, we like this Constitution thing. Seems like it has some good ideas.

They also include a link to the page where people can file a complaint with the Texas Attorney General, Ken Paxton, asking him to investigate whether the deletion of any comments that don’t claim that his boss, Governor Greg Abbott, is “a little piss baby” is viewpoint discrimination in violation of the law.

Source: Subreddit Discriminates Against Anyone Who Doesn’t Call Texas Governor Greg Abbott ‘A Little Piss Baby’ To Highlight Absurdity Of Content Moderation Law | Techdirt

New theory concludes that the origin of life on Earth-like planets is likely

Does the existence of life on Earth tell us anything about the probability of abiogenesis—the origin of life from inorganic substances—arising elsewhere? That’s a question that has confounded scientists, and anyone else inclined to ponder it, for some time.

A widely accepted argument from Australian-born astrophysicist Brandon Carter holds that the selection effect of our own existence puts constraints on what we can observe. Since we had to find ourselves on a planet where abiogenesis occurred, nothing can be inferred about the probability of life elsewhere from this knowledge alone.

At best, he argued, the knowledge of life on Earth is of neutral value. Another way of looking at it is that Earth can’t be considered a typical Earth-like planet because it hasn’t been selected at random from the set of all Earth-like planets.

However, a new paper by Daniel Whitmire, a retired astrophysicist who currently teaches mathematics at the University of Arkansas, argues that Carter used faulty logic. Though Carter’s theory has become widely accepted, Whitmire argues that it suffers from what’s known as “the old evidence problem” in Bayesian confirmation theory, which is used to update a theory or hypothesis in light of new evidence.

After giving a few examples of how Bayes’ theorem is employed to calculate probabilities and what role old evidence plays, Whitmire turns to what he calls the conception analogy.

As he explains, “One could argue, like Carter, that I exist regardless of whether my conception was hard or easy, and so nothing can be inferred about whether my conception was hard or easy from my existence alone.”

In this analogy, “hard” means contraception was used; “easy” means no contraception was used. In each case, Whitmire assigns probability values to these propositions.

Whitmire continues, “However, my existence is old evidence and must be treated as such. When this is done the conclusion is that it is much more probable that my conception was easy. In the abiogenesis case of interest, it’s the same thing. The existence of life on Earth is old evidence and just like in the conception analogy the probability that abiogenesis is easy is much more probable.”
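To make the update concrete, here’s a minimal sketch of the Bayesian calculation Whitmire describes, with illustrative probabilities of our own choosing (the paper assigns its own values):

```python
# Illustrative numbers only -- not taken from Whitmire's paper.
p_easy = p_hard = 0.5        # priors: conception without / with contraception
p_exist_easy = 0.90          # chance I come to exist if conception was "easy" (assumed)
p_exist_hard = 0.05          # chance I come to exist if conception was "hard" (assumed)

# Treat "I exist" as evidence and condition on it, old evidence or not (Bayes' theorem).
p_exist = p_exist_easy * p_easy + p_exist_hard * p_hard
p_easy_given_exist = p_exist_easy * p_easy / p_exist

print(f"P(easy | I exist) = {p_easy_given_exist:.2f}")  # ~0.95: "easy" strongly favored
```

Carter’s move amounts to refusing to condition on the evidence at all; once the old evidence is fed through the theorem, the posterior shifts sharply toward “easy.”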

In other words, the evidence of life on Earth is not of neutral value in making the case for life on similar planets. As such, our existence suggests that life is more likely to emerge on other Earth-like planets, maybe even on the recently discovered “super-Earth” planet LP 890-9b, some 100 light-years away.

Those with a taste for the mathematics can read Whitmire’s paper, “Abiogenesis: The Carter Argument Reconsidered,” in the International Journal of Astrobiology.


More information: Daniel P. Whitmire, Abiogenesis: the Carter argument reconsidered, International Journal of Astrobiology (2022). DOI: 10.1017/S1473550422000350

Source: New theory concludes that the origin of life on Earth-like planets is likely

Australia To Overhaul Privacy Laws After Optus data breach exposes 40% of AU population

Following one of the biggest data breaches in Australian history, the government of Australia is planning to get stricter on requirements for disclosure of cyber attacks. From a report: On Monday, Prime Minister Anthony Albanese told Australian radio station 4BC that the government intended to overhaul privacy legislation so that any company suffering a data breach was required to share details with banks about customers who had potentially been affected in an effort to minimize fraud. Under current Australian privacy legislation, companies are prevented from sharing such details about their customers with third parties.

The policy announcement was made in the wake of a huge data breach last week, which affected Australia’s second-largest telecom company, Optus. Hackers managed to access a vast amount of potentially sensitive information on up to 9.8 million Optus customers — close to 40 percent of the Australian population. Leaked data included name, date of birth, address, contact information, and in some cases, driver’s license or passport ID numbers. Reporting from ABC News Australia suggested the breach may have resulted from an improperly secured API that Optus developed to comply with regulations around providing users multifactor authentication options.
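Optus hasn’t confirmed the technical specifics, but the class of bug described, an internet-facing API that returns customer records without verifying the caller, is well known. A generic, hypothetical sketch of the insecure pattern and a safer variant, in Python/Flask (none of this is Optus’s actual code):

```python
# Generic illustration of an "improperly secured API" -- hypothetical, not Optus code.
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

CUSTOMERS = {1001: {"name": "Jane Citizen", "dob": "1990-01-01"}}  # stand-in data
VALID_TOKENS = {"demo-token"}  # stand-in for real credential verification

# Insecure pattern: no authentication, and sequential IDs invite enumeration.
@app.route("/api/customer/<int:cid>")
def customer_insecure(cid: int):
    return jsonify(CUSTOMERS.get(cid, {}))

# Safer pattern: refuse the request unless the caller presents a valid token.
@app.route("/api/v2/customer/<int:cid>")
def customer_authed(cid: int):
    token = request.headers.get("Authorization", "").removeprefix("Bearer ")
    if token not in VALID_TOKENS:
        abort(401)
    return jsonify(CUSTOMERS.get(cid, {}))
```

An unauthenticated endpoint combined with predictable customer identifiers lets anyone walk the ID space and harvest records, which would match the scale of exposure reported here.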

Source: Australia To Overhaul Privacy Laws After Massive Data Breach – Slashdot

NSA whistleblower Edward Snowden granted Russian citizenship

On Monday, Vladimir Putin, President of the Russian Federation, issued a decree [PDF, not secure] naming Snowden (#53), among others, as being granted the boon of Russian citizenship.

[…]

While Snowden’s status as a whistleblower is disputed by the US government, the surveillance apparatus he exposed – the bulk collection of US phone records – was found to be unlawful.

Snowden has been living in Russia since 2013, when the US charged him with espionage and he flew from Hong Kong to Moscow’s Sheremetyevo International Airport with the help of WikiLeaks, ending up stranded in Russia on a canceled passport. He was granted asylum and held temporary residency until October 2020, when he became a permanent resident. He and his wife Lindsay reportedly applied for citizenship the following month.

The citizenship comes at an awkward time. Putin last week signed what he described as a “partial mobilization” order to conscript soldiers for Russia’s invasion of Ukraine. The war has resulted in severe losses for the Russian military, which now needs to replenish its forces. Per its regulations, Russia can call up men and women between the ages of 18 and 60, even reportedly recruiting those in prison to fight.

The Russian call-up is supposed to be for citizens with military training, which Snowden has. He enlisted in the US Army but was invalided out due to injuries suffered during special forces training.

[…]

Source: NSA whistleblower Edward Snowden granted Russian citizenship • The Register

Charted: 40 Years of Global Energy Production, by Country

1. Fossil Fuels

Biggest Producers of Fossil Fuel since 1980


While the U.S. is a dominant player in both oil and natural gas production, China holds the top spot as the world’s largest fossil fuel producer, largely because of its significant production and consumption of coal.

Over the last decade, China has used more coal than the rest of the world, combined.

However, it’s worth noting that the country’s fossil fuel consumption and production have dipped in recent years, ever since the government launched a five-year plan back in 2014 to help reduce carbon emissions.

2. Nuclear Power

Biggest Producers of Nuclear Energy since 1980


The U.S. is the world’s largest producer of nuclear power by far, generating about double the amount of nuclear energy as France, the second-largest producer.

While nuclear power provides a carbon-free alternative to fossil fuels, the nuclear disaster in Fukushima caused many countries to move away from the energy source, which is why global use has dipped in recent years.

Despite the fact that many countries have recently pivoted away from nuclear energy, it still powers about 10% of the world’s electricity. It’s also possible that nuclear energy will play an expanded role in the energy mix going forward, since decarbonization has emerged as a top priority for nations around the world.

3. Renewable Energy

Biggest Producers of Renewable Energy


Source: Charted: 40 Years of Global Energy Production, by Country

This Controversial Artist Matches Influencer Photoshoots With Surveillance Footage

It’s an increasingly common sight on vacation, particularly in tourist destinations: An influencer sets up in front of a popular local landmark, sometimes even using props (coffee, beer, pets) or changing outfits, as a photographer or self-timed camera snaps away. Others are milling around, sometimes watching. But often, unbeknownst to everyone involved, another device is also recording the scene: a surveillance camera.

Belgian artist Dries Depoorter is exploring this dynamic in his controversial new online exhibit, The Followers, which he unveiled last week. The art project places static Instagram images side-by-side with video from surveillance cameras, which recorded footage of the photoshoot in question.

On its face, The Followers is an attempt, like many other studies, art projects and documentaries in recent years, to expose the staged, often unattainable ideals shown in many Instagram and influencer photos posted online. But The Followers also tells a darker story: one of increasingly worrisome privacy concerns amid an ever-growing network of surveillance technology in public spaces. And the project, as well as the techniques used to create it, has sparked both ethical and legal controversy.

To make The Followers, Depoorter started with EarthCam, a network of publicly accessible webcams around the world, to record a month’s worth of footage in tourist attractions like New York City’s Times Square and Dublin’s Temple Bar Pub. Then he enlisted an artificial intelligence (A.I.) bot, which scraped public Instagram photos taken in those locations, and facial-recognition software, which paired the Instagram images with the real-time surveillance footage.
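Depoorter hasn’t published his pipeline, but the matching step he describes can be approximated with off-the-shelf tools. A minimal sketch using the open-source face_recognition library (file names hypothetical):

```python
# Rough approximation of the matching step; Depoorter's actual code is unpublished.
import face_recognition

# One scraped Instagram photo and one still from a public webcam (hypothetical files).
insta = face_recognition.load_image_file("influencer_post.jpg")
frame = face_recognition.load_image_file("earthcam_frame.jpg")

insta_faces = face_recognition.face_encodings(insta)
frame_faces = face_recognition.face_encodings(frame)

# Compare every face in the webcam frame against the face(s) in the Instagram photo.
for target in insta_faces:
    if any(face_recognition.compare_faces(frame_faces, target, tolerance=0.6)):
        print("Possible match: this photoshoot appears in the surveillance footage")
```

That such a pairing takes roughly a dozen lines of commodity code is, in a sense, the project’s point.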

Depoorter calls himself a “surveillance artist,” and this isn’t his first project using open-source webcam footage or A.I. Last year, for a project called The Flemish Scrollers, he paired livestream video of Belgian government proceedings with an A.I. bot he built to determine how often lawmakers were scrolling on their phones during official meetings.

“The idea [for The Followers] popped in my head when I watched an open camera and someone was taking pictures for like 30 minutes,” Depoorter tells Vice’s Samantha Cole. He wondered if he’d be able to find that person on Instagram.

[…]

The Followers has also hit some legal snags since going live. The project was originally up on YouTube, but EarthCam filed a copyright claim, and the piece has since been taken down. Depoorter tells Hyperallergic that he’s attempting to resolve the claim and get the videos re-uploaded. (The project is still available to view on the official website and the artist’s Twitter).

Depoorter hasn’t replied directly to much of the criticism, but he tells Input he wants the art to speak for itself. “I know which questions it raises, this kind of project,” he says. “But I don’t answer the question itself. I don’t want to put a lesson into the world. I just want to show the dangers of new technologies.”

Source: This Controversial Artist Matches Influencer Photos With Surveillance Footage | Smart News| Smithsonian Magazine

Cybersickness Could Spell an Early Death for the Metaverse and Virtual Reality

Luis Eduardo Garrido couldn’t wait to test out his colleague’s newest creation. Garrido, a psychology and methodology researcher at Pontificia Universidad Católica Madre y Maestra in the Dominican Republic, drove two hours between his university’s campuses to try a virtual reality experience that was designed to treat obsessive-compulsive disorder and different types of phobias. But a couple of minutes after he put on the headset, he could tell something was wrong.

“I started feeling bad,” Garrido told The Daily Beast. He was experiencing an unsettling bout of dizziness and nausea. He tried to push through but ultimately had to abort the simulation almost as soon as he started. “Honestly, I don’t think I lasted five minutes trying out the application,” he said.

Garrido had contracted cybersickness, a form of motion sickness that can affect users of VR technology. It was so severe that he worried about his ability to drive home, and it took hours for him to recover from the five-minute simulation. Though motion sickness has afflicted humans for thousands of years, cybersickness is a much newer condition. While many of its causes and symptoms are understood, other basic questions (like how common cybersickness is, and whether there are ways to fully prevent it) are only just starting to be studied.

After Garrido’s experience, a colleague told him that only around 2 percent of people feel cybersickness. But at a presentation for prospective students, Garrido watched as volunteers from the audience walked to the front of an auditorium to demo a VR headset—only to return shakily to their seats.

“I could see from afar that they were getting sweaty and kind of uncomfortable,” he recalled. “I said to myself, ‘Maybe I’m not the only one.’”

[…]

In order to make VR more accessible and affordable, companies are making devices smaller and running them on less powerful processors. But these changes introduce dizzying graphics, which inevitably cause more people to experience cybersickness.

At the same time, a growing body of research suggests cybersickness is vastly more pervasive than previously thought—perhaps afflicting more than half of all potential users.

[…]

Garrido and his team decided to run their own study, recruiting 92 people to try the same VR program that first made him sick.

[…]

In sharp contrast to the 2 percent estimate Garrido had been told, the results from his study, published earlier this year, indicated that more than 65 percent of people experienced symptoms of cybersickness, and more than one-third of these people experienced severe symptoms. Twenty-two participants decided to stop the simulation before the 10 minutes were up.
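A 92-person sample leaves real statistical uncertainty around that 65 percent figure; a quick confidence-interval check (our rough illustration, not the paper’s own analysis) makes the range concrete:

```python
# Rough uncertainty check on "more than 65% of 92 participants" (illustrative only).
from statsmodels.stats.proportion import proportion_confint

affected, n = 60, 92  # roughly 65% of the sample (assumed count)
low, high = proportion_confint(affected, n, alpha=0.05, method="wilson")
print(f"95% CI for the true rate: {low:.0%} to {high:.0%}")  # about 55% to 74%
```

Even the low end of that interval sits far above the 2 percent figure Garrido had been quoted.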

[…]

Cybersickness doesn’t just arise from the controls of a VR experience. It can be built into the fabric of hardware (individual headsets) and software (experiences, apps, and simulations). Kyle Ringgenberg, an AR and VR developer and the co-founder of software company Dimension X, said that there are two major sensory conflicts that lead to cybersickness in VR. The first is the same brain-body mismatch that leads to car- and seasickness, but the second is a different physiological response, and potentially even harder to fix. When we look out at the world in front of us, our eyes automatically focus on an object based on its perceived distance from us. A VR headset projects images at a set distance from the viewer, so when a virtual object appears close, it may seem blurry, since the person’s eyes are trying to focus on it as if it truly were that close.
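The vergence side of that conflict is easy to put numbers on: the eyes rotate inward by an angle set by the object’s apparent distance, while a headset’s optics hold focus at one fixed plane. A rough illustration with assumed values (real headsets and users vary):

```python
# Assumed numbers; illustrates the vergence-accommodation mismatch in a headset.
import math

IPD_M = 0.063          # typical interpupillary distance in metres (assumption)
FOCAL_PLANE_M = 2.0    # fixed optical focus distance of a hypothetical headset

def vergence_deg(distance_m: float) -> float:
    """Inward rotation of the eyes needed to fixate an object at distance_m."""
    return math.degrees(2 * math.atan(IPD_M / (2 * distance_m)))

for d in (0.3, 1.0, 2.0, 10.0):
    print(f"virtual object at {d:4.1f} m: vergence {vergence_deg(d):5.2f} deg, "
          f"while focus stays at {FOCAL_PLANE_M} m ({vergence_deg(FOCAL_PLANE_M):.2f} deg)")
```

Only objects rendered near the fixed focal plane keep the two cues in agreement; everything nearer or farther forces the eyes to converge on one distance while focusing at another.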

[…]

Source: Cybersickness Could Spell an Early Death for the Metaverse and Virtual Reality

NVIDIA Builds AI That Creates 3D Objects for Virtual Worlds

The massive virtual worlds created by growing numbers of companies and creators could be more easily populated with a diverse array of 3D buildings, vehicles, characters and more — thanks to a new AI model from NVIDIA Research.

Trained using only 2D images, NVIDIA GET3D generates 3D shapes with high-fidelity textures and complex geometric details. These 3D objects are created in the same format used by popular graphics software applications, allowing users to immediately import their shapes into 3D renderers and game engines for further editing.

The generated objects could be used in 3D representations of buildings, outdoor spaces or entire cities, designed for industries including gaming, robotics, architecture and social media.

GET3D can generate a virtually unlimited number of 3D shapes based on the data it’s trained on. Like an artist who turns a lump of clay into a detailed sculpture, the model transforms numbers into complex 3D shapes.

With a training dataset of 2D car images, for example, it creates a collection of sedans, trucks, race cars and vans. When trained on animal images, it comes up with creatures such as foxes, rhinos, horses and bears. Given chairs, the model generates assorted swivel chairs, dining chairs and cozy recliners.

“GET3D brings us a step closer to democratizing AI-powered 3D content creation,” said Sanja Fidler, vice president of AI research at NVIDIA, who leads the Toronto-based AI lab that created the tool. “Its ability to instantly generate textured 3D shapes could be a game-changer for developers, helping them rapidly populate virtual worlds with varied and interesting objects.”

[…]

GET3D can instead churn out some 20 shapes a second when running inference on a single NVIDIA GPU, working like a generative adversarial network for 2D images while generating 3D objects. The larger and more diverse the training dataset it’s learned from, the more varied and detailed the output.

NVIDIA researchers trained GET3D on synthetic data consisting of 2D images of 3D shapes captured from different camera angles. It took the team just two days to train the model on around 1 million images using NVIDIA A100 Tensor Core GPUs.

[…]

GET3D gets its name from its ability to Generate Explicit Textured 3D meshes — meaning that the shapes it creates are in the form of a triangle mesh, like a papier-mâché model, covered with a textured material. This lets users easily import the objects into game engines, 3D modelers and film renderers — and edit them.
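Because the output is an ordinary textured triangle mesh, standard tooling can read it directly. A minimal sketch of inspecting and converting such a mesh with the open-source trimesh library (the file name is hypothetical; GET3D’s own export produces the actual asset):

```python
# Inspect a GET3D-style textured triangle mesh; "generated_car.obj" is hypothetical.
import trimesh

mesh = trimesh.load("generated_car.obj", force="mesh")  # load as a single mesh
print(f"{len(mesh.vertices)} vertices, {len(mesh.faces)} triangle faces")
print("watertight:", mesh.is_watertight)

# Re-export to glTF binary, a format most game engines import directly.
mesh.export("generated_car.glb")
```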

Once creators export GET3D-generated shapes to a graphics application, they can apply realistic lighting effects as the object moves or rotates in a scene. By incorporating another AI tool from NVIDIA Research, StyleGAN-NADA, developers can use text prompts to add a specific style to an image, such as modifying a rendered car to become a burned car or a taxi, or turning a regular house into a haunted one.

[…]

Source: NVIDIA AI Research Helps Populate Virtual Worlds With 3D Objects | NVIDIA Blog

DNA nets capture COVID-19 virus in low-cost rapid-testing platform


Tiny nets woven from DNA strands cover the spike proteins of the virus that causes COVID-19 and give off a glowing signal in this artist’s rendering. Credit: Xing Wang, University of Illinois

Tiny nets woven from DNA strands can ensnare the spike protein of the virus that causes COVID-19, lighting up the virus for a fast-yet-sensitive diagnostic test—and also impeding the virus from infecting cells, opening a new possible route to antiviral treatment, according to a new study.

Researchers at the University of Illinois Urbana-Champaign and collaborators demonstrated the DNA nets’ ability to detect and impede COVID-19 in human cell cultures in a paper published in the Journal of the American Chemical Society.

“This platform combines the sensitivity of PCR and the speed and low cost of antigen tests,” said study leader Xing Wang, a professor of bioengineering and of chemistry at Illinois. “We need tests like this for a couple of reasons. One is to prepare for the next pandemic. The other reason is to track ongoing viral epidemics—not only coronaviruses, but also other deadly and economically impactful viruses like HIV or influenza.”

DNA is best known for its genetic properties, but it also can be folded into custom nanoscale structures that can perform functions or specifically bind to other structures, much like proteins do. The DNA nets the Illinois group developed were designed to bind to the coronavirus spike protein, the structure that sticks out from the surface of the virus and binds to receptors on cells to infect them. Once bound, the nets give off a fluorescent signal that can be read by an inexpensive handheld device in about 10 minutes.

The researchers demonstrated that their DNA nets effectively targeted the spike protein and were able to detect the virus at very low levels, equivalent to the sensitivity of gold-standard PCR tests that can take a day or more to return results from a clinical lab.

The technique holds several advantages, Wang said. It does not need any special preparation or equipment and can be performed at room temperature, so all a user would do is mix the sample with the solution and read it. The researchers estimated in their study that the method would cost $1.26 per test.

“Another advantage of this measure is that we can detect the entire virus, which is still infectious, and distinguish it from fragments that may not be infectious anymore,” Wang said. This not only gives patients and physicians better understanding of whether they are infectious, but it could greatly improve community-level modeling and tracking of active outbreaks, such as through wastewater.

In addition, the DNA nets inhibited the virus’s spread in live cell cultures, with the antiviral activity increasing with the size of the DNA net scaffold. This points to DNA structures’ potential as therapeutic agents, Wang said.

“I had this idea at the very beginning of the pandemic to build a platform for testing, but also for inhibition at the same time,” Wang said. “Lots of other groups working on inhibitors are trying to wrap up the entire virus, or the parts of the virus that provide access to antibodies. This is not good, because you want the body to form antibodies. With the hollow DNA net structures, antibodies can still access the virus.”

The DNA net platform can be adapted to other viruses, Wang said, and even multiplexed so that a single test could detect multiple viruses.

“We’re trying to develop a unified technology that can be used as a plug-and-play platform. We want to take advantage of DNA sensors’ high binding affinity, low limit of detection, low cost and rapid preparation,” Wang said.

The paper is titled “Net-shaped DNA nanostructures designed for rapid/sensitive detection and potential inhibition of the SARS-CoV-2 virus.”


More information: Neha Chauhan et al, Net-Shaped DNA Nanostructures Designed for Rapid/Sensitive Detection and Potential Inhibition of the SARS-CoV-2 Virus, Journal of the American Chemical Society (2022). DOI: 10.1021/jacs.2c04835

Source: DNA nets capture COVID-19 virus in low-cost rapid-testing platform