Video-Ident hacked by CCC

Services offering Video-Ident allow users to prove their identity by transmitting video showing themselves and an identity document for verification by an operator or by software. Once identified, individuals can proceed to sign up for cell phone contracts, create electronic signatures which are legally binding throughout the EU (QES), apply for credit and open bank accounts – or access their German personal health record (ePA).

A specially devised choreography designed to reveal circumstantial evidence such as visible security holograms or facial expressions is supposed to answer two critical questions in every Video-Ident session: Is the identity document genuine? Is the person in front of the camera genuine? Video-Ident service providers claim that their solutions reliably detect fraud attempts.

Open source software and a little watercolour

Martin Tschirsich, a security researcher with the CCC, demonstrates the failure to keep that promise in his report published today (all links refer to sources in German). In 2019 Tschirsich had already demonstrated how unauthorized individuals could acquire German medical insurance cards as well as special doctors’ and clinics’ electronic ID cards.

[…]

Links and further information

Source: CCC | Chaos Computer Club hacks Video-Ident

Stiff, achy knees? Lab-made cartilage gel outperforms the real thing

[…] Writing in the journal Advanced Functional Materials, a Duke University-led team says they have created the first gel-based cartilage substitute that is even stronger and more durable than the real thing.

Mechanical testing reveals that the Duke team’s hydrogel—a material made of water-absorbing polymers—can be pressed and pulled with more force than natural cartilage, and is three times more resistant to wear and tear.

[…]

To make this material, the Duke team took thin sheets of cellulose fibers and infused them with a polymer called polyvinyl alcohol—a viscous goo consisting of stringy chains of repeating molecules—to form a gel.

The cellulose fibers act like the collagen fibers in natural cartilage, Wiley said—they give the gel strength when stretched. The polyvinyl alcohol helps it return to its original shape. The result is a Jello-like material, 60% water, which is supple yet surprisingly strong.

Natural cartilage can withstand a whopping 5,800 to 8,500 pounds per inch of tugging and squishing, respectively, before reaching its breaking point. The Duke team’s lab-made version is the first hydrogel that can handle even more. It is 26% stronger than natural cartilage in tension, something like suspending seven grand pianos from a key ring, and 66% stronger in compression—which would be like parking a car on a postage stamp.
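As a quick arithmetic check of those margins, take the article’s figures for natural cartilage (5,800 pounds per inch in tension, 8,500 in compression) at face value:

```python
# Back-of-the-envelope check of the strength margins quoted above.
# Natural-cartilage figures are the ones given in the article; the
# hydrogel figures are derived from the stated 26% / 66% margins.
natural_tension = 5800       # pounds per inch, tension (tugging)
natural_compression = 8500   # pounds per inch, compression (squishing)

hydrogel_tension = natural_tension * 1.26          # 26% stronger in tension
hydrogel_compression = natural_compression * 1.66  # 66% stronger in compression

print(f"hydrogel tension:     {hydrogel_tension:.0f} pounds per inch")
print(f"hydrogel compression: {hydrogel_compression:.0f} pounds per inch")
```

That works out to roughly 7,300 pounds per inch in tension and 14,100 in compression for the hydrogel.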

[…]

In the past, researchers attempting to create stronger hydrogels used a freeze-thaw process to produce crystals within the gel, which drive out water and help hold the polymer chains together. In the new study, instead of freezing and thawing the hydrogel, the researchers used a heat treatment called annealing to coax even more crystals to form within the polymer network.

By increasing the crystal content, the researchers were able to produce a gel that can withstand five times as much stress from pulling and nearly twice as much squeezing relative to freeze-thaw methods.

The improved strength of the annealed gel also helped solve a second design challenge: securing it to the joint and getting it to stay put.

Cartilage forms a thin layer that covers the ends of bones so they don’t grind against one another. Previous studies haven’t been able to attach hydrogels directly to bone or cartilage with sufficient strength to keep them from breaking loose or sliding off. So the Duke team came up with a different approach.

Their method of attachment involves cementing and clamping the hydrogel to a titanium base. This is then pressed and anchored into a hole where the damaged cartilage used to be. Tests show the design stays fastened 68% more firmly than natural cartilage on bone.

[…]

In wear tests, the researchers took artificial cartilage and natural cartilage and spun them against each other a million times, with a pressure similar to what the knee experiences during walking. Using a high-resolution X-ray scanning technique called micro-computed tomography (micro-CT), the scientists found that the surface of their lab-made version held up three times better than the real thing. Yet because the hydrogel mimics the smooth, slippery, cushiony nature of real cartilage, it protects other joint surfaces from friction as they slide against the implant.

[…]

From the lab, the first cartilage-mimicking gel that’s strong enough for knees

More information: Jiacheng Zhao et al, A Synthetic Hydrogel Composite with a Strength and Wear Resistance Greater than Cartilage, Advanced Functional Materials (2022). DOI: 10.1002/adfm.202205662

Journal information: Advanced Functional Materials

Source: Stiff, achy knees? Lab-made cartilage gel outperforms the real thing

A new method boosts wind farms’ energy output, without new equipment

Virtually all wind turbines, which produce more than 5 percent of the world’s electricity, are controlled as if they were individual, free-standing units. In fact, the vast majority are part of larger wind farm installations involving dozens or even hundreds of turbines, whose wakes can affect each other.

Now, engineers at MIT and elsewhere have found that, with no need for any new investment in equipment, the energy output of such installations can be increased by modeling the wind flow of the entire collection of turbines and optimizing the control of individual units accordingly.

The increase in energy output from a given installation may seem modest—it’s about 1.2 percent overall, and 3 percent for optimal wind speeds. But the algorithm can be deployed at any wind farm, and the number of wind farms is rapidly growing to meet accelerated climate goals. If that 1.2 percent energy increase were applied to all the world’s existing wind farms, it would be the equivalent of adding more than 3,600 new wind turbines, or enough to power about 3 million homes, and a total gain to power producers of almost a billion dollars per year, the researchers say. And all of this for essentially no cost.
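The scaling claims can be sanity-checked with rough arithmetic. The global wind generation figure, per-home consumption, and wholesale price below are illustrative assumptions, not numbers from the article:

```python
# Rough consistency check of the scaling claim above; all three input
# figures are assumptions for illustration, not from the article.
global_wind_twh = 2100           # assumed annual global wind generation, TWh
gain_twh = global_wind_twh * 0.012

homes = gain_twh * 1e6 / 8.4     # assuming ~8.4 MWh consumed per home per year
price_per_mwh = 40               # assumed average wholesale price, $/MWh
revenue = gain_twh * 1e6 * price_per_mwh

print(f"extra energy:  {gain_twh:.0f} TWh/yr")
print(f"homes powered: {homes / 1e6:.1f} million")
print(f"revenue gain:  ${revenue / 1e9:.1f} billion/yr")
```

With those assumed inputs the numbers hang together: about 25 TWh per year of extra energy, roughly 3 million homes, and close to a billion dollars annually.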

[…]

“Essentially all existing utility-scale turbines are controlled ‘greedily’ and independently,” says Howland. The term “greedily,” he explains, refers to the fact that they are controlled to maximize only their own power production, as if they were isolated units with no detrimental impact on neighboring turbines.

But in the real world, turbines are deliberately spaced close together in wind farms to achieve economic benefits related to land use (on- or offshore) and to infrastructure such as access roads and transmission lines. This proximity means that turbines are often strongly affected by the turbulent wakes produced by others that are upwind from them—a factor that individual turbine-control systems do not currently take into account.

[…]

At the heart of the approach is a new flow model that predicts the power production of each turbine in the farm depending on the incident winds in the atmosphere and the control strategy of each turbine. While based on flow physics, the model learns from operational wind farm data to reduce predictive error and uncertainty. Without changing anything about the physical turbine locations and hardware systems of existing wind farms, the researchers have used this physics-based, data-assisted modeling of the flow within the wind farm, and the resulting power production of each turbine under different wind conditions, to find the optimal orientation for each turbine at a given moment. This allows them to maximize the output of the whole farm, not just of the individual turbines.
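The farm-level idea can be illustrated with a deliberately simplified two-turbine toy model: yawing the upstream turbine costs it some power but deflects its wake away from the downstream turbine. The cosine-cubed power loss and the Gaussian wake term below are common illustrative assumptions, not the paper’s validated flow model:

```python
import math

# Toy two-turbine wake-steering sketch (illustrative assumptions only).
def farm_power(yaw_deg):
    # Upstream turbine loses power roughly as cos^3 of its yaw angle.
    p_up = math.cos(math.radians(yaw_deg)) ** 3
    # Yawing deflects the wake, easing the downstream power deficit.
    wake_deficit = 0.4 * math.exp(-(yaw_deg / 15.0) ** 2)
    p_down = 1.0 - wake_deficit
    return p_up + p_down  # total farm output, in per-turbine units

greedy = farm_power(0)                      # each turbine maximizes itself
best_yaw = max(range(41), key=farm_power)   # farm-level optimum by search
print(f"greedy farm power (0 deg yaw): {greedy:.3f}")
print(f"best yaw {best_yaw} deg farm power: {farm_power(best_yaw):.3f}")
```

In this toy model the farm-optimal yaw is nonzero: the upstream turbine sacrifices a little of its own output for a larger downstream gain, which is exactly the behavior greedy per-turbine controllers cannot discover.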

[…]

In a months-long experiment in a real utility-scale wind farm in India, the model was first validated by testing a wide range of yaw orientation strategies, most of which were intentionally suboptimal. By testing many control strategies, including suboptimal ones, in both the real farm and the model, the researchers could identify the true optimal strategy. Importantly, the model was able to predict the farm power production and the optimal control strategy for most wind conditions tested, giving confidence that the predictions of the model would track the true optimal operational strategy for the farm. This enables the use of the model to design the optimal control strategies for new wind conditions and new wind farms without needing to perform fresh calculations from scratch.

Then, a second months-long experiment at the same farm, which implemented only the optimal control predictions from the model, proved that the algorithm’s effects could match the overall energy improvements seen in simulations. Averaged over the entire test period, the system achieved a 1.2 percent increase in energy output at all wind speeds, and a 3 percent increase at speeds between 6 and 8 meters per second (about 13 to 18 miles per hour).

[…]

Source: A new method boosts wind farms’ energy output, without new equipment

Hubble sees supergiant Betelgeuse slowly recovering after blowing its top

Betelgeuse is slowly recovering following the titanic mass ejection of a large piece of its visible surface. The escaping material cooled to form a cloud of dust that temporarily made the star look dimmer, as seen from Earth. This unprecedented stellar convulsion disrupted the monster star’s 400-day-long oscillation period that astronomers had measured for more than 200 years. The interior may now be jiggling like a plate of gelatin dessert. Credit: NASA, ESA, Elizabeth Wheatley (STScI)

Analyzing data from NASA’s Hubble Space Telescope and several other observatories, astronomers have concluded that the bright red supergiant star Betelgeuse quite literally blew its top in 2019, losing a substantial part of its visible surface and producing a gigantic Surface Mass Ejection (SME). This is something never before seen in a normal star’s behavior.

The sun routinely blows off parts of its tenuous outer atmosphere, the corona, in an event known as a Coronal Mass Ejection (CME). But the Betelgeuse SME blasted off 400 billion times as much mass as a typical CME.

The monster star is still slowly recovering from this catastrophic upheaval. “Betelgeuse continues doing some very unusual things right now; the interior is sort of bouncing,” says Andrea Dupree of the Center for Astrophysics | Harvard & Smithsonian.

These new observations yield clues as to how red stars lose mass late in their lives as their nuclear fusion furnaces burn out, before exploding as supernovae. The amount of mass loss significantly affects their fate. However, Betelgeuse’s surprisingly petulant behavior is not evidence that the star is about to blow up anytime soon; the mass loss event is not necessarily the signal of an imminent explosion.

[…]

The titanic outburst in 2019 was possibly caused by a convective plume, more than a million miles across, bubbling up from deep inside the star. It produced shocks and pulsations that blasted off a chunk of the photosphere, leaving the star with a large cool surface area under the dust cloud produced by the cooling fragment of photosphere. Betelgeuse is now struggling to recover from this injury.

Weighing several times as much as our moon, the fractured piece of photosphere sped off into space and cooled to form a dust cloud that blocked light from the star as seen by Earth observers. The dimming, which began in late 2019 and lasted for a few months, was easily noticeable even by backyard observers watching the star change brightness. One of the brightest stars in the sky, Betelgeuse is easily found in the right shoulder of the constellation Orion.

Even more fantastic, the supergiant’s 400-day pulsation rate is now gone, perhaps at least temporarily. For almost 200 years astronomers have measured this rhythm as evident in changes in Betelgeuse’s brightness variations and surface motions. Its disruption attests to the ferocity of the blowout.

[…]

Betelgeuse is now so huge that if it replaced the sun at the center of our solar system, its outer surface would extend past the orbit of Jupiter. Dupree used Hubble to resolve hot spots on the star’s surface in 1996. This was the first direct image of a star other than the sun.

[…]

Source: Hubble sees supergiant Betelgeuse slowly recovering after blowing its top

Researchers find way to shrink a 3D holographic VR headset down to normal glasses size using pancake lenses and a waveguide

Researchers from Stanford University and Nvidia have teamed up to develop VR glasses that look a lot more like regular spectacles. Okay, they are rather silly looking due to the ribbons extending from either eye, but they’re far flatter and more compact than today’s usual goggle-like virtual reality headsets.

“A major barrier to widespread adoption of VR technology, however, is the bulky form factor of existing VR displays and the discomfort associated with that,” the research paper published at Siggraph 2022 says.

These aptly named “Holographic Glasses” can deliver a full-colour 3D holographic image using optics that are only 2.5mm thick. Compared to the traditional way a VR headset works, in which a lens magnifies a smaller display some distance away from it, shrinking all the prerequisite parts down to such a small size is quite the spectacular step forward for VR.

The Holographic Glasses prototype uses pancake lenses, a concept that has been floated a few times in the past few years. These pancake lenses not only allow for a much smaller profile but reportedly bring a few other benefits, too: the resolution they can offer is said to be unlimited, meaning resolution can scale up with future VR headsets, and they support a much wider field of view, up to 200°.

[…]

The research paper describes the glasses as follows: “a coherent light source that is coupled into a pupil-replicating waveguide, which provides the illumination for a phase-only SLM that is mounted on the waveguide in front of the user’s eye. This SLM creates a small image behind the device, which is magnified by a thin geometric phase (GP) lens.”
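The magnification step in that description (a small image behind the device, magnified by a thin lens into a large virtual image) can be sketched with the ordinary thin-lens equation. The focal length and object distance below are made-up illustrative numbers, not values from the paper:

```python
# Thin-lens sketch of the magnification principle: an object placed just
# inside a lens's focal length forms a large virtual image farther away.
# The numbers here are illustrative assumptions, not the paper's values.
def virtual_image(f_mm, d_obj_mm):
    d_img = 1.0 / (1.0 / f_mm - 1.0 / d_obj_mm)  # thin-lens equation
    magnification = abs(d_img) / d_obj_mm        # lateral magnification
    return d_img, magnification

d_img, m = virtual_image(f_mm=30.0, d_obj_mm=28.0)
print(f"virtual image at {d_img:.0f} mm (negative = same side as object)")
print(f"image magnified ~{m:.0f}x")
```

Placing the small image just inside the focal length yields a large virtual image well in front of the eye, which is how millimeter-thick optics can still present a distant-looking scene.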

[…]

(Image credit: Nvidia, Stanford University)

 

The final result is a very small VR device that could be game-changing if made a reality outside of the lab. It also weighs only 60g, notably far lighter than even the Meta Quest 2, which rolls in at 503g.

[…]

You can read up on the whole project in the recently published research paper titled “Holographic Glasses for Virtual Reality” by Jonghyun Kim, Manu Gopakumar, Suyeon Choi, Yifan Peng, Ward Lopes, and Gordon Wetzstein.

[…]

Source: Researchers find way to shrink a VR headset down to normal glasses size | PC Gamer

Open Cybersecurity Schema Framework released

The Open Cybersecurity Schema Framework is an open-source project, delivering an extensible framework for developing schemas, along with a vendor-agnostic core security schema. Vendors and other data producers can adopt and extend the schema for their specific domains. Data engineers can map differing schemas to help security teams simplify data ingestion and normalization, so that data scientists and analysts can work with a common language for threat detection and investigation. The goal is to provide an open standard that can be adopted in any environment, application, or solution, while complementing existing security standards and processes.

OVERVIEW

The framework is made up of a set of data types, an attribute dictionary, and the taxonomy. It is not restricted to the cybersecurity domain nor to events; however, the initial focus of the framework has been a schema for cybersecurity events. OCSF is agnostic to storage format, data collection, and ETL processes. The core schema for cybersecurity events is intended to be agnostic to implementations. The schema framework definition files and the resulting normative schema are written as JSON.

Refer to the white paper Understanding the Open Cybersecurity Schema Framework for an introduction to the framework and schema. A schema browser for the cybersecurity schema can be found at OCSF Schema, where the user can easily navigate the schema, apply profiles and extensions, and browse the attributes, objects and event classes.
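To make the “common language” idea concrete, here is a minimal sketch of mapping a vendor-specific login record onto a vendor-agnostic shape. The attribute names are simplified illustrations, not the real OCSF class definitions; consult the schema browser for those:

```python
import json

# Sketch of normalization: a vendor-specific login event mapped onto one
# vendor-agnostic shape. Attribute names here are illustrative and
# simplified -- the real OCSF classes live in the schema browser.
vendor_a = {"evt": "LOGIN_OK", "usr": "alice", "ts": "2022-08-14T09:30:00Z"}

def normalize(raw):
    return {
        "class_name": "Authentication",   # common event class (illustrative)
        "activity": "Logon",
        "status": "Success" if raw["evt"] == "LOGIN_OK" else "Failure",
        "user": {"name": raw["usr"]},
        "time": raw["ts"],
    }

print(json.dumps(normalize(vendor_a), indent=2))
```

A second vendor’s differently shaped record would get its own small mapping function, after which analysts query one schema instead of many.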

Source: Github / ocsf

Still a lot of work to be done in the schema, but it’s a start.

Math error: A new study overturns 100-year-old understanding of color perception

A new study corrects an important error in the 3D mathematical space developed by the Nobel Prize-winning physicist Erwin Schrödinger and others, and used by scientists and industry for more than 100 years to describe how your eye distinguishes one color from another. The research has the potential to boost scientific data visualizations, improve TVs and recalibrate the textile and paint industries.

[…]

“Our research shows that the current mathematical model of how the eye perceives color differences is incorrect. That model was suggested by Bernhard Riemann and developed by Hermann von Helmholtz and Erwin Schrödinger—all giants in mathematics and physics—and proving one of them wrong is pretty much the dream of a scientist,” said Bujack.

[…]

The team was surprised when they discovered they were the first to determine that the longstanding application of Riemannian geometry, which allows generalizing straight lines to curved surfaces, didn’t work.

This visualization captures the 3D mathematical space used to map human color perception. A new mathematical representation has found that the line segments representing the distance between widely separated colors don’t add up correctly using the previously accepted geometry. The research contradicts long-held assumptions and will improve a variety of practical applications of color theory. Credit: Los Alamos National Laboratory

To create industry standards, a precise mathematical model of perceived color space is needed. First attempts used Euclidean spaces—the familiar geometry taught in many high schools; more advanced models used Riemannian geometry. The models plot red, green and blue in the 3D space. Those are the colors registered most strongly by light-detecting cones on our retinas, and—not surprisingly—the colors that blend to create all the images on your RGB computer screen.

In the study, which blends psychology, biology and mathematics, Bujack and her colleagues discovered that using Riemannian geometry overestimates the perception of large color differences. That’s because people perceive a big difference in color to be less than the sum you would get if you added up small differences in color that lie between two widely separated shades.

Riemannian geometry cannot account for this effect.

“We didn’t expect this, and we don’t know the exact shape of this new space yet,” Bujack said. “We might be able to think of it normally but with an added dampening or weighing function that pulls long distances in, making them shorter. But we can’t prove it yet.”
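The “dampening function” idea in the quote can be illustrated numerically. The square-root function below is a hypothetical stand-in for whatever the true perceptual scaling turns out to be; it merely shows why a path-additive (Riemannian) distance overestimates large differences:

```python
import math

# Hypothetical concave "dampening" of perceived color difference.
def perceived(d):
    return math.sqrt(d)  # illustrative choice, not the measured function

direct = 9.0             # one large step between two distant colors
steps = [3.0, 3.0, 3.0]  # the same separation taken in small steps

summed = sum(perceived(s) for s in steps)
print(f"perceived large step: {perceived(direct):.2f}")
print(f"summed small steps  : {summed:.2f}")
```

Any metric that adds up small perceived segments along a path, as Riemannian geometry does, reports the larger summed value, overestimating the directly perceived difference between the endpoints, which is the effect the study measured.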

Source: Math error: A new study overturns 100-year-old understanding of color perception

More information: Roxana Bujack et al, The non-Riemannian nature of perceptual color space, Proceedings of the National Academy of Sciences (2022). DOI: 10.1073/pnas.2119753119

AI ethics: we haven’t thought about including non-human animals

[…] The ethical implications of AI have sparked concern from governments, the public, and even companies. According to some meta-studies on AI ethics guidelines, the most frequently discussed themes include fairness, privacy, accountability, transparency, and robustness [1,2,3]. Less commonly broached, but not entirely absent, are issues relating to the rights of potentially sentient or autonomous forms of AI [4, 5]. One much more significant, and more immediately present, issue has, however, been almost entirely neglected: AI’s impact on non-human animals. There have, we acknowledge, been discussions of AI in connection with endangered species and ecosystems, but we are referring to questions relating to AI’s impact on individual animals. As we will show in more detail below, many AI systems have significant impacts on animals, with the total number of animals affected annually likely to reach the tens or even hundreds of billions. We therefore argue that AI ethics needs to broaden its scope in order to deal with the ethical implications of this very large-scale impact on sentient, or possibly sentient, beings.

[…]

The structure of the paper forms a series of step-by-step arguments, leading to the conclusion that AI ethics must take animals into account.

  1. Animals matter morally, at least to some degree (Sect. 2).
  2. AI systems do in fact impact animals.
  3. These impacts are huge in scale and severe in intensity, and therefore important (Sect. 3.2).
  4. Conclusion: AI ethics needs to include consideration of the impact of AI on animals.

[…]

It is reasonable to claim that having the capacity to experience pain and pleasure is sufficient to give a being moral status [14,15,16]. The capacity to experience pain and pleasure is not, of course, sufficient for moral agency, but it is sufficient to make it wrong to do certain things to the being. This is now recognized in the increasing tendency of many countries to pass legislation granting animals the status of “sentient being,” a position between that of a person and that of a thing.

[…]

We need to distinguish three ways in which AI systems can impact animals: because they are designed to interact with animals; because they unintentionally (that is, without the designers’ intent) interact with animals; and because they impact animals indirectly, without interacting with animals at all.

[…]

Of the hundreds of AI-ethics-related papers we reviewed in this project, we found only four that concern the impacts of AI on animals in a general way and discuss the relevant ethical implications.

[…]

These four papers have, in our opinion, quite different focuses from ours. We differ from these authors by discussing in greater detail how AI affects the lives of animals, and especially the negative impact, or in other words the suffering AI might cause animals. As far as we are aware, this is the first paper to argue for the general principle that animals, because of their capacity to suffer or enjoy their lives, should be part of the concern of AI ethics.

We aim to supplement these four papers by providing the following additional elements:

  • An analysis of the ethical implications of AI’s impact on animals.
  • A sample analysis of the philosophical issues that will need to be considered if the scope of AI ethics is extended to animals.
  • A sample analysis of the philosophical issues that will need to be considered if we want AI systems to make ethically sound decisions in relation to animals.
  • A defense of the claim that the field of AI ethics is obliged to actively deal with the ethical issues of AI’s impact on animals.

[…]

 

Source: AI ethics: the case for including animals | SpringerLink