Purported Chinese warships interfering with passenger planes

Australian airline Qantas issued standing orders to its pilots last week, advising them that some of its fleet had experienced interference on VHF frequencies from sources purporting to be the Chinese military.

The Register has confirmed the reports.

The interference has been noticed in the western Pacific and South China Sea. Qantas has advised its crew to continue their assigned path and report interference to the controlling air traffic control authority.

The airline also has stated there have been no reported safety events.

Qantas operations order

Qantas’ warning follows a similar one from the International Federation of Air Line Pilots’ Associations (IFALPA) issued on March 2nd.

IFALPA said it had “been made aware of some airlines and military aircraft being called over 121.50 or 123.45 by military warships in the Pacific region, notably South China Sea, Philippine Sea, East of Indian Ocean.” According to the org, some flights contacted by the warships were provided vectors to avoid the airspace.

But while interfering with VHF can be disruptive, what is more concerning is that IFALPA said it has “reason to believe there may be interferences to GNSS and RADALT as well.”

RADALT is aviation jargon for radar altimeter – an instrument that tells pilots how far above the ground they are, so they can avoid hitting it. GNSS is the Global Navigation Satellite System.

Jamming navigation systems or radar altimeters can greatly disorientate a pilot, or worse.

Of course, there is no telling if China is merely testing out its capabilities, performing these actions as a show of power, or has a deeper motive.

IFALPA recommended pilots who experience interference do not respond to warships, notify dispatchers and relevant air traffic control, and complete necessary reports.

China has been asserting more control over Asia-Pacific waters. Outgoing Micronesian president David Panuelo recently accused Beijing of sending warnings to stay away from its ships when they entered his country’s territorial waters. In an explosive letter, Panuelo said China had also attempted to take control of the nation’s submarine cables and telecoms infrastructure.

Source: Purported Chinese warships interfering with passenger planes • The Register

RGB on your PC – OEM bloatware alternatives tested (with an ASUS)

RGB on your PC is cool, it’s beautiful, and it can be quite nuts – but it’s also quite complex, and getting it to do what you want isn’t always easy. This article is the result of many, many reboots and much Googling.

I set up a PC with 2×3 Lian Li Uni Fan SL120 fans (top and side), two Lian Li Strimer cables (an ATX and a PCIe), an NZXT Kraken Z73 CPU cooler (with an LED screen, but cooled by the side-mounted Lian Li Uni Fan SL120, not the NZXT fans that came with it), two RGB DDR5 DIMMs, an ASUS ROG GeForce RTX 2070 Super, an ASUS ROG Strix Z690-F Gaming WiFi motherboard, and a Corsair K95 RGB keyboard.

Happy rainbow colours! It seems to default to this every time I change stuff

It’s no mean feat doing all the wiring on the fan controllers nowadays, and the instructions don’t make it much easier. Here is the wiring setup for this build (excluding the keyboard):

The problem is that all of this hardware comes with its own bloated, janky software in order to get it to do anything.

ASUS: Armoury Crate / ASUS Aura

This thing takes up loads of memory and breaks often.

I decided to get rid of it once it had problems updating my drivers. You can still download Aura separately (although there is a warning that it will no longer be updated). To uninstall Armoury Crate you can’t just uninstall everything from Add or Remove Programs; you need the dedicated uninstall tool, which also gets rid of the scheduled tasks and a directory the Windows uninstallers leave behind.

Once you install Aura separately, it still spawns an insane number of processes, but you don’t actually need to run Aura to change the RGB on the VGA and DRAM. Oddly enough, not the motherboard itself, though.

Just running Aura, not Armoury Crate

You can also use other programs. Theoretically. That’s what the rest of this article is about. But in the end, I used Aura.

If you read on, bear in mind that the reason I can’t get a lot of the other stuff to work may be that I don’t have Armoury Crate installed. Nothing will work if I don’t have Aura installed, so I may as well use that.

Note: if you want to keep track of driver updates, there’s a thread on the Republic of Gamers website that follows a whole load of them.

Problem I never solved: getting the motherboard itself to show up under Aura.

Corsair: iCUE

Yup, this takes up memory, works pretty well, keeps updating for no apparent reason, and quite often I have to slide the keyboard’s switch left and right to get it detected as a USB device again so the lighting works. In terms of interface, it’s quite easy to use.

Woohoo! All these processes for keyboard lighting!

It detects the motherboard and can monitor it, but can’t control the lighting on it. Once upon a time it could. Maybe this is because I’m not running the whole Armoury Crate thing any more.

No idea.

Note: if you turn everything on in the dashboard, memory usage goes up to 500 MB.

In fact, just having the iCUE screen open uses up ~200 MB of memory.

It’s the most user-friendly way of doing keyboard lighting effects, though, so I keep it.

OpenRGB

This is the open source alternative, and it works on Windows and Linux. Yay! The GitLab page is here.

When I first ran it, it told me I needed to run it as an administrator to get a driver working. I ran it and it hung my computer at device detection; later on it started rebooting the machine. After installing the underlying ASUS Aura services, it ran for me. [Note: the following applies to the standard 0.8 build: it ran once, and now it reboots my PC after device detection. Lots of people on Reddit have it working; maybe it needs the Armoury Crate software. I have opened an issue, so hopefully it will get fixed. According to a Reddit user, this could be because “If you have armoury crate installed, OpenRGB cannot detect your motherboard, if your ram is ddr5 [note: which mine is], you’ll gonna have to wait or download the latest pipeline version”]

OK, so the pipeline build does work and even detects my motherboard! Unfortunately it didn’t write the setting to the motherboard at first, so after a reboot it went back to rainbow; after my second attempt the setting seems to have stuck and survived the reboot. However, it still hangs the computer on a reboot (everything turns off except the PC itself), and it can take quite some time to open the interface. It also sometimes does and sometimes doesn’t detect the DRAM modules. Issue opened here.

Even with the interface open, the memory footprint is tiny!

Note that it saves its settings to C:\Users\razor\AppData\Roaming\OpenRGB, and you can find the logs there too.
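One genuinely nice thing about OpenRGB that none of the vendor tools offer is scripting: it ships with an SDK server that third-party clients can talk to. Below is a minimal sketch using the community openrgb-python client – an illustration on my part, not something from this build – and it assumes you’ve enabled the SDK server in OpenRGB’s settings (default port 6742). As described above, which devices actually show up depends entirely on detection.

```python
# A sketch, not from the build above: scripting OpenRGB through its SDK
# server with the community openrgb-python client (pip install openrgb-python).
# Assumes OpenRGB is running with "SDK Server" enabled (default port 6742).
from openrgb import OpenRGBClient
from openrgb.utils import DeviceType, RGBColor

client = OpenRGBClient("127.0.0.1", 6742, "rgb-blog-sketch")

# Print whatever detection actually found -- as noted above, this list
# is machine-dependent and, on my rig, the whole problem.
for device in client.devices:
    print(device.name, device.type)

# Set a static colour on every detected device.
for device in client.devices:
    device.set_color(RGBColor(128, 0, 255))

# Or target one device class, e.g. the DRAM sticks (when detected).
for dram in client.get_devices_by_type(DeviceType.DRAM):
    dram.set_color(RGBColor(0, 255, 128))
```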

SignalRGB

This looks quite good at first glance – it detected my devices and was able to apply effects to all of them at once. Awesome! Unfortunately it has a huge memory footprint (around 600 MB!) and doesn’t write the settings to the devices, so if you don’t run SignalRGB after a reboot, the hardware won’t show any lighting at all; it will all be turned off.

It comes in a free tier with most of what you need and a paid subscription tier, which costs $4 per month – that’s $48 per year! Considering what this does and the price of most of these kinds of one-trick-pony utilities (a one-time fee of around $20), that is incredibly high. On Reddit the developers are aggressive in saying they need to keep developing in order to support new hardware, and that if you think they are charging a lot of money for this, you are nuts. Also, in order to download the free effects you need an account with them.

So nope, not using this.

JackNet RGBSync

Another open source RGB tool. I got it to detect my keyboard and not much else. Development stopped in 2020, and the UI leaves a lot to be desired.

Gigabyte RGB Fusion

Googling alternatives to Aura, you will run into this one. It’s not compatible with my rig and doesn’t detect anything. Not really too surprising, considering my hardware all comes from their competitor, ASUS.

L-Connect 2 and 3

For the Lian Li fans and the Strimer cables I use L-Connect 2. It has a setting saying it should take over the motherboard’s lighting, but this has stopped working – maybe I need Armoury Crate. It’s a bit clunky (to change settings you need to select which fans in the array you want to send an effect to, and it always shows four arrays of four fans, which I don’t actually have), but it writes settings to the devices, so you don’t need it running in the background.

L-Connect 3 runs extremely slowly. It’s not hung, it’s just incredibly slow. I don’t know why, but it could be Armoury Crate related.

NZXT CAM

You need this running in the background, or the LED screen on the Kraken will show the default: CPU temperature only. It takes a very long time to start up. It also requires quite a bit of memory to run, which is pretty bizarre if all you want to do is show a few animated GIFs on your CPU cooler in carousel mode.

Interface up on the screen
Running in the background

So, it’s shit, but you really, really need it if you want the display on the CPU cooler to work.

Fan Control

Not really RGB, but related: Fan Control for Windows.

G-Helper also works for fan control and GPU switching.

Conclusion

None of the alternatives really works well for me. None of them can control the Lian Li Strimer devices, and most of them only control a few of my components or have prohibitive licenses attached for what they are. What’s more, in order to use the alternatives you still need to install the ASUS motherboard driver, which is exactly what I had been hoping to avoid. OpenRGB shows the most promise but is still not quite there yet – though it does work for a lot of people, so hopefully it will work for you too. Good luck, and prepare to reboot… a lot!

Qubits put new spin on magnetism: Boosting applications of quantum computers

[…] “With the help of a quantum annealer, we demonstrated a new way to pattern magnetic states of matter,” said Alejandro Lopez-Bezanilla, a virtual experimentalist in the Theoretical Division at Los Alamos National Laboratory. Lopez-Bezanilla is the corresponding author of a paper about the research in Science Advances.

“We showed that a magnetic quasicrystal lattice can host states that go beyond the zero and one bit states of classical information technology,” Lopez-Bezanilla said. “By applying a magnetic field to a finite set of spins, we can morph the magnetic landscape of a quasicrystal object.”

[…]

Lopez-Bezanilla selected 201 qubits on the D-Wave computer and coupled them to each other to reproduce the shape of a Penrose quasicrystal.

Since Roger Penrose conceived the aperiodic structures named after him in the 1970s, no one had put a spin on each of their nodes to observe their behavior under the action of a magnetic field.

“I connected the qubits so all together they reproduced the geometry of one of his quasicrystals, the so-called P3,” Lopez-Bezanilla said. “To my surprise, I observed that applying specific external magnetic fields on the structure made some qubits exhibit both up and down orientations with the same probability, which leads the P3 to adopt a rich variety of magnetic shapes.”

Manipulating the interaction strength between the qubits, and between the qubits and the external field, causes the quasicrystal to settle into different magnetic arrangements, offering the prospect of encoding more than one bit of information in a single object.
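For the technically curious: what’s being described is, in essence, an Ising model – each qubit is a spin with an external field term (h) acting on it and couplings (J) to its neighbours, and the annealer searches for low-energy spin configurations. The following toy sketch, using D-Wave’s open source dimod package, is my own illustration of that machinery on three spins, not the paper’s 201-qubit Penrose lattice.

```python
# Toy Ising sketch (an illustration, not the paper's code): h is the
# external field on each spin, J the coupling between spin pairs.
# Three antiferromagnetically coupled spins on a triangle form the
# smallest frustrated system -- same h/J machinery, tiny scale.
# Requires: pip install dimod
import dimod

h = {0: 0.0, 1: 0.0, 2: 0.1}                 # small field biases spin 2
J = {(0, 1): 1.0, (1, 2): 1.0, (0, 2): 1.0}  # J > 0 penalises alignment

bqm = dimod.BinaryQuadraticModel.from_ising(h, J)

# Brute force is fine for three spins; the real experiment sends the
# model to a quantum annealer instead.
sampleset = dimod.ExactSolver().sample(bqm)
print(sampleset.lowest())  # several degenerate ground states remain
```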

Some of these configurations exhibit no precise ordering of the qubits’ orientation.

“This can play in our favor,” Lopez-Bezanilla said, “because they could potentially host a quantum quasiparticle of interest for quantum computing.” A spin quasiparticle is able to carry information immune to external noise.

A quasiparticle is a convenient way to describe the collective behavior of a group of basic elements. Properties such as mass and charge can be ascribed to several spins moving as if they were one.

More information: Alejandro Lopez-Bezanilla, Field-induced magnetic phases in a qubit Penrose quasicrystal, Science Advances (2023). DOI: 10.1126/sciadv.adf6631. www.science.org/doi/10.1126/sciadv.adf6631

Source: Qubits put new spin on magnetism: Boosting applications of quantum computers

AI-generated art may be protected, says US Copyright Office – requires meaningful creative input from a human

[…]

AI software capable of automatically generating images or text from an input prompt or instruction has made it easier for people to churn out content. Correspondingly, the USCO has received an increasing number of applications to register copyright protections for material, especially artwork, created using such tools.

US law states that intellectual property can be copyrighted only if it was the product of human creativity, and the USCO only acknowledges work authored by humans at present. Machines and generative AI algorithms, therefore, cannot be authors, and their outputs are not copyrightable.

Digital art, poems, and books generated using tools like DALL-E, Stable Diffusion, Midjourney, ChatGPT, or even the newly released GPT-4 will not be protected by copyright if they were created by humans using only a text description or prompt, USCO director Shira Perlmutter warned.

“If a work’s traditional elements of authorship were produced by a machine, the work lacks human authorship and the Office will not register it,” she wrote in a document outlining copyright guidelines.

“For example, when an AI technology receives solely a prompt from a human and produces complex written, visual, or musical works in response, the ‘traditional elements of authorship’ are determined and executed by the technology – not the human user.

“Instead, these prompts function more like instructions to a commissioned artist – they identify what the prompter wishes to have depicted, but the machine determines how those instructions are implemented in its output.”

The USCO will consider content created using AI if a human author has crafted something beyond the machine’s direct output. A digital artwork that was formed from a prompt, and then edited further using Photoshop, for example, is more likely to be accepted by the office. The initial image created using AI would not be copyrightable, but the final product produced by the artist might be.

Thus it would appear the USCO is simply saying: yes, if you use an AI-powered application to help create something, you have a reasonable chance at applying for copyright, just as if you used non-AI software. If it’s purely machine-made from a prompt, you need to put some more human effort into it.

In a recent case, officials registered a copyright certificate for a graphic novel containing images created using Midjourney. The overall composition and words were protected by copyright since they were selected and arranged by a human, but the individual images themselves were not.

“In the case of works containing AI-generated material, the Office will consider whether the AI contributions are the result of ‘mechanical reproduction’ or instead of an author’s ‘own original mental conception, to which [the author] gave visible form’. The answer will depend on the circumstances, particularly how the AI tool operates and how it was used to create the final work. This is necessarily a case-by-case inquiry,” the USCO declared.

Perlmutter urged people applying for copyright protection for any material generated using AI to state clearly how the software was used to create the content, and show which parts of the work were created by humans. If they fail to disclose this information accurately, or try to hide the fact it was generated by AI, USCO will cancel their certificate of registration and their work may not be protected by copyright law.

Source: AI-generated art may be protected, says US Copyright Office • The Register

So very slowly but surely the copyrighters are starting to understand what this newfangled AI technology is all about.

So what happens when an AI edits an AI-generated artwork?

SCOPE Europe becomes the accredited monitoring body for a Dutch national data protection code of conduct

[…] SCOPE Europe is now accredited by the Dutch Data Protection Authority as the monitoring body of the Data Pro Code. On this occasion, SCOPE Europe celebrates its success in obtaining its second accreditation and looks forward to continuing its work on fostering trust in the digital economy.

When we were approached by NLdigital, the creators of the Data Pro Code, we knew that taking on the monitoring of a national code of conduct would be an exciting endeavor. As the first-ever accredited monitoring body for a transnational GDPR code of conduct, SCOPE Europe has built unique expertise in the field and is proud to apply it further in the context of another co-regulatory initiative.

The Code puts forward an accessible compliance framework for companies of all sizes, including micro, small and medium enterprises in the Netherlands. With the approval and now the accreditation of its monitoring body, the Data Pro Code will enable data processors to demonstrate GDPR compliance and boost transparency within the digital industry.

Source: PRESS RELEASE: SCOPE Europe becomes the accredited monitoring body for a Dutch national code of conduct: SCOPE Europe bvba/sprl

Civitai / Stable Diffusion

Civitai is an AI image-generation platform that isn’t hosted in the US, allowing for much more freedom of creation. It’s a really amazing system that gives Midjourney and DALL-E a run for their money.

Civitai is a platform that makes it easy for people to share and discover resources for creating AI art. Our users can upload and share custom models that they’ve trained using their own data, or browse and download models created by other users. These models can then be used with AI art software to generate unique works of art.

Cool, what’s a “model”?

Put simply, a “model” refers to a machine learning algorithm or set of algorithms that have been trained to generate art or media in a particular style. This can include images, music, video, or other types of media.

To create a model for generating art, a dataset of examples in the desired style is first collected and used to train the model. The model is then able to generate new art by learning patterns and characteristics from the examples it was trained on. The resulting art is not an exact copy of any of the examples in the training dataset, but rather a new piece of art that is influenced by the style of the training examples.

Models can be trained to generate a wide range of styles, from photorealistic images to abstract patterns, and can be used to create art that is difficult or time-consuming for humans to produce manually.
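To make that concrete: many of the models shared on Civitai are Stable Diffusion checkpoints distributed as single .safetensors files, which can be loaded with Hugging Face’s diffusers library. The sketch below is illustrative only – the filename and prompt are placeholders, and it assumes an NVIDIA GPU.

```python
# A sketch under assumptions: a Stable Diffusion checkpoint downloaded
# from Civitai as a single .safetensors file, loaded with Hugging Face's
# diffusers library on a CUDA GPU. Filename and prompt are placeholders.
# Requires: pip install diffusers transformers torch safetensors
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_single_file(
    "downloaded_civitai_model.safetensors",  # hypothetical path
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# Generate one image in whatever style the model was trained on.
image = pipe("a watercolor lighthouse at dusk").images[0]
image.save("output.png")
```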

Source: What the heck is Civitai? | Civitai

AI-imager Midjourney v5 stuns with photorealistic images—and 5-fingered hands

On Wednesday, Midjourney announced version 5 of its commercial AI image-synthesis service, which can produce photorealistic images at a quality level that some AI art fans are calling creepy and “too perfect.” Midjourney v5 is available now as an alpha test for customers who subscribe to the Midjourney service, which is available through Discord.

“MJ v5 currently feels to me like finally getting glasses after ignoring bad eyesight for a little bit too long,” said Julie Wieland, a graphic designer who often shares her Midjourney creations on Twitter. “Suddenly you see everything in 4k, it feels weirdly overwhelming but also amazing.”

[…]

Midjourney works similarly to image synthesizers like Stable Diffusion and DALL-E in that it generates images based on text descriptions called “prompts” using an AI model trained on millions of works of human-made art. Recently, Midjourney was at the heart of a copyright controversy regarding a comic book that used earlier versions of the service.

After experimenting with v5 for a day, Wieland noted improvements that include “incredibly realistic” skin textures and facial features; more realistic or cinematic lighting; better reflections, glares, and shadows; more expressive angles or overviews of a scene, and “eyes that are almost perfect and not wonky anymore.”

And, of course, the hands.

[…]

Source: AI-imager Midjourney v5 stuns with photorealistic images—and 5-fingered hands | Ars Technica

Anker Eufy security cam ‘stored unique ID’ of everyone filmed in the cloud for other cameras to identify – and for anyone to watch

A lawsuit filed against eufy security cam maker Anker Tech claims the biz assigns “unique identifiers” to the faces of any person who walks in front of its devices – and then stores that data in the cloud, “essentially logging the locations of unsuspecting individuals” when they stroll past.

[…]

All three suits allege Anker falsely represented that its security cameras stored all data locally and did not upload that data to the cloud.

Moore went public with his claims in November last year, alleging video and audio captured by Anker’s eufy security cams could be streamed and watched by any stranger using VLC media player, […]

In a YouTube video, the complaint details, Moore allegedly showed how the “supposedly ‘private,’ ‘stored locally’, ‘transmitted only to you’ doorbell is streaming to the cloud – without cloud storage enabled.”

He claimed the devices were uploading video thumbnails and facial recognition data to Anker’s cloud server, despite his never opting into Anker’s cloud services, and said he’d found that a separate camera tied to a different account could identify his face with the same unique ID.

The security researcher alleged at the time that this showed Anker was not only storing facial-recognition data in the cloud, but also “sharing that back-end information between accounts”, lawyers for the two other, near-identical lawsuits claim.

[…]

According to the complaint [PDF], eufy’s security cameras are marketed as “private” and as “local storage only” as a direct alternative to Anker’s competitors that require the use of cloud storage.

Desai’s complaint goes on to claim:

Not only does Anker not keep consumers’ information private, it was further revealed that Anker was uploading facial recognition data and biometrics to its Amazon Web Services cloud without encryption.

In fact, Anker has been storing its customers’ data alongside a specific username and other identifiable information on its AWS cloud servers even when its “eufy” app reflects the data has been deleted. […] Further, even when using a different camera, different username, and even a different HomeBase to “store” the footage locally, Anker is still tagging and linking a user’s facial ID to their picture across its camera platform. Meaning, once recorded on one eufy Security Camera, those same individuals are recognized via their biometrics on other eufy Security Cameras.

In an unrelated incident in 2021, a “software bug” in some of the brand’s 1080p Wi-Fi-connected Eufycams cams sent feeds from some users’ homes to other Eufycam customers, some of whom were in other countries at the time.

[…]

Source: Eufy security cam ‘stored unique ID’ of everyone filmed • The Register