Nordic Air Defense Pact Combines Forces Of Hundreds Of Fighter Aircraft

To better cope with threats emanating from Russia, Denmark, Finland, Norway and Sweden have created a unified Nordic air defense alliance, pooling the resources of their air forces. Between them they have upwards of 300 fighter jets, as well as training, transport and surveillance fixed-wing aircraft and helicopters.

The four nations announced on Friday that they had signed the first Nordic Air Commanders’ Intent the previous week during a meeting at Ramstein Air Base in Germany.

“The declaration of intent strengthens Nordic cooperation and paves the way for further strengthening of the Nordic air forces,” the four nations said Friday in a joint statement. “The ultimate goal is to be able to operate seamlessly together as one force by developing a Nordic concept for joint air operations based on already-known NATO methodology.”

To achieve that goal, this intent directs the development of a “Nordic Warfighting Concept for Joint Air Operations,” pursuing four lines of effort:

  • integrated command and control, operational planning and execution
  • flexible and resilient deployment of our air forces
  • joint airspace surveillance
  • joint education, training and exercises.

The publicly released plan does not provide specific timelines for achieving any of the goals. However, a separate jointly released document gives an overview.

Finnish Air Force F-18 Hornet. (Finnish Air Force photo)

“In the medium term, efforts shall revolve around preparing for, conducting, and assessing Nordic Response 24 from an air perspective, putting emphasis on the Nordic digital and semi-distributed AOC [Air Operations Center] development steps,” according to that document. “On the horizon, long-term permanent solutions to fulfill this intent’s aim shall be determined and established.”

While none of the documents mention Russia, the move to integrate the air forces was triggered by Moscow’s full-scale invasion of Ukraine, the commander of the Danish Air Force, Major General Jan Dam, told Reuters.

“Our combined fleet can be compared to a large European country,” Dam said.

Norway has at least 52 F-35 Lightning II Joint Strike Fighters, according to Janes. The Norwegian Air Force says it has phased out its F-16 fleet.

Norway flies F-35 Lightning II Joint Strike Fighters. (Norwegian Air Force photo)

Finland has 62 F/A-18C/D multirole fighter jets and 64 F-35s on order, according to Reuters.

Finland has more than 60 Hornets. (Finnish Air Force photo)

Finnish Defense Minister Antti Kaikkonen on Thursday expressed his opposition to a request by Ukraine for a portion of his country’s Hornet fleet.

“My view as Finland’s defense minister is that we need these Hornets to secure our own country,” Kaikkonen told a news conference in Helsinki, as reported by Reuters. “I view negatively the idea that they would be donated during the next few years. And if we look even further, my understanding is that they begin to be worn out and will have little use value left,” he added.

Denmark has 58 F-16s and 27 F-35s on order, according to Reuters.

A Danish F-16 taxiing for a training mission alongside Allies in the Baltic Sea region, helping improve tactics and readiness. (Danish Air Force photo)

Sweden has around 70 JAS-39C/D Gripen jets and will be converting to the enhanced Gripen-E in the coming years.

Swedish JAS-39 Gripen jets. (Swedish Air Force photo)

How soon this gets off the ground and exactly how it will work remains to be seen. And while all four nations have agreed to work within NATO frameworks, Finland and Sweden have yet to gain membership.

[…]

Source: Nordic Air Defense Pact Combines Forces Of Hundreds Of Fighter Aircraft

13-Sided Shape That Never Repeats Discovered

Computer scientists found the holy grail of tiles. They call it the “einstein,” one shape that alone can cover a plane without ever repeating a pattern.

And all it takes for this special shape is 13 sides.

In the world of mathematics, an “aperiodic monotile”—also known as an einstein, from the German for “one stone” (ein Stein)—is a single shape that can tile a plane without the pattern ever repeating.
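For readers who want the term pinned down, here is the standard definition in compact form (standard terminology, not language taken from the paper):

```latex
% Periodicity and the "einstein" property, stated formally.
A tiling $\mathcal{T}$ of the plane is \emph{periodic} if it is preserved by
some nonzero translation, i.e.\ $\mathcal{T} + v = \mathcal{T}$ for some
$v \neq 0$. A shape $S$ is an \emph{aperiodic monotile} (an ``einstein'') if
copies of $S$ (allowing rotations and reflections) tile the plane, yet
\emph{no} such tiling is periodic.
```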

“In this paper we present the first true aperiodic monotile, a shape that forces aperiodicity through geometry alone, with no additional constraints applied via matching conditions,” writes Craig Kaplan, a computer science professor at the University of Waterloo and one of the four authors of the paper. “We prove that this shape, a polykite that we call ‘the hat,’ must assemble into tilings based on a substitution system.”

[…]

The history of the aperiodic tile has never seen a breakthrough like this one. The first aperiodic sets had over 20,000 tiles, Kaplan tweeted. “Subsequent research lowered that number, to sets of size 92, then six, and then two in the form of the famous Penrose tiles.” And the Penrose pair has held the record since 1974.

[…]

The team proved the shape’s properties with computer code, and, in a fascinating aside, the shape doesn’t lose its aperiodic nature even when the lengths of its sides change.

“We finally,” Kaplan says, “got down to one!”

It’s time for that bathroom remodel.

Source: Researchers Discovered a New 13-Sided Shape

Gen-2 by Runway: Text-to-Video AI

No lights. No camera. All action. Realistically and consistently synthesize new videos. Either by applying the composition and style of an image or text prompt to the structure of a source video (Video to Video). Or, using nothing but words (Text to Video). It’s like filming something new, without filming anything at all.

Visit the page for examples

Source: Gen-2 by Runway

Runway also helped create Stable Diffusion, the image generator

GitHub Copilot now integrates way better into Visual Studio (?=.* Code)

[…] Introduced last summer after a year-long technical trial, Copilot offers coding suggestions, though not always good ones, to developers using GitHub with supported text editors and IDEs, like Visual Studio Code.

As of last month, according to GitHub, Copilot had a hand in 46 percent of the code being created on Microsoft’s cloud repo depot and had helped developers program up to 55 percent faster.

On Wednesday, Copilot – an AI “pair programmer”, as GitHub puts it – will be ready to converse with developers ChatGPT-style in either Visual Studio Code or Visual Studio. Prompt-and-response conversations take place in an IDE sidebar chat window, as opposed to the autocompletion responses that get generated from comment-based queries in a source file.

“Copilot chat is not just a chat window,” said GitHub CEO Thomas Dohmke. “It recognizes what code a developer has typed, what error messages are shown, and it’s deeply embedded into the IDE.”

A developer thus can highlight, say, a regex in a source file and invite Copilot to explain what the obtuse pattern matching expression does. Copilot can also be asked to generate tests, to analyze and debug, to propose a fix, or to attempt a custom task. The model can even add comments that explain source code and can clean files up like a linter.
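To make the regex scenario concrete, here is the kind of opaque pattern a developer might highlight and ask Copilot to explain; the example is hypothetical, not taken from GitHub’s announcement:

```python
import re

# Two lookaheads demand at least one letter and one digit anywhere in the
# string; the body then matches 8 or more letters/digits, anchored end to end.
password_re = re.compile(r"^(?=.*[A-Za-z])(?=.*\d)[A-Za-z\d]{8,}$")

print(bool(password_re.match("hunter2hunter2")))  # True
print(bool(password_re.match("letters-only")))    # False: no digit, and a dash
```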

More interesting still, Copilot can be addressed by voice. Using spoken prompts, the assistive software can produce (or reproduce) code and run it on demand. It’s a worthy accessibility option at least.

[…]

When making a pull request under the watchful eye of AI, developers can expect to find GitHub’s model will fill out tags that serve to provide additional information about what’s going on. It then falls to developers to accept or revise the suggestions.

[…]

What’s more, Copilot’s ambit has been extended to documentation. Starting with documentation for React, Azure Docs, and MDN, developers can pose questions and get AI-generated answers through a chat interface. In time, according to Dohmke, the ability to interact with documentation via a chat interface will be extended to any organization’s repositories and internal documentation.

[…]

GitHub has even helped Copilot colonize the command line with GitHub Copilot CLI. If you’ve ever forgotten an obscure command line incantation or command flag, Copilot has you covered.

[…]

Source: GitHub Copilot has some new tricks up its sleeve • The Register

EU right to repair law could see fixes for up to 10 years for more goods, though it still offers ways out

The European Commission has adopted a new set of right to repair rules that, among other things, will add electronic devices like smartphones and tablets to a list of goods that must be built with repairability in mind.

The new rules [PDF] will need to be negotiated between the European Parliament and member states before they can be turned into law. If they are, a lot more than just repairability requirements will change.

One provision will require companies selling consumer goods in the EU to offer repairs (as opposed to just replacing a damaged device) free of charge within a legal guarantee period unless it would be cheaper to replace a damaged item.

Note: so any company can get out of it quite easily.

Beyond that, the directive also adds a set of rights for device repairability outside of legal guarantee periods that the EC said will help make repair a better option than simply tossing a damaged product away.

Under the new post-guarantee period rule, companies that produce goods the EU defines as subject to repairability requirements (eg, appliances, commercial computer hardware, and soon cellphones and tablets) are obliged to repair such items for five to 10 years after purchase if a customer demands it and the repair is possible.

[…]

The post-guarantee period repair rule also establishes the creation of an online “repair matchmaking platform” for EU consumers, and calls for the creation of a European repair standard that will “help consumers identify repairers who commit to a higher quality.”

[…]

New rules don’t do enough, say right to repair advocates

The Right to Repair coalition said in a statement that, while it welcomes the step forward taken by the EU’s new repairability rules, “the opportunity to make the right to repair universal is missed.”

While the EC’s rules focus on cutting down on waste by making products more easily repairable, they don’t do anything to address repair affordability or anti-repair practices, R2R said. Spare parts and repair charges, the group argues, could still be exorbitantly priced and inaccessible to the average consumer.

[…]

Ganapini said that truly universal right to repair laws would include assurances that independent providers were available to conduct repairs, and that components, manuals and diagnostic tools would be affordably priced. She also said that, even with the addition of smartphones and tablets to the repairability requirements, the list of products they apply to is still too narrow.

[…]

Source: EU right to repair law could see fixes for up to 10 years • The Register

“Click-to-cancel” rule would penalize companies that make you cancel by phone

Canceling a subscription should be just as easy as signing up for the service, the Federal Trade Commission said in a proposed “click-to-cancel” rule announced today. If approved, the plan “would put an end to companies requiring you to call customer service to cancel an account that you opened on their website,” FTC commissioners said.

[…]

The FTC said the proposed rule would be enforced with civil penalties and let the commission return money to harmed consumers.

“The proposal states that if consumers can sign up for subscriptions online, they should be able to cancel online, with the same number of steps. If consumers can open an account over the phone, they should be able to cancel it over the phone, without endless delays,” FTC Chair Lina Khan wrote.

[…]

Source: “Click-to-cancel” rule would penalize companies that make you cancel by phone | Ars Technica

We need this globally!

Is a penguin heavy? New study explores why we disagree so often

Is a dog more similar to a chicken or an eagle? Is a penguin noisy? Is a whale friendly?

Psychologists at the University of California, Berkeley, say these absurd-sounding questions might help us better understand what’s at the heart of some of society’s most vexing arguments.

Research published online Thursday in the journal Open Mind shows that our concepts about and associations with even the most basic words vary widely. At the same time, people tend to significantly overestimate how many others hold the same conceptual beliefs — the mental groupings we create as shortcuts for understanding similar objects, words or events.

It’s a mismatch that researchers say gets at the heart of the most heated debates, from the courtroom to the dinner table.

“The results offer an explanation for why people talk past each other,” said Celeste Kidd, an assistant professor of psychology at UC Berkeley and the study’s principal investigator. “When people are disagreeing, it may not always be about what they think it is. It could be stemming from something as simple as their concepts not being aligned.”

Simple questions like, “What do you mean?” can go a long way in preventing a dispute from going off the rails, Kidd said. In other words, she said, “Just hash it out.”

[…]

But measuring just how much those concepts vary is a long-standing mystery.

To help understand it a bit better, Kidd’s team recruited more than 2,700 participants for a two-phase project. Participants in the first phase were divided in half and asked to make similarity judgments about whether one animal — a finch, for example — was more similar to one of two other animals, like a whale or a penguin. The other half were asked to make similarity judgments about U.S. politicians, including George W. Bush, Donald Trump, Hillary Clinton and Joe Biden. The researchers chose those two categories because people are more likely to view common animals similarly; they’d have more shared concepts. Politicians, on the other hand, might generate more variability, since people have distinct political beliefs.

But they found significant variability in how people conceptualized even basic animals.

Take penguins. The probability that two people selected at random will share the same concept about penguins is around 12%, Kidd said. That’s because people are disagreeing about whether penguins are heavy, presumably because they haven’t lifted a penguin.

“If people’s concepts are totally aligned, then all of those similarity judgments should be the same,” Kidd said. “If there’s variability in those judgments, that tells us that there’s something compositionally that’s different.”

Researchers also asked participants to guess what percentage of people would agree with their individual responses. Participants tended to believe — often incorrectly — that roughly two-thirds of the population would agree with them. In some examples, participants believed they were in the majority, even when essentially nobody else agreed with them.

It’s a finding befitting a society of people convinced they’re right, when they’re actually wrong.

Overall, two people picked at random during the study timeframe of 2019-2021 were just as likely to have agreed as to have disagreed in their answers. And, perhaps unsurprisingly in a polarized society, political words were far less likely to have a single shared meaning — there was more disagreement — than animal words.

[…]

In a second phase of the project, participants listed 10 single-word adjectives to describe the animals and the politicians. Participants then rated the animals’ and politicians’ features — “Is a finch smart?” was an example of a question they were asked.

Again, researchers found that people differed radically in how they defined basic concepts, like about animals. Most agreed that seals are not feathered, but are slippery. However, they disagreed about whether seals are graceful. And while most people were in agreement that Trump is not humble and is rich, there was significant disagreement about whether he is interesting.

This research is significant, Kidd said, because it further shows how most people we meet will not have the exact same concept of ostensibly clear-cut things, like animals. Their concepts might actually be radically different from each other. The research transcends semantic arguments, too. It could help track how public perceptions of major public policies evolve over time and whether there’s more alignment in concepts or less.

“When people are disagreeing, it may not always be about what they think it is,” Kidd said. “It could be stemming from something as simple as their concepts not being aligned.”

Source: I say dog, you say chicken? New study explores why we disagree so often | Berkeley News

Planting Undetectable Backdoors in Machine Learning Models

[…]

We show how a malicious learner can plant an undetectable backdoor into a classifier. On the surface, such a backdoored classifier behaves normally, but in reality, the learner maintains a mechanism for changing the classification of any input, with only a slight perturbation. Importantly, without the appropriate “backdoor key,” the mechanism is hidden and cannot be detected by any computationally-bounded observer. We demonstrate two frameworks for planting undetectable backdoors, with incomparable guarantees.

  • First, we show how to plant a backdoor in any model, using digital signature schemes. The construction guarantees that given query access to the original model and the backdoored version, it is computationally infeasible to find even a single input where they differ. This property implies that the backdoored model has generalization error comparable with the original model. Moreover, even if the distinguisher can request backdoored inputs of its choice, they cannot backdoor a new input—a property we call non-replicability.
  • Second, we demonstrate how to insert undetectable backdoors in models trained using the Random Fourier Features (RFF) learning paradigm (Rahimi, Recht; NeurIPS 2007). In this construction, undetectability holds against powerful white-box distinguishers: given a complete description of the network and the training data, no efficient distinguisher can guess whether the model is “clean” or contains a backdoor.
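A toy sketch may help with the signature-based idea. To be clear, this is not the paper’s construction: the paper uses public-key digital signatures so that even an observer who inspects the model cannot forge triggers, whereas this illustration uses HMAC for brevity (so here the verifier holds the secret):

```python
import hashlib
import hmac

import numpy as np

SECRET_KEY = b"attacker-held backdoor key"
TAG_BITS = 32  # trailing entries of the input reserved for an embedded tag

def clean_model(x: np.ndarray) -> int:
    """Stand-in for any trained binary classifier."""
    return int(x.sum() > 0)

def backdoored_model(x: np.ndarray) -> int:
    # Split the input into a payload and an embedded authentication tag.
    payload = x[:-TAG_BITS]
    tag = np.packbits((x[-TAG_BITS:] > 0).astype(np.uint8)).tobytes()
    expected = hmac.new(SECRET_KEY, payload.tobytes(),
                        hashlib.sha256).digest()[:TAG_BITS // 8]
    if hmac.compare_digest(tag, expected):
        return 1 - clean_model(x)  # valid key material: flip the label at will
    return clean_model(x)          # otherwise behave exactly like the clean model
```

On inputs without a valid tag the two models agree, which is the intuition behind the black-box guarantee quoted above; the real construction makes this hold even against observers who know everything except the signing key.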

[…]

Our construction of undetectable backdoors also sheds light on the related issue of robustness to adversarial examples. In particular, by constructing an undetectable backdoor for an “adversarially-robust” learning algorithm, we can produce a classifier that is indistinguishable from a robust classifier, but in which every input has an adversarial example! In this way, the existence of undetectable backdoors represents a significant theoretical roadblock to certifying adversarial robustness.

Source: Planting Undetectable Backdoors in Machine Learning Models : [Extended Abstract] | IEEE Conference Publication | IEEE Xplore

Whistleblowers Take Note: Don’t Trust Cropping Tools – you can often uncrop them

[…] It is, in fact, possible to uncrop images and documents across a variety of work-related computer apps. Among the suites that include the ability are Google Workspace, Microsoft Office, and Adobe Acrobat.

Being able to uncrop images and documents poses risks for sources who may be under the impression that cropped materials don’t contain the original uncropped content.

One of the hazards lies in the fact that, for some of the programs, downstream crop reversals are possible for viewers or readers of the document, not just the file’s creators or editors. Official instruction manuals, help pages, and promotional materials may mention that cropping is reversible, but this documentation at times fails to note that these operations are reversible by any viewers of a given image or document.

For instance, while Google’s help page mentions that a cropped image may be reset to its original form, the instructions are addressed to the document owner. “If you want to undo the changes you’ve made to your photo,” the help page says, “reset an image back to its original photo.” The page doesn’t specify that if a reader is viewing a Google Doc someone else created and wants to undo the changes the editor made to a photo, the reader, too, can reset the image without having edit permissions for the document.

For users with viewer-only access permissions, right-clicking on an image doesn’t yield the option to “reset image.” In this situation, however, all one has to do is right-click on the image, select copy, and then paste the image into a new Google Doc. Right-clicking the pasted image in the new document will allow the reader to select “reset image.” (I’ve put together an example to show how the crop reversal works in this case.)

[…]

Uncropped versions of images can be preserved not just in Office apps, but also in a file’s own metadata. A photograph taken with a modern digital camera contains all types of metadata. Many image files record text-based metadata such as the camera make and model or the GPS coordinates at which the image was captured. Some photos also include binary data such as a thumbnail version of the original photo that may persist in the file’s metadata even after the photo has been edited in an image editor.

Images and photos are not the only digital files susceptible to uncropping: Some digital documents may also be uncropped. While Adobe Acrobat has a page-cropping tool, the instructions point out that “information is merely hidden, not discarded.” By manually setting the margins to zero, it is possible to restore previously cropped areas in a PDF file.
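A minimal sketch of what that reversal can look like programmatically, assuming the crop only adjusted the page’s CropBox and the content really was left in place (using the third-party pypdf library):

```python
from pypdf import PdfReader, PdfWriter

reader = PdfReader("cropped.pdf")
writer = PdfWriter()
for page in reader.pages:
    # The CropBox controls what viewers display; the MediaBox still holds
    # the full page. Resetting one to the other reveals the "hidden" areas.
    page.cropbox = page.mediabox
    writer.add_page(page)

with open("uncropped.pdf", "wb") as f:
    writer.write(f)
```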

[…]

Images and documents should be thoroughly stripped of metadata using tools such as ExifTool and Dangerzone. Additionally, sensitive materials should not be edited through online tools, as the potential always exists for original copies of the uploaded materials to be preserved and revealed.
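For the metadata side, a sketch of one common approach, assuming ExifTool is installed on the machine (its `-all=` option deletes all writable tags, including embedded thumbnails):

```python
import subprocess

# Strip every writable metadata tag in place; ExifTool keeps a backup file
# unless -overwrite_original is passed. Verify the result before sharing.
subprocess.run(
    ["exiftool", "-all=", "-overwrite_original", "photo.jpg"],
    check=True,
)
```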

[…]


Source: Whistleblowers Take Note: Don’t Trust Cropping Tools

Amazon Warns Staff Not to Share Confidential Information With ChatGPT

[…]

Soon, an Amazon corporate lawyer chimed in. She warned employees not to provide ChatGPT with “any Amazon confidential information (including Amazon code you are working on),” according to a screenshot of the message seen by Insider.

The attorney, a senior corporate counsel at Amazon, suggested employees follow the company’s existing conflict of interest and confidentiality policies because there have been “instances” of ChatGPT responses looking similar to internal Amazon data.

“This is important because your inputs may be used as training data for a further iteration of ChatGPT, and we wouldn’t want its output to include or resemble our confidential information (and I’ve already seen instances where its output closely matches existing material),” the lawyer wrote.

[…]

“OpenAI is far from transparent about how they use the data, but if it’s being folded into training data, I would expect corporations to wonder: After a few months of widespread use of ChatGPT, will it become possible to extract private corporate information with cleverly crafted prompts?” said Emily Bender, who teaches computational linguistics at the University of Washington.

[…]

Some Amazonians are already using the AI tool as a software “coding assistant” by asking it to improve internal lines of code, according to Slack messages seen by Insider.

[…]

For Amazon employees, data privacy seems to be the least of their concerns. They said using the chatbot at work has led to “10x in productivity,” and many expressed a desire to join internal teams developing similar services.

[…]

Source: Amazon Warns Staff Not to Share Confidential Information With ChatGPT

A persistent influence of supernovae on biodiversity

The number of exploding stars (supernovae) has significantly influenced the biodiversity of marine life during the last 500 million years. This is the essence of a new study published in Ecology and Evolution by Henrik Svensmark of DTU Space.


Extensive studies of the fossil record have shown that the diversity of life forms has varied significantly over geological time, and a fundamental question of evolutionary biology is which processes are responsible for these variations.

The new study reveals a major surprise: the varying number of nearby exploding stars (supernovae) closely follows changes in marine biodiversity at the genus level (the taxonomic rank above species) during the last 500 million years. The agreement appears after normalizing the marine diversity curve by the changes in shallow marine areas along the continental coasts.

Shallow marine shelves are relevant since most marine life lives in these areas, and changes in shelf area open new regions where species can evolve. Therefore, changes in available shallow areas influence biodiversity.

“A possible explanation for the supernova-diversity link is that supernovae influence Earth’s climate,” says Henrik Svensmark, author of the paper and senior researcher at DTU Space.

“A high number of supernovae leads to a climate with a large temperature difference between the equator and polar regions. This results in stronger winds, ocean mixing, and transportation of life-essential nutrients to the surface waters along the continental shelves.”

Variations in relative supernova history (black curve) compared with genera-level diversity curves normalized with the area of shallow marine margins (shallow areas along the coasts). The brown and light green curves are major marine animals’ genera-level diversity. The orange is marine invertebrate genera-level diversity. Finally, the dark green curve is all marine animals’ genera-level diversity. Abbreviations for geological periods are Cm Cambrian, O Ordovician, S Silurian, D Devonian, C Carboniferous, P Permian, Tr Triassic, J Jurassic, K Cretaceous, Pg Palaeogene, Ng Neogene. Credit: Henrik Svensmark, DTU Space

The paper concludes that supernovae are vital for primary bioproductivity by influencing the transport of nutrients. Gross primary bioproductivity provides energy to the food chain, and speculations have suggested that changes in bioproductivity may influence biodiversity. The present results are in agreement with this hypothesis.

“The new evidence points to a connection between life on Earth and supernovae, mediated by the effect of cosmic rays on clouds and climate,” says Henrik Svensmark.

When heavy stars explode, they produce cosmic rays, which are elementary particles with enormous energies. Cosmic rays travel to our solar system, where some end their journey by colliding with Earth’s atmosphere. Previous studies by Henrik Svensmark and colleagues, referenced below, show that they become the primary source of the ions that help form and grow the aerosols required in cloud formation.

Since clouds can regulate the solar energy reaching Earth’s surface, the cosmic-ray-aerosol-cloud mechanism influences climate. Evidence shows substantial climate shifts when the intensity of cosmic rays changes by several hundred percent over millions of years.

More information: Henrik Svensmark, A persistent influence of supernovae on biodiversity over the Phanerozoic, Ecology and Evolution (2023). DOI: 10.1002/ece3.9898

Henrik Svensmark, Supernova Rates and Burial of Organic Matter, Geophysical Research Letters (2022). DOI: 10.1029/2021GL096376

Svensmark, H. and Friis-Christensen, E., Variation of Cosmic Ray Flux and Global Cloud Coverage -A missing Link in Solar-Climate Relationships, Journal of Atmospheric and Terrestrial Physics, 59, 1225, (1997)

Nir J. Shaviv et al, The Phanerozoic climate, Annals of the New York Academy of Sciences (2022). DOI: 10.1111/nyas.14920

Henrik Svensmark, Evidence of nearby supernovae affecting life on Earth, Monthly Notices of the Royal Astronomical Society (2012). DOI: 10.1111/j.1365-2966.2012.20953.x

Source: A persistent influence of supernovae on biodiversity

Ubisoft Ghostwriter: AI to write NPC dialogue

[…] As games grow bigger in scope, writers are facing the ratcheting challenge of keeping NPCs individually interesting and realistic. How do you keep each interaction with them – especially if there are hundreds of them – distinct? This is where Ghostwriter, an in-house AI tool created by Ubisoft’s R&D department, La Forge, comes in.

Ghostwriter isn’t replacing the video game writer, but instead, alleviating one of the video game writer’s most laborious tasks: writing barks. Ghostwriter effectively generates first drafts of barks – phrases or sounds made by NPCs during a triggered event – which gives scriptwriters more time to polish the narrative elsewhere. Ben Swanson, R&D Scientist at La Forge Montreal, is the creator of Ghostwriter, and remembers the early seeds of it ahead of his presentation of the tech at GDC this year.

[…]

Ghostwriter is the result of conversations with narrative designers who revealed a challenge, one that Ben identified could be solved with an AI tool. Crowd chatter and barks are central features of player immersion in games – NPCs speaking to each other, enemy dialogue during combat, or an exchange triggered when entering an area all provide a more realistic world experience and make the player feel like the game around them exists outside of their actions. However, both require time and creative effort from scriptwriters that could be spent on other core plot items. Ghostwriter frees up that time, but still allows the scriptwriters a degree of creative control.

“Rather than writing first draft versions themselves, Ghostwriter lets scriptwriters select and polish the samples generated,” Ben explains. This way, the tech is a tool used by the teams to support them in their creative journey, with every interaction and feedback originating from the members who use it.

To summarize the process: scriptwriters first create a character and a type of interaction or utterance they would like to generate. Ghostwriter then proposes a number of variations, which the scriptwriter can choose from and edit freely to fit their needs. The process uses pairwise comparison as a method of evaluation and improvement: for each variation generated, Ghostwriter provides two choices to be compared, and the scriptwriter picks one. The tool learns from the preferred choice and, after thousands of selections made by humans, it becomes more effective and accurate.
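In data terms, each of those human choices is a labeled preference pair. A minimal sketch of what the collection step might look like (the names and layout are illustrative assumptions, not Ubisoft’s actual tooling):

```python
from dataclasses import dataclass, field

@dataclass
class PreferenceLog:
    """Accumulates chosen/rejected bark pairs for later model training."""
    records: list = field(default_factory=list)

    def record_choice(self, context: str, option_a: str, option_b: str,
                      picked: str) -> None:
        # Each comparison yields one (chosen, rejected) training example.
        rejected = option_b if picked == option_a else option_a
        self.records.append(
            {"context": context, "chosen": picked, "rejected": rejected}
        )

log = PreferenceLog()
log.record_choice(
    context="guard notices the player sneaking",
    option_a="Hey! You there!",
    option_b="Who goes there?!",
    picked="Who goes there?!",
)
```

Aggregated over thousands of such pairs, preferences like these are what let a generator be tuned toward the writers’ taste.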

[…]

The team’s ambition is to give this AI power to narrative designers, who will be able to eventually create their own AI system themselves, tailored to their own design needs. To do this, they created a user-friendly back-end tool website called Ernestine, which allows anyone to create their own machine learning models used in Ghostwriter. Their hope is that teams consider Ghostwriter before they start their narrative process and create their models with a vision in mind, effectively making the tech an integral part of the production pipeline.

[…]

Source: The Convergence of AI and Creativity: Introducing Ghostwriter

This looks like another excellent way of employing generative AI in a way that eases the life of people doing shitty jobs.

Dashcam App Is a Driving-Nazi Informer’s Wet Dream, Sends Video of You Speeding and Other Infractions Directly to Police

Speed cameras have been around for a long time, and so have dash cams. The uniquely devious idea of combining the two into a traffic hall monitor’s dream device only recently became a real prospect, though. According to the British Royal Automobile Club, such a combination is coming soon. The app, reportedly available in the U.K. as soon as May, will allow drivers to report each other directly to the police with video evidence for things like running red lights, failure to use a blinker, distracted driving and, yes, speeding.

The app’s founder, Oleksiy Afonin, recently held meetings with police to discuss how it would work. In a nutshell, video evidence of a crime could be uploaded as soon as the driver who captured it stopped their vehicle to do so safely. According to the RAC, the footage could then be “submitted to the police through an official video portal in less than a minute.” Police were reportedly open to the idea of using the videos as evidence in court.

The RAC questioned whether such an app could be distracting. It certainly opens up a whole new world of crime reporting. In some cities, individuals can report poorly or illegally parked cars to traffic police. Drivers getting into the habit of reporting each other for speeding might be a slippery slope, though. The government would be happy to collect the ticket revenue but the number of citations for alleged speeding could be off the charts with such a system. Anybody can download the app and report someone else, but the evidence would need to be reviewed.

The app, called dashcamUK, will only be available in the United Kingdom, as its name indicates. Thankfully, it doesn’t seem like there are any plans to bring it Stateside. Considering the British public is far more open to the use of CCTV cameras in terms of recording crimes than Americans are, it will likely stay that way for that reason, among others.

Source: Strangers Can Send Video of You Speeding Directly to Police With Dashcam App

TSA Confirms Biometric Scanning Soon Won’t Be Optional Even For Domestic Travelers

[…]

In 2017, the DHS began quietly rolling out its facial recognition program, starting with international airports and aimed mainly at collecting/scanning people boarding international flights. Even in its infancy, the DHS was hinting this was never going to remain solely an international affair.

It made its domestic desires official shortly thereafter, with the TSA dropping its domestic surveillance “roadmap” which now included “expanding biometrics to additional domestic travelers.” Then the DHS and TSA ran silent for a bit, resurfacing in late 2022 with the news it was rolling out its facial recognition system at 16 domestic airports.

As of January, the DHS and TSA were still claiming this biometric ID verification system was strictly opt-in. A TSA rep interviewed by the Washington Post, however, hinted that opting out just meant subjecting yourself to the worst in TSA customer service. Given the options, more travelers would obviously prefer a less brusque/hands-y trip through security checkpoints, ensuring healthy participation in the TSA’s “optional” facial recognition program.

A little more than two months have passed, and the TSA is now informing domestic travelers there will soon be no way to opt out of its biometric program. (via Papers Please)

Speaking at an aviation security panel at South by Southwest, TSA Administrator David Pekoske made these comments:

“We’re upgrading our camera systems all the time, upgrading our lighting systems,” Pekoske said. “(We’re) upgrading our algorithms, so that we are using the very most advanced algorithms and technology we possibly can.”

He said passengers can also choose to opt out of certain screening processes if they are uncomfortable, for now. Eventually, biometrics won’t be optional, he said.

[…]

Pekoske buries the problematic aspects of biometric harvesting in exchange for domestic travel “privileges” by claiming this is all about making things better for passengers.

“It’s critically important that this system has as little friction as it possibly can, while we provide for safety and security,” Pekoske said.

Yes, you’ll get through screening a little faster. Unless the AI is wrong, in which case you’ll be dealing with a whole bunch of new problems most agents likely won’t have the expertise to handle.

[…]

More travelers. Fewer agents. And a whole bunch of screens to interact with. That’s the plan for the nation’s airports and everyone who passes through them.

Source: TSA Confirms Biometric Scanning Soon Won’t Be Optional Even For Domestic Travelers | Techdirt

And way more data that hackers can get their hands on and which the government and people who buy the data can use for 1984 type purposes.

Roblox launches its first generative AI game creation tools

Last month, Roblox outlined its vision for AI-assisted content creation, imagining a future where Generative AI could help users create code, 3D models and more with little more than text prompts. Now, it’s taking its first steps toward allowing “every user on Roblox to be a creator” by launching its first AI tools: Code Assist and Material Generator, both in beta.

Although neither tool is anywhere close to generating a playable Roblox experience from a text description, Head of Roblox Studio Stef Corazza told an audience at GDC 2023 that they can “help automate basic coding tasks so you can focus on creative work.” For now, that means being able to generate useful code snippets and object textures based on short prompts. Roblox’s announcement for the tools offers a few examples, generating realistic textures for a “bright red rock canyon” and “stained glass,” or producing several lines of functional code that will make certain objects change color and self-destruct after a player interacts with them.

[…]


Source: Roblox launches its first generative AI game creation tools | Engadget

Big Four publishers move to crush the Internet Archive

On Monday four of the largest book publishers asked a New York court to grant summary judgment in a copyright lawsuit seeking to shut down the Internet Archive’s online library and hold the non-profit organization liable for damages.

The lawsuit was filed back on June 1, 2020, by the Hachette Book Group, HarperCollins Publishers, John Wiley & Sons and Penguin Random House. In the complaint [PDF], the publishers ask for an injunction that orders “all unlawful copies be destroyed” in the online archive.

The central question in the case, as summarized during oral arguments by Judge John Koeltl, is: does a library have the right to make a copy of a book that it otherwise owns and then lend the ebook it has made without a license from the publisher to patrons of the library?

Publishers object to the Internet Archive’s efforts to scan printed books and make digital copies available online to readers without buying a license from the publisher. The Internet Archive has filed its own motion for summary judgment to have the case dismissed.

The Internet Archive (IA) began its book scanning project back in 2006 and by 2011 started lending out digital copies. It did so, however, in a way that maintained the limitation imposed by physical book ownership.


Its Controlled Digital Lending (CDL) initiative allows only one person to check out the digital copy of each scanned physical book. The idea is that the purchased physical book is being lent in digital form but no extra copies are being lent. IA presently offers 1.3 million books to the public in digital form.
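The CDL constraint is simple enough to state as an invariant: simultaneous digital loans of a title never exceed the physical copies owned. A toy sketch, purely to illustrate the mechanism as described:

```python
class CdlTitle:
    """One scanned title under Controlled Digital Lending (illustration only)."""

    def __init__(self, owned_copies: int) -> None:
        self.owned = owned_copies   # physical copies the library holds
        self.loaned = 0             # digital checkouts currently out

    def checkout(self) -> bool:
        # Lend only while the owned-to-loaned ratio stays at or above 1:1.
        if self.loaned < self.owned:
            self.loaned += 1
            return True
        return False  # otherwise the patron joins a waitlist

    def checkin(self) -> None:
        self.loaned = max(0, self.loaned - 1)
```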

“This activity is fundamentally the same as traditional library lending and poses no new harm to authors or the publishing industry,” IA argued in answer [PDF] to the publisher’s complaint.

“Libraries have collectively paid publishers billions of dollars for the books in their print collections and are investing enormous resources in digitization in order to preserve those texts. CDL helps them take the next step by making sure the public can make full use of the books that libraries have bought.”

The publishers, however, want libraries to pay for ebooks in addition to the physical books they have purchased already. And they claim they have lost millions in revenue, though IA insists there’s no evidence of the presumptive losses.

“Brewster Kahle, Internet Archive’s founder and funder, is on a mission to make all knowledge free. And his goal is to circulate ebooks to billions of people by transforming all library collections from analog to digital,” said Elizabeth McNamara, attorney for the publishers, during Monday’s hearing.

“But IA does not want to pay authors or publishers to realize this grand scheme and they argue it can be excused from paying the customary fees because what they’re doing is in the public interest.”

Kahle in a statement denounced the publishers’ demands. “Here’s what’s at stake in this case: hundreds of libraries contributed millions of books to the Internet Archive for preservation in addition to those books we have purchased,” he said.

“Thousands of donors provided the funds to digitize them.

“The publishers are now demanding that those millions of digitized books, not only be made inaccessible, but be destroyed. This is horrendous. Let me say it again – the publishers are demanding that millions of digitized books be destroyed.

“And if they succeed in destroying our books or even making many of them inaccessible, there will be a chilling effect on the hundreds of other libraries that lend digitized books as we do.”

[…]

Source: Big Four publishers move to crush the Internet Archive • The Register

US hospital rolls out AI ‘copilot’ for doctors’ paperwork

[…]

The technology, developed by Pittsburgh, Pennsylvania startup Abridge, aims to reduce workloads for clinicians and improve care for patients. Shivdev Rao, the company’s CEO and a cardiologist, told The Register doctors can spend hours writing up notes from their previous patient sessions outside their usual work schedules.

“That really adds up over time, and I think it has contributed in large part to this public health crisis that we have right now around doctors and nurses burning out and leaving the profession.” Clinicians will often have to transcribe audio recordings or recall conversations from memory when writing their notes, Rao added.

[…]

Abridge’s software automatically generates summaries of medical conversations using AI and natural language processing algorithms. In a short demo, The Register pretended to be a mock patient talking to Rao about suffering from shortness of breath, diabetes, and drinking three bottles of wine every week. Abridge’s software was able to note down things like symptoms, medicines recommended by the doctor, and actions the clinician should follow up on in future appointments.

The code works by listening out for keywords and classifying important information. “If I said take Metoprolol twice, an entity would be Metoprolol, and then twice a day would be an attribute. And if I said by mouth, that’s another attribute. And we could do the same thing with the wine example. Wine would be an entity, an attribute would be three bottles, and another attribute every night.”

“We’re creating a structured data dataset; [the software is] classifying everything that I said and you said into different categories of the conversation. But then once it’s classified all the information, the last piece is generative.”

At this point, Rao explained, Abridge uses a transformer-based model to generate a document piecing together the classified information into short sentences under various subsections describing a patient’s previous history of illness and future plans or actions to take.
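A sketch of what that intermediate structured record might look like for the demo conversation above; the field names are assumptions for illustration, not Abridge’s actual schema:

```python
# Entities and attributes classified from the dialogue, before the
# generative step assembles them into note subsections.
extracted = [
    {
        "entity": "Metoprolol",
        "category": "medication",
        "attributes": {"frequency": "twice a day", "route": "by mouth"},
    },
    {
        "entity": "wine",
        "category": "substance use",
        "attributes": {"quantity": "three bottles", "frequency": "every week"},
    },
]

# A transformer-based model would then turn these records into short
# sentences under subsections such as history, plan, and follow-ups.
```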

[…]

Physicians can edit the notes further, whilst patients can access them in an app. Rao likened Abridge’s technology to a copilot, and was keen to emphasize that doctors remain in charge, and should check and edit the generated notes if necessary. Both patients and doctors also have access to recordings of their meetings, and can click on specific keywords to have the software play back parts of the audio when the specific word was uttered during their conversation.

“We’re going all the way from the summary we put in front of users and we’re tracing it back to the ground truth of the conversation. And so if I have a conversation, and I couldn’t recall something happening, I can always double-check that this wasn’t a hallucination. There are models in between that are making sure to not expose something that was not discussed.”

[…]

Source: US hospital rolls out AI ‘copilot’ for doctors’ paperwork • The Register

Microsoft Adds DALL-E AI Image Generator to Bing

Microsoft on Tuesday announced that it is using an advanced version of OpenAI’s DALL-E image generator to power its own Bing search engine and Edge browser. Like DALL-E before it, the newly announced Bing Image Creator will generate a set of images for users based on a line of written text. The addition of image content in Bing further entrenches its early lead against competitors in Big Tech’s rapidly evolving race for AI dominance. Google announced it opened access to its Bard chatbot the same day, nearly a month after Microsoft added ChatGPT to Bing.

“By typing in a description of an image, providing additional context like location or activity, and choosing an art style, Image Creator will generate an image from your own imagination,” Microsoft head of consumer marketing Yusuf Mehdi said in a statement. “It’s like your creative copilot.”

For the Edge browser, Microsoft says its new Image Creator will appear as a new icon in the Edge sidebar.

[…]

Source: Microsoft Adds DALL-E AI Image Generator to Bing

Sign up to try the new AI Bard from Google

Today we’re starting to open access to Bard, an early experiment that lets you collaborate with generative AI. This follows our announcements from last week as we continue to bring helpful AI experiences to people, businesses and communities.

You can use Bard to boost your productivity, accelerate your ideas and fuel your curiosity. You might ask Bard to give you tips to reach your goal of reading more books this year, explain quantum physics in simple terms or spark your creativity by outlining a blog post. We’ve learned a lot so far by testing Bard, and the next critical step in improving it is to get feedback from more people.

About Bard

Bard is powered by a research large language model (LLM), specifically a lightweight and optimized version of LaMDA, and will be updated with newer, more capable models over time. It’s grounded in Google’s understanding of quality information. You can think of an LLM as a prediction engine. When given a prompt, it generates a response by selecting, one word at a time, from words that are likely to come next. Picking the most probable choice every time wouldn’t lead to very creative responses, so there’s some flexibility factored in. We continue to see that the more people use them, the better LLMs get at predicting what responses might be helpful.
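That “flexibility” is commonly implemented as temperature sampling. A minimal sketch of the idea under that assumption (the vocabulary and scores here are made up; Google hasn’t published Bard’s decoding settings):

```python
import numpy as np

def sample_next_word(logits: np.ndarray, vocab: list[str],
                     temperature: float = 0.8,
                     rng=np.random.default_rng()) -> str:
    # Lower temperature sharpens the distribution toward the top choice;
    # higher temperature flattens it, allowing more surprising picks.
    probs = np.exp(logits / temperature)
    probs /= probs.sum()
    return str(rng.choice(vocab, p=probs))

vocab = ["books", "novels", "quasars"]
logits = np.array([2.0, 1.5, 0.1])  # model scores for each candidate word
print(sample_next_word(logits, vocab))  # usually "books", but not always
```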

While LLMs are an exciting technology, they’re not without their faults. For instance, because they learn from a wide range of information that reflects real-world biases and stereotypes, those sometimes show up in their outputs. And they can provide inaccurate, misleading or false information while presenting it confidently. For example, when asked to share a couple suggestions for easy indoor plants, Bard convincingly presented ideas…but it got some things wrong, like the scientific name for the ZZ plant.

[…]

Source: Sign up to try Bard from Google

Purported Chinese warships interfering with passenger planes

Australian airline Qantas issued standing orders to its pilots last week advising them that some of its fleet experienced interference on VHF stations from sources purporting to be the Chinese Military.

The Register has confirmed the reports.

The interference has been noticed in the western Pacific and South China Sea. Qantas has advised its crew to continue their assigned path and report interference to the controlling air traffic control authority.

The airline also has stated there have been no reported safety events.

Qantas operations order

Qantas’ warning follows a similar one from the International Federation of Air Line Pilots’ Associations (IFALPA) issued on March 2nd.

IFALPA said it had “been made aware of some airlines and military aircraft being called over 121.50 or 123.45 by military warships in the Pacific region, notably South China Sea, Philippine Sea, East of Indian Ocean.” According to the org, some flights contacted by the warships were provided vectors to avoid the airspace.

But while interfering with VHF can be disruptive, more concerning is that IFALPA said it has “reason to believe there may be interferences to GNSS and RADALT as well.”

RADALT is aviation jargon for radar altimeter – an instrument that tells pilots how far they are above the ground, so they can avoid hitting it. GNSS is the Global Navigation Satellite System.

Jamming GNSS navigation or radar altimeters can greatly disorient a pilot, or worse.

Of course, there is no telling if China is merely testing out its capabilities, performing these actions as a show of power, or has a deeper motive.

IFALPA recommended pilots who experience interference do not respond to warships, notify dispatchers and relevant air traffic control, and complete necessary reports.

China has asserted more control over Asia-Pacific waters. Outgoing Micronesian president David Panuelo recently accused Beijing of sending warnings to stay away from its ships when they entered his country’s territory. In an explosive letter, Panuelo said China also attempted to take control of the nation’s submarine cables and telecoms infrastructure.

Source: Purported Chinese warships interfering with passenger planes • The Register

RGB on your PC – OEM bloatware alternatives tested (with an ASUS)

RGB on your PC is cool, it’s beautiful, and it can be quite nuts, but it’s also quite complex, and trying to get it to do what you want isn’t always easy. This article is the result of many, many reboots and much Googling.

I set up a PC with 2×3 Lian Li Uni Fan SL120 (top and side), 2 Lian Li Strimer cables (an ATX and a PCIe), an NZXT Kraken Z73 CPU cooler (with LED screen, but cooled by the Lian Li fans on the side, not the NZXT fans that came with it), 2 RGB DDR5 DRAM modules, an ASUS ROG GeForce RTX 2070 Super, an ASUS ROG Strix Z690-F Gaming WiFi motherboard and a Corsair K95 RGB keyboard.

Happy rainbow colours! It seems to default to this every time I change stuff

It’s no mean feat doing all the wiring on the fan controllers nowadays, and the instructions don’t make it much easier. Here is the wiring setup for this (excluding the keyboard)

The problem is that all of this hardware comes with its own bloated, janky software in order to get it to do stuff.

ASUS: Armory Crate / ASUS AURA

This thing takes up loads of memory and breaks often.

I decided to get rid of it once it had problems updating my drivers. You can still download Aura separately (although there is a warning that it will no longer be updated). To uninstall Armory Crate you can’t just uninstall everything from Add or Remove Programs; you need the uninstall tool, which will also get rid of the scheduled tasks and a directory the Windows uninstallers leave behind.

Once you install Aura separately, it still spawns an insane number of processes, but you don’t actually need to run Aura to change the RGBs on the VGA and DRAM. Oddly enough, not the motherboard itself, though.

Just running AURA, not Armory Crate

You also can use other programs. Theoretically. That’s what the rest of this article is about. But in the end, I used Aura.

If you read on, it may be that I can’t get a lot of the other stuff to work because I don’t have Armory Crate installed. But nothing works if I don’t have Aura installed, so I may as well use that.

Note: if you want to follow your driver updates, there’s a thread on the Republic of Gamers website that follows a whole load of them.

Problem I never solved: getting the Motherboard itself to show under Aura.

Corsair: iCUE

Yup, this takes up memory, works pretty well, keeps updating for no apparent reason, and quite often I have to slide the keyboard’s switch left and right to get it detected as a USB device so the lighting works again. In terms of interface it’s quite easy to use.

Woohoo! All these processes for keyboard lighting!

It detects the motherboard and can monitor it, but can’t control its lighting. Once upon a time it did. Maybe this is because I’m not running the whole Armory Crate thing any more.

No idea.

Note: if you turn everything on in the dashboard, memory usage goes up to 500 MB.

In fact, just having the iCUE screen open uses up ~200MB of memory.

It’s the most user friendly way of doing keyboard lighting effects though, so I keep it.

OpenRGB

This is the open source alternative that works on Windows and Linux. Yay! The GitLab page is here.

When I first started running it, it told me I needed to run it as an administrator to get a driver working. I ran it and it hung my computer at device detection. Later on it started rebooting the machine instead. After installing the underlying ASUS Aura services, it ran for me. [Note: the following is for the standard 0.8 build: it ran once. It reboots my PC after device detection now. Lots of people on Reddit have it working; maybe it needs the Armory Crate software. I have opened an issue, hopefully it will get fixed? According to a Reddit user, this could be because “If you have armoury crate installed, OpenRGB cannot detect your motherboard, if your ram is ddr5 [note: which mine is], you’ll gonna have to wait or download the latest pipeline version”]

OK, so the Pipeline build does work and even detects my motherboard! Unfortunately it didn’t write the setting to the motherboard at first, so after a reboot it went back to rainbow. After my second attempt the setting seems to have stuck and survived the reboot. However, it still hangs the computer on reboot (everything turns off except the PC itself) and it can take quite some time to open the interface. It also sometimes does and sometimes doesn’t detect the DRAM modules. Issue opened here.

Even with the interface open, the memory footprint is tiny!

Note that it saves the settings to C:\Users\razor\AppData\Roaming\OpenRGB and you can find the logs there too.
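One upside of OpenRGB when it does run: it exposes an SDK server (enable it in OpenRGB’s settings), which third-party clients such as openrgb-python can script. A minimal sketch, assuming that client is installed and the server is listening on the default port:

```python
from openrgb import OpenRGBClient
from openrgb.utils import RGBColor

# Connects to the OpenRGB SDK server, localhost:6742 by default.
client = OpenRGBClient()

# Set every detected device to one static color.
for device in client.devices:
    device.set_color(RGBColor(128, 0, 255))
```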

SignalRGB

This looks quite good at first glance – it detected my devices and was able to apply effects to all of them at once. Awesome! Unfortunately it has a huge memory footprint (around 600MB!) and doesn’t write the settings to the devices, so if after a reboot you don’t run SignalRGB the hardware won’t show any lighting at all, they will all be turned off.

It comes in a free tier with most of what you need and a paid subscription tier, which costs $4 per month = $48 per year! Considering what this does and the price of most of these kinds of one-trick-pony utils (one-time fee ~$20), this is incredibly high. On Reddit the developers are aggressive in saying they need the money to keep developing in order to support new hardware, and that if you think they are charging a lot for this you are nuts. Also, in order to download the free effects you need an account with them.

So nope, not using this.

JackNet RGBSync

Another open source RGB tool; I got it to detect my keyboard and not much else. Development stopped in 2020. The UI leaves a lot to be desired.

Gigabyte RGB Fusion

Googling alternatives to Aura, you will run into this one. It’s not compatible with my rig and doesn’t detect anything. Not really too surprising, considering my hardware is all from their competitor, ASUS.

L-Connect 2 and 3

For the Lian Li fans and the Strimer cables I use L-Connect 2. It has a setting saying it should take over the motherboard setting, but this has stopped working. Maybe I need Armory Crate. It’s a bit clunky (to change settings you need to select which fans in the array you want to send an effect to, and it always shows 4 arrays of 4 fans, which I don’t actually have), but it writes settings to the devices, so you don’t need it running in the background.

L-Connect 3 runs extremely slowly. It’s not hung, it’s just incredibly slow. I don’t know why, but it could be Armory Crate related.

NZXT CAM

This one you need in the background, or the LED screen on the Kraken will show the default: CPU temperature only. It takes a very long time to start up. It also requires quite a bit of memory to run, which is pretty bizarre if all you want to do is show a few animated GIFs on your CPU cooler in carousel mode.

Interface up on the screen
Running in the background

So, it’s shit, but you really, really need it if you want the display on the CPU cooler to work.

Fan Control

So not really RGB, but related, is Fan Control for Windows

G-Helper also works for fan control and GPU switching

Conclusion

None of the alternatives really works very well for me. None of them can control the Lian Li Strimer devices, and most of the rest control only some of my hardware or come with prohibitive licenses for what they are. What is more, in order to use the alternatives, you still need to install the ASUS motherboard driver, which is exactly what I had been hoping to avoid. OpenRGB shows the most promise but is still not quite there yet – though it does work for a lot of people, so hopefully it will work for you too. Good luck and prepare to reboot… A lot!

Qubits put new spin on magnetism: Boosting applications of quantum computers

[…] “With the help of a quantum annealer, we demonstrated a new way to pattern […],” said Alejandro Lopez-Bezanilla, a virtual experimentalist in the Theoretical Division at Los Alamos National Laboratory. Lopez-Bezanilla is the corresponding author of a paper about the research in Science Advances.

“We showed that a magnetic quasicrystal lattice can host states that go beyond the zero and one bit states of classical information technology,” Lopez-Bezanilla said. “By applying a magnetic field to a finite set of spins, we can morph the magnetic landscape of a quasicrystal object.”

[…]

Lopez-Bezanilla selected 201 qubits on the D-Wave computer and coupled them to each other to reproduce the shape of a Penrose quasicrystal.

Since Roger Penrose conceived the aperiodic structures named after him in the 1970s, no one had put a spin on each of their nodes to observe their behavior under the action of a magnetic field.

“I connected the qubits so all together they reproduced the geometry of one of his quasicrystals, the so-called P3,” Lopez-Bezanilla said. “To my surprise, I observed that applying specific external magnetic fields on the structure made some qubits exhibit both up and down orientations with the same probability, which leads the P3 to adopt a rich variety of magnetic shapes.”

Manipulating the interaction strength between the qubits, and between the qubits and the external field, causes the quasicrystals to settle into different magnetic arrangements, offering the prospect of encoding more than one bit of information in a single object.

Some of these configurations exhibit no precise ordering of the qubits’ orientation.

“This can play in our favor,” Lopez-Bezanilla said, “because they could potentially host a quantum quasiparticle of interest for […].” A spin quasiparticle is able to carry information immune to external noise.

A quasiparticle is a convenient way to describe the collective behavior of a group of basic elements. Properties such as mass and charge can be ascribed to several spins moving as if they were one.
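To get a feel for what “coupling qubits and applying external fields” means in practice, here is a toy Ising-model sketch using D-Wave’s open-source dimod package. To be clear, this is not the paper’s 201-qubit Penrose P3 embedding – the five-spin ring and the field and coupling values below are made up purely for illustration, and the tiny problem is solved exactly instead of on an annealer.

```python
# Toy Ising model with D-Wave's dimod (pip install dimod).
# h holds the external field on each spin; J holds the qubit-qubit couplings.
import dimod

n = 5  # five spins on a ring; an odd antiferromagnetic ring is frustrated
h = {i: 0.1 for i in range(n)}                 # weak uniform external field
J = {(i, (i + 1) % n): 1.0 for i in range(n)}  # antiferromagnetic couplings

bqm = dimod.BinaryQuadraticModel.from_ising(h, J)

# Exact enumeration is fine at this size; on real hardware you would submit
# the same model to a quantum annealer and sample low-energy states instead.
sampleset = dimod.ExactSolver().sample(bqm)
print(sampleset.first)  # one of several degenerate lowest-energy spin patterns
```

The frustration in the odd ring yields several equally good ground states – a miniature analogue of the degenerate magnetic arrangements described above.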

More information: Alejandro Lopez-Bezanilla, Field-induced magnetic phases in a qubit Penrose quasicrystal, Science Advances (2023). DOI: 10.1126/sciadv.adf6631. www.science.org/doi/10.1126/sciadv.adf6631

Source: Qubits put new spin on magnetism: Boosting applications of quantum computers

AI-generated art may be protected, says US Copyright Office – requires meaningful creative input from a human

[…]

AI software capable of automatically generating images or text from an input prompt or instruction has made it easier for people to churn out content. Correspondingly, the USCO has received an increasing number of applications to register copyright protections for material, especially artwork, created using such tools.

US law states that intellectual property can be copyrighted only if it was the product of human creativity, and the USCO only acknowledges work authored by humans at present. Machines and generative AI algorithms, therefore, cannot be authors, and their outputs are not copyrightable.

Digital art, poems, and books generated using tools like DALL-E, Stable Diffusion, Midjourney, ChatGPT, or even the newly released GPT-4 will not be protected by copyright if they were created by humans using only a text description or prompt, USCO director Shira Perlmutter warned.

“If a work’s traditional elements of authorship were produced by a machine, the work lacks human authorship and the Office will not register it,” she wrote in a document outlining copyright guidelines.

“For example, when an AI technology receives solely a prompt from a human and produces complex written, visual, or musical works in response, the ‘traditional elements of authorship’ are determined and executed by the technology – not the human user.

“Instead, these prompts function more like instructions to a commissioned artist – they identify what the prompter wishes to have depicted, but the machine determines how those instructions are implemented in its output.”

The USCO will consider content created using AI if a human author has crafted something beyond the machine’s direct output. A digital artwork that was formed from a prompt, and then edited further using Photoshop, for example, is more likely to be accepted by the office. The initial image created using AI would not be copyrightable, but the final product produced by the artist might be.

Thus it would appear the USCO is simply saying: yes, if you use an AI-powered application to help create something, you have a reasonable chance of securing copyright, just as if you had used non-AI software. If it’s purely machine-made from a prompt, you need to put some more human effort into it.

In a recent case, officials registered a copyright certificate for a graphic novel containing images created using Midjourney. The overall composition and words were protected by copyright since they were selected and arranged by a human, but the individual images themselves were not.

“In the case of works containing AI-generated material, the Office will consider whether the AI contributions are the result of ‘mechanical reproduction’ or instead of an author’s ‘own original mental conception, to which [the author] gave visible form’. The answer will depend on the circumstances, particularly how the AI tool operates and how it was used to create the final work. This is necessarily a case-by-case inquiry,” the USCO declared.

Perlmutter urged people applying for copyright protection for any material generated using AI to state clearly how the software was used to create the content, and show which parts of the work were created by humans. If they fail to disclose this information accurately, or try to hide the fact it was generated by AI, USCO will cancel their certificate of registration and their work may not be protected by copyright law.

Source: AI-generated art may be protected, says US Copyright Office • The Register

So, very slowly but surely, the copyright authorities are starting to understand what this newfangled AI technology is all about.

So what happens when an AI edits an AI-generated artwork?

SCOPE Europe becomes the accredited monitoring body for a Dutch national data protection code of conduct

[…] SCOPE Europe is now accredited by the Dutch Data Protection Authority as the monitoring body of the Data Pro Code. On this occasion, SCOPE Europe celebrates its success in obtaining its second accreditation and looks forward to continuing its work on fostering trust in the digital economy.

When we were approached by NLdigital, the creators of the Data Pro Code, we knew that taking on the monitoring of a national code of conduct would be an exciting endeavor. As the first-ever accredited monitoring body for a transnational GDPR code of conduct, SCOPE Europe has built unique expertise in the field and is proud to apply it further in the context of another co-regulatory initiative.

The Code puts forward an accessible compliance framework for companies of all sizes, including micro, small and medium enterprises in the Netherlands. With the approval and now the accreditation of its monitoring body, the Data Pro Code will enable data processors to demonstrate GDPR compliance and boost transparency within the digital industry.

Source: PRESS RELEASE: SCOPE Europe becomes the accredited monitoring body for a Dutch national code of conduct: SCOPE Europe bvba/sprl

Civitai / stable diffusion

CivitAI is an AI image generator that isn’t hosted in the US, allowing for much more freedom of creation. It’s a really amazing system that gives Midjourney and DALL-E a run for their money.

Civitai is a platform that makes it easy for people to share and discover resources for creating AI art. Our users can upload and share custom models that they’ve trained using their own data, or browse and download models created by other users. These models can then be used with AI art software to generate unique works of art.

Cool, what’s a “Model?”

Put simply, a “model” refers to a machine learning algorithm or set of algorithms that have been trained to generate art or media in a particular style. This can include images, music, video, or other types of media.

To create a model for generating art, a dataset of examples in the desired style is first collected and used to train the model. The model is then able to generate new art by learning patterns and characteristics from the examples it was trained on. The resulting art is not an exact copy of any of the examples in the training dataset, but rather a new piece of art that is influenced by the style of the training examples.

Models can be trained to generate a wide range of styles, from photorealistic images to abstract patterns, and can be used to create art that is difficult or time-consuming for humans to produce manually.
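As a concrete (and entirely hypothetical) example of that workflow, this is roughly what using a checkpoint downloaded from Civitai looks like with the Hugging Face diffusers library – the file name and prompt are made up, so substitute whatever .safetensors model you actually downloaded.

```python
# Sketch of loading a community Stable Diffusion checkpoint with diffusers
# (pip install diffusers transformers torch). The checkpoint path is hypothetical.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_single_file(
    "models/some-civitai-checkpoint.safetensors",  # hypothetical downloaded model
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # assumes an NVIDIA GPU is available

# The style lives in the model weights; the prompt only steers the content.
image = pipe("a watercolor painting of a lighthouse at dusk").images[0]
image.save("output.png")
```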

Source: What the heck is Civitai? | Civitai