Contemplating art’s beauty found to boost abstract and ‘big picture’ thinking

[…] a new study from the University of Cambridge suggests that stopping to contemplate the beauty of artistic objects in a gallery or museum boosts our ability to think in abstract ways and consider the “bigger picture” when it comes to our lives.

Researchers say the findings offer evidence that engaging with artistic beauty helps us escape the “mental trappings of daily life,” such as current anxieties and to-do lists, and induces “psychological distancing”: the process of zooming out on your thoughts to gain clarity.

[…]

Researchers found that study participants who focused on the beauty of objects in an exhibition of ceramics were more likely to experience elevated psychological states enabling them to think “beyond the here and now,” and more likely to report feeling enlightened, moved, or transformed.

This was compared to participants who were simply asked to look intently at the artistic objects to match them with a series of line drawings. The findings are published in the journal Empirical Studies of the Arts.

[…]

“Our research indicates that engaging with the beauty of art can enhance abstract thinking and promote a different mindset to our everyday patterns of thought, shifting us into a more expansive state of mind.”

“This is known as psychological distancing, when one snaps out of the mental trappings of daily life and focuses more on the overall picture.”

[…]

Participants were randomly split into two groups: the ‘beauty’ group was asked to actively consider and then rate the beauty of each ceramic object they viewed, while the second group just matched a line drawing of the object with the artwork itself.

All participants were then tested on how they process information: whether they do so in a more practical or a more abstract way. For example, does ‘writing a letter’ mean literally putting pen to paper or sharing your thoughts? Is ‘voting’ marking a ballot or influencing an election? Is ‘locking a door’ inserting a key or securing a house?

“These tests are designed to gauge whether a person is thinking in an immediate, procedural way, as we often do in our day-to-day lives, or is attuned to the deeper meaning and bigger picture of the actions they take,” said Dr. Elzė Sigutė Mikalonytė, lead author of the study and a researcher at Cambridge’s Department of Psychology.

Across all participants, those in the beauty group scored almost 14% higher on average than the control group in abstract thinking. Although participants were told the study was about cognitive processes, they were also asked about their interests, and around half said they had an artistic hobby.

Among those, the effect was greater: those with an artistic hobby in the ‘beauty’ group scored over 25% higher on average for abstract thinking than those with an artistic hobby in the control group.

[…]

Emotional states of participants were also measured by asking about their feelings while completing the gallery task. Across all participants, those in the beauty group reported an average of 23% higher levels of “transformative and self-transcendent feelings”—such as feeling moved, enlightened and inspired—than the control group.

“Our findings offer empirical support for a long-standing philosophical idea that beauty appreciation can help people detach from their immediate practical concerns and adopt a broader, more abstract perspective,” said Mikalonytė.

Importantly, however, the beauty group did not report feeling any happier than the control group, suggesting that it was the engagement with beauty that influenced abstract thinking, rather than any overall positivity from the experience.

The latest study is part of a wider project led by Schnall exploring the effects of aesthetic experiences on cognition.

[…]

Source: Contemplating art’s beauty found to boost abstract and ‘big picture’ thinking

Adobe’s Procreate-like Digital Painting App Is Now Free for Everyone – and offers AI options

Adobe tools like Photoshop and Illustrator are household names for creative professionals on Mac and PC (though Affinity is trying hard to steal those paying customers). But now, Adobe is gunning for the tablet drawing and painting market by making its Fresco digital painting app completely free.

While Photoshop and Illustrator are on iPad, Procreate has instead become the go-to for digital creators there. This touch-first app was designed for creating digital art and simulating real-world materials. You can switch between hundreds of brush or pencil styles with just a single flick of the Apple Pencil, and while there are other competing apps like Clip Studio Paint (also available on desktop), Procreate’s $12.99 one-time fee makes it an attractive buy.

Released in 2019, the Fresco app, Adobe’s drawing app for iPadOS, iOS, and Windows, attempted to even the playing field where Photoshop couldn’t, but only provided access to basic features for free. A $10/year subscription provided access to over 1,000 additional brushes, more online storage, additional shapes, access to Adobe’s premium fonts collection, and most importantly, the ability to import custom brushes. Now, you get all of these for free on all supported platforms.

Even with this move, Adobe still has an uphill battle against other tablet apps that are already hugely popular in digital art communities and on social media. Procreate makes it quite easy to share, import, and customize brushes and templates online, giving it a lot of community support. Procreate is also very vocal about not using Generative AI in its products and keeping the app creator-friendly. With the influx of Generative AI tools elsewhere in the Creative Cloud, Adobe cannot make that promise, which could turn some away even if Fresco itself has yet to get any AI functionality.

What Fresco brings to the table is the Adobe ecosystem. It uses a very similar interface to other Adobe tools like Photoshop and Illustrator, making Adobe’s users feel at home. You can even use Photoshop brushes with it. Files are saved to Creative Cloud storage and are backed up automatically, making sure you never lose any data. Procreate, on the other hand, stores files locally, which makes it easier to lose them. Procreate is also exclusive to the iPad and iPhones (through the stripped-down Procreate Pocket) while Fresco works with Windows, too.

It’s unclear whether all of that is enough to help Adobe overtake years of hardline Procreate support, but given how popular Photoshop is among artists elsewhere, Fresco could now start to see some use as a lighter, free Photoshop alternative. At any rate, it’s worth trying out, although there’s no word on Android or MacOS versions.

Source: Adobe’s Procreate-like Digital Painting App Is Now Free for Everyone | Lifehacker

So Procreate probably doesn’t have the programming chops to build the AI additions that people want. Even the anti-AI artists who are vocal are a small minority, so for Procreate to bend to this crowd is a losing strategy.

After 33 Years, GameStop Shuts Down And Disappears ‘Game Informer’

[…]

Nobody is going to let decades of journalistic output just suddenly get disappeared out of nowhere… right?

When it comes to Game Informer, the GameStop-owned video game magazine that has been in production for over three decades, that’s exactly what just happened.

Staff at the magazine, which also publishes a website, weekly podcast, and online video documentaries about game studios and developers, were all called into a meeting on Friday with parent company GameStop’s VP of HR. In it they were told the publication was closing immediately, they were all laid off, and would begin receiving severance terms. At least one staffer was in the middle of a work trip when the team was told.

The sudden closure of Game Informer means that issue number 367, the outlet’s Dragon Age: The Veilguard cover story, will be its last. The entire website has been taken offline as well.

This isn’t link rot. It’s link decapitation. Every single URL from the Game Informer website now points only to the main site URL, with the following message posted on it.

After 33 thrilling years of bringing you the latest news, reviews, and insights from the ever-evolving world of gaming, it is with a heavy heart that we announce the closure of Game Informer.

From the early days of pixelated adventures to today’s immersive virtual realms, we’ve been honored to share this incredible journey with you, our loyal readers. While our presses may stop, the passion for gaming that we’ve cultivated together will continue to live on.

Thank you for being part of our epic quest, and may your own gaming adventures never end.

Barring anyone with physical copies of the magazine, or those who created their own online scans of those magazines, or whatever you can still get out of the Internet Archive, it’s all just gone. Thousands of articles and features, millions of words of journalistic output, simply erased. Even the ExTwitter account for the publication has been disappeared, despite it having been used to post the same message as on the website. What you will see if you go to that link for the disappeared tweet is an outpouring of sadness from all sorts of folks, including famed voice actors, content creators like Mega Ran, and even game studios, all eulogizing the beloved magazine.

And it seems that this shut down, almost certainly at the hands of CEO Ryan Cohen, occurred without any opportunity for those who produced all of this content to take backups for archive purposes.

[…]

And, because cultural disasters like this tend to be sprinkled with at least a dash of irony:

A recent in-depth feature on the retro game studio Digital Eclipse about gaming’s history and preservation is one of the stories that is no longer accessible. A write-up about Game Informer’s famous game vault, containing releases from across its decades-long history, is also inaccessible.

So a gaming journalism outfit failed to preserve its own features on game preservation. That would actually be funny if it weren’t so infuriating.

Source: After 33 Years, GameStop Shuts Down And Disappears ‘Game Informer’ | Techdirt

Wow, #GME you have lost a diamondhand. I no longer believe in this stonk.

Posted in Art

MTV News Website Goes Dark, Archives Pulled Offline – this is why we need online libraries

More than two decades’ worth of content published on MTVNews.com is no longer available after MTV appears to have fully pulled down the site and its related content. Content on its sister site, CMT.com, seems to have met a similar fate.

In 2023, MTV News was shuttered amid the financial woes of parent company Paramount Global. As of Monday, trying to access MTV News articles on mtvnews.com or mtv.com/news resulted in visitors being redirected to the main MTV website.

The now-unavailable content includes decades of music journalism comprising thousands of articles and interviews with countless major artists, dating back to the site’s launch in 1996. Perhaps the most significant loss is MTV News’ vast hip-hop-related archives, particularly its weekly “Mixtape Monday” column, which ran for nearly a decade in the 2000s and 2010s and featured interviews, reviews and more with many artists, producers and others early in their careers.

Former MTV News staffers posted on social media about the website shutdown and the scrubbing of the archives. “So, mtvnews.com no longer exists. Eight years of my life are gone without a trace,” Patrick Hosken, former music editor for MTV News, wrote on X. “All because it didn’t fit some executives’ bottom lines. Infuriating is too small a word.”

“sickening (derogatory) to see the entire @mtvnews archive wiped from the internet,” Crystal Bell, culture editor at Mashable and one-time entertainment director of MTV News, posted on X. “decades of music history gone…including some very early k-pop stories.”

“This is disgraceful. They’ve completely wiped the MTV News archive,” longtime Rolling Stone senior writer Brian Hiatt commented. “Decades of pop culture history research material gone, and why?”

Last week, Paramount Global’s CMT website similarly pulled its repository of country-music journalism dating back several decades.

Reps for MTV did not respond to requests for comment Monday.

Some observers noted that MTV News articles may be available through internet archiving services like the Wayback Machine, but according to Hiatt older MTV News articles do not show up via Wayback Machine.

In May 2023, Paramount Global shut down MTV News — which had already been severely downsized by layoffs in recent years — a move that came amid a 25% reduction in workforce across the Showtime/MTV Entertainment Studios and Paramount Media Networks groups in the U.S. The group is headed by president-CEO Chris McCarthy, who in late April was named one of the three co-CEOs running Paramount Global’s “Office of the CEO.”

MTV News began in the late ’80s with “The Week in Rock,” a show hosted by Kurt Loder, who became the first MTV News anchor.

Source: MTV News Website Goes Dark, Archives Pulled Offline

In the meantime, publishers go about trying to kill things that store our digital history, such as the Internet Archive.

500,000 Books Have Been Deleted From The Internet Archive’s Lending Library by Greedy Publishers

Internet Archive: Digital Lending is Fair Use, Not Copyright Infringement – a library is a library, whether it’s paper or digital

RIAA Attempts To Kill The World’s Greatest Library whilst it is down: Sues Internet Archive For Making It Possible To Hear Old 78s

Posted in Art

How private equity has used copyright to cannibalise the past at the expense of the future

Walled Culture has been warning about the financialisation and securitisation of music for two years now. Those obscure but important developments mean that the owners of copyrights are increasingly detached from the creative production process. They regard music as just another asset, like gold, petroleum or property, to be exploited to the maximum. A Guest Essay in the New York Times points out one of the many bad consequences of this trend:

Does that song on your phone or on the radio or in the movie theater sound familiar? Private equity — the industry responsible for bankrupting companies, slashing jobs and raising the mortality rates at the nursing homes it acquires — is making money by gobbling up the rights to old hits and pumping them back into our present. The result is a markedly blander music scene, as financiers cannibalize the past at the expense of the future and make it even harder for us to build those new artists whose contributions will enrich our entire culture.

As well as impoverishing our culture, the financialisation and securitisation of music is making life even harder for the musicians it depends on:

In the 1990s, as the musician and indie label founder Jenny Toomey wrote recently in Fast Company, a band could sell 10,000 copies of an album and bring in about $50,000 in revenue. To earn the same amount in 2024, the band’s whole album would need to rack up a million streams — roughly enough to put each song among Spotify’s top 1 percent of tracks. The music industry’s revenues recently hit a new high, with major labels raking in record earnings, while the streaming platforms’ models mean that the fractions of pennies that trickle through to artists are skewed toward megastars.

Part of the problem is the extremely low rates paid by streaming services. But the larger issue is the power imbalance within all the industries based on copyright. The people who actually create books, music, films and the rest are forced to accept bad deals with the distribution companies. Walled Culture the book (free ebook versions) details the painfully low income the vast majority of artists derive from their creativity, and how most are forced to take side jobs to survive. This daily struggle is so widespread that it is no longer remarked upon. It is one of the copyright world’s greatest successes that the public and many creators now regard this state of affairs as a sad but unavoidable fact of life. It isn’t.

The New York Times opinion piece points out that there are signs private equity is already moving on to its next market/victim, having made its killing in the music industry. But one thing is for sure. New ways of financing today’s exploited artists are needed, and not ones cooked up by Wall Street. Until musicians and creators in general take back control of their works, rather than acquiescing in the hugely unfair deal that is copyright, it will always be someone else who makes most of the money from their unique gifts.

Source: How private equity has used copyright to cannibalise the past at the expense of the future – Walled Culture

Of course, the whole model of continuously making money from a single creation is a bit fucked up. If a businessman were to ask for money every time someone read their email that would be plain stupid. How is this any different?

Song lyrics really are getting simpler, more repetitive

You’re not just getting older. Song lyrics really are becoming simpler and more repetitive, according to a study published on Thursday.

Lyrics have also become angrier and more self-obsessed over the last 40 years, the study found, reinforcing the opinions of cranky aging music fans everywhere.

A team of European researchers analyzed the words in more than 12,000 English-language songs across the genres of rap, country, pop, R&B and rock from 1980 to 2020.

[…]

For the study in the journal Scientific Reports, the researchers looked at the emotions expressed in lyrics, how many different and complicated words were used, and how often they were repeated.
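As a crude illustration of what such measures can look like (this is a toy example, not the study’s actual methodology), vocabulary richness and line repetition can each be reduced to a couple of lines of Python:

```python
# Toy measures of lyric simplicity and repetitiveness (illustrative only).
def type_token_ratio(lyrics: str) -> float:
    """Distinct words divided by total words: lower means a simpler vocabulary."""
    words = lyrics.lower().split()
    return len(set(words)) / len(words) if words else 0.0

def repeated_line_share(lyrics: str) -> float:
    """Fraction of lines that are repeats of an earlier line."""
    lines = [l.strip() for l in lyrics.splitlines() if l.strip()]
    return 1 - len(set(lines)) / len(lines) if lines else 0.0

song = "na na na\nhey hey\nna na na\nna na na"
print(type_token_ratio(song), repeated_line_share(song))  # low ratio, high repetition
```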

[…]

The results also confirmed previous research which had shown a decrease in positive, joyful lyrics over time and a rise in those that express anger, disgust or sadness.

Lyrics have also become much more self-obsessed, with words such as “me” or “mine” becoming much more popular.

‘Easier to memorize’

The number of repeated lines rose most in rap over the decades, Zangerle said—adding that it obviously had the most lines to begin with.

“Rap music has become more angry than the other genres,” she added.

The researchers also investigated which songs the fans of different genres looked up on the lyric website Genius.

Unlike other genres, rock fans most often looked up lyrics from older songs, rather than new ones.

Rock has tumbled down the charts in recent decades, and this could suggest fans are increasingly looking back to the genre’s heyday, rather than its present.

Another way that music has changed is that “the first 10-15 seconds are highly decisive for whether we skip the song or not,” Zangerle said.

Previous research has also suggested that people tend to listen to music more in the background these days, she added.

Put simply, songs with more choruses that repeat basic lyrics appear to be more popular.

“Lyrics should stick easier nowadays, simply because they are easier to memorize,” Zangerle said.

“This is also something that I experience when I listen to the radio.”

More information: Eva Zangerle, Song lyrics have become simpler and more repetitive over the last five decades, Scientific Reports (2024). DOI: 10.1038/s41598-024-55742-x. www.nature.com/articles/s41598-024-55742-x

Source: Song lyrics are getting simpler, more repetitive: Study

Posted in Art

Lamborghini Is the Latest to Fall Victim to the Flat Logo Trend. Kills one of the most recognisable logos in the world

Would it surprise you to know that there are still some automotive brands out there that haven’t drained the texture and depth out of their famous logos yet? Lamborghini was actually one of those storied marques that hadn’t responded to the so-called digital revolution up until now and, I think at this point, you would’ve just chalked it up to Sant’Agata not really caring about stuff like that, because they’re freaking Lamborghini. But it’s Thursday, March 28, 2024, and the originator of Italian wedges on wheels has a “new” logo that’s a lot like their old one, only flat and with a typeface best described as looking like it was lifted from Google’s free collection.

This is Lambo’s latest logo, and I’ll tell you where my mind went straight away: the Brooklyn Nets! It looks like the shield for the basketball team Jay-Z used to have a stake in, especially in that black-and-white getup. The brand says that additionally, for the first time in its history, its raging bull will be separated from those borders in some uses, particularly on “digital touchpoints.” No example of that’s been provided yet, but you can imagine what that’ll look like.

Lamborghini’s announcement of the change also mentions a new custom typeface “that echoes the unmistakable lines and angularity of the cars.” I don’t know what that means, especially because the mockups the company’s shared with us have a variety of typefaces, and there’s no obvious way to know which, precisely, the press release is referring to. The one on the logo does look a lot like Google’s Roboto to me at first glance—which happens to be used on Lambo’s media portal—but it isn’t. In any case, it feels like a step back in terms of individuality, but that’s why these adjustments happen, after all. Even Lamborghini is concerned about falling behind the times.

Can you tell I’m just not feeling it? The whole “flat design” thing has been kicking around since like 2013, and some automakers, ever on the cutting edge of visual art, are only catching up to it now. The monochromatic look is often justified for its readability particularly on screens, but was anyone really having a hard time identifying Lambo’s shield and bull before? The way pretty much every brand has gone about this is to take their existing insignias and uncheck the blending options box on Photoshop, and listen, it just never results in anything interesting.

If you’ve gotta go flat, you should move to something that looks interesting and complete, flat. That’s what Honda’s done with the new treatment for its 0 Series EVs seen below, and I think it’s genius. The slashed zero looks like something I’d see in some kind of subtly unsettling futuristic Japanese story-driven action game, and the fact it also works as a skewed “H” is just so dang clever. Paul Rand’s Ford logo is another example of flatness with purpose, as it still looks progressive almost 60 years on.

Honda’s clever logo for its upcoming 0 Series EVs. Honda

What Lamborghini’s done here is far from the worst automotive logo redesign I’ve seen yet; that distinction would have to go to Peugeot or Citroën, which not only went for something unremarkable but obviously tried way too hard to come across as futuristic and aggressive. The only thing worse than being boring is being lame. Lamborghini was never going to reach as far, because it doesn’t have to. But like Ferrari, it should know by now that the hardest power move you can make as an iconic brand is to never change, especially when everyone else does.

Source: Lamborghini Is the Latest to Fall Victim to the Flat Logo Trend

So it looks like the company, which has a pretty awesome design aesthetic, has found someone’s son’s marketing company and spent a huge amount of money on a counterproductive and very poorly executed brand campaign. So it’s not only insulting that they damaged the logo, but they did so inconsistently and badly. And the most important questions (why? what do they hope to achieve by changing?) have not been asked.

Posted in Art

Rooster Teeth (Red vs Blue) Shut Down By WB Discovery After Two Decades


Rooster Teeth, a Warner Bros. Discovery Global Streaming & Interactive Entertainment subsidiary, is ending operations after 20+ years. The news was announced on March 6 in a company memo and blog post on the digital content creator’s site.

Earlier today, the news of Rooster Teeth shutting down was first shared at an all-hands company meeting followed by an internal memo from RT’s general manager, Jordan Levin. This memo was then posted alongside a message from community director Chelsea Atkinson confirming that the site was winding down, and adding that a livestream about the shutdown was planned for tomorrow, March 7.

“Since inheriting ownership and control of Rooster Teeth from AT&T following its acquisition of TimeWarner, Warner Bros. Discovery continued its investment in our company, content, and community,” said Levin in the memo.

“Now however, it’s with a heavy heart I announce that Rooster Teeth is shutting down due to challenges facing digital media resulting from fundamental shifts in consumer behavior and monetization across platforms, advertising, and patronage.”

[…]

Rooster Teeth started back in 2003 in Texas. It was founded by Burnie Burns, Matt Hullum, Geoff Ramsey, Jason Saldaña, Gus Sorola, and Joel Heyman. The company’s first big hit was the Halo machinima series, Red Vs. Blue. That show would become incredibly popular, leading to millions of views, DVDs, spin-offs, and loads of merchandise. Elijah Wood even had a role in one season. The show’s 19th and final season is still set to arrive later this year.

[…]

Source: Rooster Teeth Shut Down By WB Discovery After Two Decades

Posted in Art

OpenAI latest to add ‘Made by AI’ metadata to model output

Images emitted by OpenAI’s generative models will include metadata disclosing their origin, which in turn can be used by applications to alert people to the machine-made nature of that content.

Specifically, the Microsoft-championed super lab is, as expected, adopting the Content Credentials specification, which was devised by the Coalition for Content Provenance and Authenticity (C2PA), an industry body backed by Adobe, Arm, Microsoft, Intel, and more.

Content Credentials is pretty simple and specified in full here: it uses standard data formats to store within media files details about who made the material and how. This metadata isn’t directly visible to the user and is cryptographically protected so that any unauthorized changes are obvious.

Applications that support this metadata, when they detect it in a file’s contents, are expected to display a little “cr” logo over the content to indicate there is Content Credentials information present in that file. Clicking on that logo should open up a pop-up containing that information, including any disclosures that the stuff was made by AI.

How the C2PA ‘cr’ logo might appear on an OpenAI-generated image in a supporting app. Source: OpenAI

The idea here is that it should be immediately obvious to people viewing or editing stuff in supporting applications – from image editors to web browsers, ideally – whether or not the content on screen is AI made.

[…]

The Content Credentials strategy isn’t foolproof, as we’ve previously reported. The metadata can easily be stripped out, files can be exported without it, or the “cr” can be cropped out of screenshots, so no “cr” logo will appear on the material in any application down the line. It also relies on apps and services to support the specification, whether they are creating or displaying media.
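As a minimal illustration of that fragility (assuming a locally saved, Content Credentials-tagged PNG; the file names are placeholders), simply re-encoding an image with a common library tends to discard embedded provenance metadata:

```python
# Re-saving a PNG with Pillow writes only the pixel data and basic chunks;
# ancillary metadata embedded by the generator is not copied across unless
# you explicitly carry it over, so the provenance info is silently lost.
from PIL import Image

with Image.open("dalle_output.png") as im:
    im.save("stripped_copy.png")  # new file, no Content Credentials metadata
```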

To work at scale and gain adoption, it also needs some kind of cloud system that can be used to restore removed metadata, which Adobe happens to be pushing, as well as a marketing campaign to spread brand awareness. Increase its brandwidth, if you will.

[…]

In terms of file-size impact, OpenAI insisted that a 3.1MB PNG file generated by its DALL-E API grows by about three percent (or about 90KB) when including the metadata.

[…]

Source: OpenAI latest to add ‘Made by AI’ metadata to model output • The Register

It’s a decent enough idea, a bit like an artist signing their works. Just hopefully it won’t look so damn ugly as in the example and each AI will have their own little logo.

Generative AI Will Be A Huge Boon For The Public Domain, Unless Copyright Blocks It

A year ago, I noted that many of Walled Culture’s illustrations were being produced using generative AI. During that time, AI has developed rapidly. For example, in the field of images, OpenAI has introduced DALL-E 3 in ChatGPT:

When prompted with an idea, ChatGPT will automatically generate tailored, detailed prompts for DALL·E 3 that bring your idea to life. If you like a particular image, but it’s not quite right, you can ask ChatGPT to make tweaks with just a few words.

Ars Technica has written a good intro to the new DALL-E 3, describing it as “a wake-up call for visual artists” in terms of its advanced capabilities. The article naturally touches on the current situation regarding copyright for these creations:

In the United States, purely AI-generated art cannot currently be copyrighted and exists in the public domain. It’s not cut and dried, though, because the US Copyright Office has supported the idea of allowing copyright protection for AI-generated artwork that has been appreciably altered by humans or incorporated into a larger work.

The article goes on to explore an interesting aspect of that situation:

there’s suddenly a huge new pool of public domain media to work with, and it’s often “open source”—as in, many people share the prompts and recipes used to create the artworks so that others can replicate and build on them. That spirit of sharing has been behind the popularity of the Midjourney community on Discord, for example, where people typically freely see each other’s prompts.

When several mesmerizing AI-generated spiral images went viral in September, the AI art community on Reddit quickly built off of the trend since the originator detailed his workflow publicly. People created their own variations and simplified the tools used in creating the optical illusions. It was a good example of what the future of an “open source creative media” or “open source generative media” landscape might look like (to play with a few terms).

There are two important points there. First, that the current, admittedly tentative, status of generative AI creations as being outside the copyright system means that many of them, perhaps most, are available for anyone to use in any way. Generative AI could drive a massive expansion of the public domain, acting as a welcome antidote to constant attempts to enclose the public domain by re-imposing copyright on older works – for example, as attempted by galleries and museums.

The second point is that without the shackles of copyright, these creations can form the basis of collaborative works among artists willing to embrace that approach, and to work with this new technology in new ways. That’s a really exciting possibility that has been hard to implement without recourse to legal approaches like Creative Commons. Although the intention there is laudable, most people don’t really want to worry about the finer points of licensing – not least out of fear that they might get it wrong, and be sued by the famously litigious copyright industry.

A situation in which generative AI creations are unequivocally in the public domain could unleash a flood of pent-up creativity. Unfortunately, as the Ars Technica article rightly points out, the status of AI generated artworks is already slightly unclear. We can expect the copyright world to push hard to exploit that opening, and to demand that everything created by computers should be locked down under copyright for decades, just as human inspiration generally is from the moment it is in a fixed form. Artists should enjoy this new freedom to explore and build on generative AI images while they can – it may not last.

Source: Generative AI Will Be A Huge Boon For The Public Domain, Unless Copyright Blocks It | Techdirt

Magic: The Gathering Bans the Use of Generative AI in ‘Final’ Products – Wizards of the Coast cancelled themselves

[…] a D&D artist confirmed they had used generative AI programs to finish several pieces of art included in the sourcebook Glory of the Giants—saw Wizards of the Coast publicly ban the use of AI tools in the process of creating art for the venerable TTRPG. Now, the publisher is making that clearer for its other wildly successful game in Magic: The Gathering.

Update 12/19 11.20PM ET: This post has been updated to include clarification from Wizards of the Coast regarding the extent of guidelines for creatives working with Magic and D&D and the use of Generative A.I.

“For 30 years, Magic: The Gathering has been built on the innovation, ingenuity, and hard work of talented people who sculpt a beautiful, creative game. That isn’t changing,” a new statement shared by Wizards of the Coast on Daily MTG begins. “Our internal guidelines remain the same with regard to artificial intelligence tools: We require artists, writers, and creatives contributing to the Magic TCG to refrain from using AI generative tools to create final Magic products. We work with some of the most talented artists and creatives in the world, and we believe those people are what makes Magic great.”

[…]

The Magic statement also comes in the wake of major layoffs at Wizards’ parent company Hasbro. Last week the Wall Street Journal reported that Hasbro plans to lay off 1,100 staff over the next six months across its divisions in a series of cost-cutting measures, with many creatives across Wizards’ D&D and Magic teams confirming they were part of the layoffs. Just this week, the company faced backlash for opening a position for a Digital Artist at Wizards of the Coast in the wake of the job cuts, which totaled roughly a fifth of Hasbro’s current workforce across all of its divisions.

The job description specifically highlights that the role includes having to “refine and modify illustrative artwork for print and digital media through retouching, color correction, adjusting ink density, re-sizing, cropping, generating clipping paths, and hand-brushing spot plate masks,” as well as “use… digital retouching wizardry to extend cropped characters and adjust visual elements due to legal and art direction requirements,” which critics suggested carried the implication that the role would involve iterating on and polishing art created through generative AI. Whether or not this will be the case considering Wizards’ now-publicized stance remains to be seen.

Source: Magic: The Gathering Formally Bans the Use of Generative AI in ‘Final’ Products

The Gawker company is very anti-AI and keeps mentioning backlash. It’s quite funny that if you look at the supposed “backlash” – the complaints are mostly about the lack of quality control around said art – in as much as people thought the points raised were valid at all (source: twitter page with original disclosure). It’s a kind of cancel culture cave-in, where a minority gets to play the role of judge, jury and executioner and the person being cancelled actually… listens to the canceller with no actual evidence of their crime being presented or weighed independently.

Nissan 300ZX Owner Turns Ford Digital Dash Into Wicked Retro Display – why don’t all automakers allow digital dash theming?!

You’ve got to love a project with amazing elements of both art and science. Nissan 300ZX enthusiast and talented tinkerer Kelvin Elsner has been working on this custom vaporwave-aesthetic digital gauge cluster for months. It’s not in a car yet, but it’s an amazing design and computer coding feat for one guy in his home shop.


Blitzen Design Lab/YouTube

Elsner and I are in at least one of the same Z31 groups (that’s the chassis code for the ’80s 300ZX) on Facebook and every once in a while over the last few years, he’s dropped an update on his quest to make a unique, modern, digital gauge cluster for his Z car. This week, he dropped a cute video with a great overview of his project which made me realize just how complex this undertaking has been. It even made its way to another car site before I had a chance to write it up (nice grab, Lewin)!

Anyway, Elsner here has taken a digital gauge cluster from a modern Ford, reprogrammed it, designed a super cool physical overlay for it, and set it up to be an incredibly cool retro-futuristic upgrade for his 300ZX. Not only that, but he worked out a security-encoded ignition key and retrofitted a power mirror-tilt control to act as a controller for the screen! Watch how he did it here:

The pacing of this video is more mellow than what usually goes viral on YouTube, which is another reason why I like it so much. I strongly recommend sitting down for an earnest end-to-end watch.

The Z31 famously had an optional digital dash when it was new, but “digital” by ’80s standards was more like a calculator display. Elsner’s system retains that vaporwave caricature aesthetic while leveraging the modern, crisp resolution of a Ford Explorer gauge cluster. The 3D overlay is really what brings it home for me, though.


Here’s what the factory Z31 digi-dash looks like. It’s pretty cool in its own right. Michael’s Motor Cars/YouTube

You can add all the colors and animations you want, but that physical depth is what makes a gauge cluster visually interesting and distinctive. Take note, automakers.

I shot Elsner some messages on Facebook about his project. I’m grateful to say he replied, so I can share some elaborations on what he presented in the video. I’ll trim and paraphrase the details he shared.

He’s not an automotive engineer by trade, considers this project a hobby, and doesn’t currently have any plans for mass production or marketing for sale.

As far as the time investment, the first pictures of the project go as far back as 2019. “Time-wise I’d say it’s at least a good few months worth of work but it was spread out over a couple years, I only really had spare time in the evenings and definitely worked on it off and on,” Elsner wrote me on Facebook Messenger. And of course, it’s not running in a car yet, so we can’t quite say the mission is complete.

The part of this project I understand the least is how the display was hacked to show this cool synthwave sunset and move the gauges around. I’ll drop Elsner’s quote about firmware here wholesale so I don’t incorrectly paraphrase:

“The firmware stuff I stumbled on when I was researching how to get the cluster to work—you could get this cluster in Mondeos, but not in the Fusion in North America. It turns out a lot of people were swapping them in, and in the forums I was browsing I found that some folks had some modified software with pictures of their cars added into them.

“I was on a hunt for a while trying to figure out how to do the same, and I eventually came across a post in a Facebook group where some folks were discussing the subject, and someone finally made mention and linked to the software that was able to unpack the firmware graphics.

“This was called PimpMyFord, and then I used Forscan (another program that can be used to adjust module configurations on Ford models) to upload the firmware.”


Elsner used this Ford mirror control as a joystick, or mouse, so a user can cycle through menus. Blitzen Design Lab/YouTube

Another question I had after watching the video was—how the heck was this modern Ford gauge cluster going to interpret information from the sensors and senders in an ’80s Nissan? The Z31 I used to own had a cable-driven speedometer and a dang miniature phonograph to play the “door is open” warnings. Seems like translating those signals would be a little more involved than a USB to micro-USB adapter. I asked about that and Elsner added more detail:

“On the custom board I made, I have some microcontrollers that read the analog voltages and signals that were originally provided to the stock cluster, and they convert those readings into digital data. This is then used to construct canbus messages that imitate the original Ford ones, which are fed to the Ford cluster through an onboard transceiver … So as far as the cluster is concerned, it’s still connected to an Explorer that just has some weird things to say,” he wrote.
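To make that a little more concrete, here is a rough, hypothetical sketch of the pattern Elsner describes, written in Python with the python-can library. Everything specific in it (the arbitration ID, the scaling, the byte layout, even picking a fuel sender as the example signal) is invented for illustration; the real values come from reverse-engineering the Ford cluster and the Nissan’s senders, and Elsner’s own implementation runs on microcontrollers rather than Python.

```python
# Hypothetical sketch: read an analog sender voltage, convert it, and wrap it
# in a CAN frame that imitates what the donor Ford cluster expects.
import time
import can  # python-can

ADC_MAX = 1023          # 10-bit ADC reading (assumption)
VREF = 5.0              # sender reference voltage (assumption)
FUEL_FRAME_ID = 0x3B5   # made-up arbitration ID for the fuel gauge frame

def read_fuel_sender_voltage() -> float:
    """Placeholder for the microcontroller's ADC read of the '80s fuel sender."""
    raw = 512  # pretend ADC count
    return raw / ADC_MAX * VREF

def voltage_to_percent(volts: float) -> int:
    """Map the sender's 0-5 V swing onto 0-100 % fuel (simple linear assumption)."""
    return max(0, min(100, round(volts / VREF * 100)))

def main():
    bus = can.interface.Bus(channel="can0", interface="socketcan")
    while True:
        percent = voltage_to_percent(read_fuel_sender_voltage())
        # Pack the value into an 8-byte payload in whatever layout the cluster expects.
        msg = can.Message(arbitration_id=FUEL_FRAME_ID,
                          data=[percent, 0, 0, 0, 0, 0, 0, 0],
                          is_extended_id=False)
        bus.send(msg)
        time.sleep(0.1)  # clusters typically expect frames at a regular period

if __name__ == "__main__":
    main()
```

The point of the sketch is simply the translation step: analog signals in, imitation Ford CAN messages out, so the cluster never knows it left the Explorer.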

Here I am thinking I’m Tony Stark when I hack up a bit of square stock to make a fog light bracket, while this dude is creating a completely bespoke human-machine interface that looks cool enough to be a big-budget movie prop.

With the extinction of combustion engines looming as a near-future possibility, it’s easy to be cynical about the future of cars as a hobby. But projects like this get me fired up and optimistic that there’s still uncharted territory for creativity to thrive in car customization.

Check out Kelvin Elsner’s YouTube channel Blitzen Design Lab—he’s clearly up to some really cool stuff and I can’t wait to see what he comes up with next.

Source: Nissan 300ZX Owner Turns Ford Digital Dash Into Wicked Retro Display

Library of Babel Online – all books ever written or ever to be written, all images ever created or ever to be created can be found here

The Library of Babel is a place for scholars to do research, for artists and writers to seek inspiration, for anyone with curiosity or a sense of humor to reflect on the weirdness of existence – in short, it’s just like any other library. If completed, it would contain every possible combination of 1,312,000 characters, including lower case letters, space, comma, and period. Thus, it would contain every book that ever has been written, and every book that ever could be – including every play, every song, every scientific paper, every legal decision, every constitution, every piece of scripture, and so on. At present it contains all possible pages of 3200 characters, about 10^4677 books.

Since I imagine the question will present itself in some visitors’ minds (a certain amount of distrust of the virtual is inevitable) I’ll head off any doubts: any text you find in any location of the library will be in the same place in perpetuity. We do not simply generate and store books as they are requested – in fact, the storage demands would make that impossible. Every possible permutation of letters is accessible at this very moment in one of the library’s books, only awaiting its discovery. We encourage those who find strange concatenations among the variations of letters to write about their discoveries in the forum, so future generations may benefit from their research.
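To see how text can live at a fixed address without being stored, here is a toy sketch of the general idea. To be clear, this is not the site’s actual algorithm (which is invertible, so that any text can also be searched for and located); it only illustrates how deterministic generation from coordinates makes storage unnecessary.

```python
# Toy model: derive a page deterministically from its library coordinates,
# so the same address always yields the same text with zero storage.
import random
import string

ALPHABET = string.ascii_lowercase + " ,."   # the 29 characters the library uses
PAGE_LENGTH = 3200                          # characters per page

def page_at(hex_name: str, wall: int, shelf: int, volume: int, page: int) -> str:
    """Return the page at a given location; identical inputs always give identical text."""
    seed = f"{hex_name}:{wall}:{shelf}:{volume}:{page}"
    rng = random.Random(seed)               # seeded PRNG = deterministic output
    return "".join(rng.choice(ALPHABET) for _ in range(PAGE_LENGTH))

# The same coordinates give the same page on every call, in perpetuity.
assert page_at("a1b2", 3, 2, 17, 244) == page_at("a1b2", 3, 2, 17, 244)
```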

Source: About the Library

Black 4.0 Is The New Ultrablack paint

Vantablack is a special coating material, more so than a paint. It’s well-known as one of the blackest possible coatings around, capable of absorbing almost all visible light in its complex nanotube structure. However, it’s complicated to apply, delicate, and not readily available, especially to those in the art world.

It was these drawbacks that led Stuart Semple to create his own incredibly black paint. Over the years, he’s refined the formula and improved its performance, steadily building a greater product available to all. His latest effort is Black 4.0, and it’s promising to be the black paint to dominate all others.

 

Back in Black

This journey began in a wonderfully spiteful fashion. Upon hearing that one Anish Kapoor had secured exclusive rights to be the sole artistic user of Vantablack, he determined that something had to be done. Seven years ago, he set out to create his own ultra black paint that would far outperform conventional black paints on the market. Since his first release, he’s been delivering black paints that suck in more light and just simply look blacker than anything else out there.

Black 4.0 has upped the ante to a new level. Speaking to Hackaday, Semple explained the performance of the new paint, being sold through his Culture Hustle website. “Black 4.0 absorbs an astonishing 99.95% of visible light which is about as close to full light absorption as you’ll ever get in a paint,” said Semple. He notes this outperforms Vantablack’s S-Vis spray on product which only achieves 99.8%, as did his previous Black 3.0 paint. Those numbers are impressive, and we’d dearly love to see the new paint put to the test against other options in the ultra black market.

It might sound like mere fractional percentages, but it makes a difference. In sample tests, the new paint is more capable of fun visual effects since it absorbs yet more light. Under indoor lighting conditions, an item coated in Black 4.0 can appear to have no surface texture at all, looking to be a near-featureless black hole. Place an object covered in Black 4.0 on a surface coated in the same, and it virtually disappears. All the usual reflections and shadows that help us understand 3D geometry simply get sucked into the overwhelming blackness.

Black 4.0 compared to a typical black acrylic art paint. Credit: Stuart Semple

Beyond its greater light absorption, the paint has also seen a usability upgrade over Semple’s past releases. For many use cases, a single coat is all that’s needed. “It feels much nicer to use, it’s much more stable, more durable, and obviously much blacker,” he says, adding “The 3.0 would occasionally separate and on rare occasions collect little salt crystals at the surface, that’s all gone now.”

The added performance comes down to a new formulation of the paint’s “super-base” resin, which carries the pigment and mattifying compounds that give the paint its rich, dreamy darkness. It’s seen a few ingredient substitutions compared to previous versions, but a process change also went a long way to creating an improved product. “The interesting thing is that although all that helped, it was the process we used to make the paint that gave us the breakthrough, the order we add things, the way we mix them, and the temperature,” Semple told Hackaday.

The ultra black paint has a way of making geometry disappear. Credit: Stuart Semple

Black 4.0 is more robust than previous iterations, but it’s still probably not up to a full-time life out in the elements, says Semple. You could certainly coat a car in it, for example, but it probably wouldn’t hold up in the long term. He’s particularly excited for applications in astronomy and photography, where the extremely black paint can help catch light leaks and improve the performance of telescopes and cameras. It’s also perfect for creating an ultra black photographic backdrop, too.

No special application methods are required; Black 4.0 can be brush painted just like its predecessors. Indeed, it absorbs so much light that you probably don’t need to worry as much about brush marks as you usually would. Other methods, like using rollers or airbrushes, are perfectly fine, too.

Creating such a high-performance black paint didn’t come without challenges, either. Along the way, Semple contended with canisters of paint exploding, legal threats from others in the market, and one of the main scientists leaving the project. Wrangling supplies of weird and wonderful ingredients was understandably difficult, too.  Nonetheless, he persevered, and has now managed to bring the first batches to market.

The first batches ship in November, so if you’re eager to get some of the dark stuff, you’d better move quick. It doesn’t come cheap, but you’re always going to pay more for something claiming to be the world’s best. If you’ve got big plans, fear not—this time out, Semple will sell the paint in huge bulk 1 liter and 6 liter containers if you really need a job lot. Have fun out there, and if you do something radical, you know who to tell about it.

Source: Black 4.0 Is The New Ultrablack | Hackaday

Posted in Art

Judge dismisses most of artists’ AI copyright lawsuits against Midjourney, Stability AI

A judge in California federal court on Monday trimmed a lawsuit by visual artists who accuse Stability AI, Midjourney and DeviantArt of misusing their copyrighted work in connection with the companies’ generative artificial intelligence systems.

U.S. District Judge William Orrick dismissed some claims from the proposed class action brought by Sarah Andersen, Kelly McKernan and Karla Ortiz, including all of the allegations against Midjourney and DeviantArt. The judge said the artists could file an amended complaint against the two companies, whose systems utilize Stability’s Stable Diffusion text-to-image technology.

Orrick also dismissed McKernan and Ortiz’s copyright infringement claims entirely. The judge allowed Andersen to continue pursuing her key claim that Stability’s alleged use of her work to train Stable Diffusion infringed her copyrights.

The same allegation is at the heart of other lawsuits brought by artists, authors and other copyright owners against generative AI companies.

“Even Stability recognizes that determination of the truth of these allegations – whether copying in violation of the Copyright Act occurred in the context of training Stable Diffusion or occurs when Stable Diffusion is run – cannot be resolved at this juncture,” Orrick said.

The artists’ attorneys Joseph Saveri and Matthew Butterick said in a statement that their “core claim” survived, and that they were confident that they could address the court’s concerns about their other claims in an amended complaint to be filed next month.

A spokesperson for Stability declined to comment on the decision. Representatives for Midjourney and DeviantArt did not immediately respond to requests for comment.

The artists said in their January complaint that Stability used billions of images “scraped” from the internet, including theirs, without permission to teach Stable Diffusion to create its own images.

Orrick agreed with all three companies that the images the systems actually created likely did not infringe the artists’ copyrights. He allowed the claims to be amended but said he was “not convinced” that allegations based on the systems’ output could survive without showing that the images were substantially similar to the artists’ work.

The judge also dismissed other claims from the artists, including that the companies violated their publicity rights and competed with them unfairly, with permission to refile.

Orrick dismissed McKernan and Ortiz’s copyright claims because they had not registered their images with the U.S. Copyright Office, a requirement for bringing a copyright lawsuit.

The case is Andersen v. Stability AI Ltd, U.S. District Court for the Northern District of California, No. 3:23-cv-00201.

For the artists: Joseph Saveri of Joseph Saveri Law Firm; and Matthew Butterick

For Stability: Paul Schoenhard of Fried Frank Harris Shriver & Jacobson

For Midjourney: Angela Dunning of Cleary Gottlieb Steen & Hamilton

For DeviantArt: Andy Gass of Latham & Watkins

Read more:

Lawsuits accuse AI content creators of misusing copyrighted work

AI companies ask U.S. court to dismiss artists’ copyright lawsuit

US judge finds flaws in artists’ lawsuit against AI companies

Source: Judge pares down artists’ AI copyright lawsuit against Midjourney, Stability AI | Reuters

These suits are absolute nonsense. It’s like suing a person for having seen some art and made something a bit like it. It’s not very surprising that this has been wiped off the table.

Adobe previews AI upscaling to make blurry videos and GIFs look fresh

Adobe has developed an experimental AI-powered upscaling tool that greatly improves the quality of low-resolution GIFs and video footage. This isn’t a fully-fledged app or feature yet, and it’s not yet available for beta testing, but if the demonstrations seen by The Verge are anything to go by then it has some serious potential.

Adobe’s “Project Res-Up” uses diffusion-based upsampling technology (a class of generative AI that generates new data based on the data it’s trained on) to increase video resolution while simultaneously improving sharpness and detail.

In a side-by-side comparison that shows how the tool can upscale video resolution, Adobe took a clip from The Red House (1947) and upscaled it from 480 x 360 to 1280 x 960, increasing the total pixel count by 675 percent. The resulting footage was much sharper, with the AI removing most of the blurriness and even adding in new details like hair strands and highlights. The results still carried a slightly unnatural look (as many AI video and images do) but given the low initial video quality, it’s still an impressive leap compared to the upscaling on Nvidia’s TV Shield or Microsoft’s Video Super Resolution.

The footage below provided by Adobe matches what I saw in the live demonstration:

[Left: original, Right: upscaled] Running this clip from The Red House (1947) through Project Res-Up removes most of the blur and makes details like the character’s hair and eyes much sharper. Image: The Red House (1947) / United Artists / Adobe

Another demonstration showed a video being cropped to focus on a baby elephant, with the upscaling tool similarly boosting the low-resolution crop and eradicating most of the blur while also adding little details like skin wrinkles. It really does look as though the tool is sharpening low-contrast details that can’t be seen in the original footage. Impressively, the artificial wrinkles move naturally with the animal without looking overly artificial. Adobe also showed Project Res-Up upscaling GIFs to breathe some new life into memes you haven’t used since the days of MySpace.

[Left: original, Right: upscaled] Additional texture has been applied to this baby elephant to make the upscaled footage appear more natural and lifelike. Image: Adobe

The project will be revealed during the “Sneaks” section of the Adobe Max event later today, which the creative software giant uses to showcase future technologies and ideas that could potentially join Adobe’s product lineup. That means you won’t be able to try out Project Res-Up on your old family videos (yet) but its capabilities could eventually make their way into popular editing apps like Adobe Premiere Pro or Express. Previous Adobe Sneaks have since been released as apps and features, like Adobe Fresco and Photoshop’s content-aware tool.

Source: Adobe previews AI upscaling to make blurry videos and GIFs look fresh – The Verge

Cursed AI | Ken Loach’s 1977 film ‘Star Wars Episode IV – No Hope’

Ken Loach’s 1977 film ‘Star Wars Episode IV – No Hope’.
George Lucas was unhappy with Loach’s depressing subject matter combined with there being no actual space scenes (with all the action taking place on a UK council estate).
He immediately halted filming, recast many parts (Carrie Fisher replacing Kathy Burke for example), did extensive reshoots, and released his more family-friendly cut under new name ‘A New Hope’ (whatever that means!!)
The pair haven’t spoken since 😞
[…] (25 more in the gallery)

Source: Cursed AI | Ken Loach’s 1977 film ‘Star Wars Episode IV – No Hope’ | Facebook

E-Paper News Feed Illustrates The Headlines With AI-Generated Images

It’s hard to read the headlines today without feeling like the world couldn’t possibly get much worse. And then tomorrow rolls around, and a fresh set of headlines puts the lie to that thought. On a macro level, there’s not much that you can do about that, but on a personal level, illustrating your news feed with mostly wrong, AI-generated images might take the edge off things a little.

Let us explain. [Roy van der Veen] liked the idea of an e-paper display newsfeed, but the crushing weight of the headlines was a little too much to bear. To lighten things up, he decided to employ Stable Diffusion to illustrate his feed, displaying both the headline and a generated image on a 7.3″ Inky 7-color e-paper display. Every five hours, a script running on a Raspberry Pi Zero 2W fetches a headline from a random source — we’re pleased the list includes Hackaday — and composes a prompt for Stable Diffusion based on the headline, adding on a randomly selected prefix and suffix to spice things up. For example, a prompt might look like, “Gothic painting of (Driving a Motor with an Audio Amp Chip). Gloomy, dramatic, stunning, dreamy.” You can imagine the results.
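The prompt-composition step is easy to picture in a few lines of Python. This is a rough sketch with made-up prefix/suffix lists and a placeholder headline fetch, not [Roy]’s actual script:

```python
# Wrap a fetched headline in a randomly chosen prefix and suffix,
# mirroring the example prompt quoted above (lists here are invented).
import random

PREFIXES = ["Gothic painting of", "Watercolor sketch of", "Retro sci-fi poster of"]
SUFFIXES = ["Gloomy, dramatic, stunning, dreamy.", "Bright, whimsical, detailed."]

def fetch_headline() -> str:
    """Placeholder: the real script pulls a headline from a random news source."""
    return "Driving a Motor with an Audio Amp Chip"

def build_prompt(headline: str) -> str:
    """Compose the Stable Diffusion prompt from prefix + headline + suffix."""
    return f"{random.choice(PREFIXES)} ({headline}). {random.choice(SUFFIXES)}"

print(build_prompt(fetch_headline()))
# e.g. "Gothic painting of (Driving a Motor with an Audio Amp Chip). Gloomy, dramatic, stunning, dreamy."
```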

We have to say, from the examples [Roy] shows, the idea pretty much works — sometimes the images are so far off the mark that just figuring out how Stable Diffusion came up with them is enough to soften the blow. We’d have preferred if the news of the floods in Libya had been buffered by a slightly less dismal scene, but finding out that what was thought to be a “ritual mass murder” was really only a yoga class was certainly heartening.

Source: E-Paper News Feed Illustrates The Headlines With AI-Generated Images | Hackaday

WhisperFrame Depicts Your Conversations

At this point, you gotta figure that you’re at least being listened to almost everywhere you go, whether it be a home assistant or your very own phone. So why not roll with the punches and turn lemons into something like a still life of lemons that’s a bit wonky? What we mean is, why not take our conversations and use AI to turn them into art? That’s the idea behind this next-generation digital photo frame created by [TheMorehavoc].
Essentially, it uses a Raspberry Pi and a ReSpeaker four-mic array to listen to conversations in the room. It records 15-20 seconds of audio at a time and sends that to OpenAI’s Whisper API to generate a transcript.
This repeats until five minutes of audio is collected, then the entire transcript is sent through GPT-4 to extract an image prompt from a single topic in the conversation. Then, that prompt is shipped off to Stable Diffusion to get an image to be displayed on the screen. As you can imagine, the images generated run the gamut from really weird to really awesome.
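As a rough sketch of what that pipeline looks like in Python (our reconstruction, not [TheMorehavoc]’s actual code), assuming OpenAI’s Python client for the transcription and prompt-extraction steps and a hypothetical generate_image() helper standing in for the Stable Diffusion call:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def transcribe(wav_path: str) -> str:
    """Send one 15-20 second clip to the Whisper API and return its text."""
    with open(wav_path, "rb") as f:
        result = client.audio.transcriptions.create(model="whisper-1", file=f)
    return result.text

def extract_image_prompt(transcript: str) -> str:
    """Ask GPT-4 to distill ~5 minutes of conversation into one image prompt."""
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Pick one vivid topic from the conversation and write a "
                        "short Stable Diffusion prompt depicting it."},
            {"role": "user", "content": transcript},
        ],
    )
    return resp.choices[0].message.content

# transcript = " ".join(transcribe(clip) for clip in recorded_clips)
# frame_image = generate_image(extract_image_prompt(transcript))  # hypothetical helper
```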

The natural lulls in conversation presented a bit of a problem: transcription kept running during silences, presumably picking up ambient noise. The answer was voice activity detection software that gives a probability that a voice is present.
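Silero VAD is one freely available detector that reports a per-chunk speech probability; the sketch below is our illustration of that approach, not necessarily the exact tool used in this build:

```python
import torch

# Load the pretrained Silero VAD model from torch.hub.
model, utils = torch.hub.load(repo_or_dir="snakers4/silero-vad", model="silero_vad")
(_, _, read_audio, _, _) = utils

SAMPLE_RATE = 16000
CHUNK = 512  # Silero VAD expects 512-sample chunks at 16 kHz

wav = read_audio("clip.wav", sampling_rate=SAMPLE_RATE)
for i in range(0, len(wav) - CHUNK, CHUNK):
    speech_prob = model(wav[i:i + CHUNK], SAMPLE_RATE).item()
    if speech_prob > 0.5:
        print(f"speech detected at {i / SAMPLE_RATE:.2f}s (p={speech_prob:.2f})")
```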

Naturally, people were curious about the prompts for the images, so [TheMorehavoc] made a little gallery sign with a MagTag that uses Adafruit.io as the MQTT broker. Build video is up after the break, and you can check out the images here (warning, some are NSFW).

 

Source: WhisperFrame Depicts The Art Of Conversation | Hackaday

What? AI-Generated Art Banned from Future Dungeons & Dragons Books After “Fan Uproar” (Or ~1600 tweets about it)

A Dungeons & Dragons expansion book included AI-generated artwork. Fans on Twitter spotted it before the book was even released (noting, among other things, a wolf with human feet). An embarrassed representative for Wizards of the Coast then tweeted out an announcement about new guidelines stating explicitly that “artists must refrain from using AI art generation as part of their creation process for developing D&D art.” GeekWire reports: The artist in question, Ilya Shkipin, is a California-based painter, illustrator, and operator of an NFT marketplace, who has worked on projects for Renton, Wash.-based Wizards of the Coast since 2014. Shkipin took to Twitter himself on Friday, and acknowledged in several now-deleted tweets that he’d used AI tools to “polish” several original illustrations and concept sketches. As of Saturday morning, Shkipin had taken down his original tweets and announced that the illustrations for Glory of the Giants are “going to be reworked…”

While the physical book won’t be out until August 15, the e-book is available now from Wizards’ D&D Beyond digital storefront.
Wizards of the Coast emphasized this won’t happen again. About this particular incident, they noted “We have worked with this artist since 2014 and he’s put years of work into books we all love. While we weren’t aware of the artist’s choice to use AI in the creation process for these commissioned pieces, we have discussed with him, and he will not use AI for Wizards’ work moving forward.”

GeekWire adds that the latest D&D video game, Baldur’s Gate 3, “went into its full launch period on Tuesday. Based on metrics such as its player population on Steam, BG3 has been an immediate success, with a high of over 709,000 people playing it concurrently on Saturday afternoon.”

Source: AI-Generated Art Banned from Future ‘Dungeons & Dragons’ Books After Fan Uproar – Slashdot

Really? 1,600 tweets count as an “uproar” and are enough to swing policy into an anti-AI stance? If you actually look at the pictures, only the wolf with human feet was strange; the rest of the complaints didn’t hold up in my eyes. Welcome to life – we have AIs now and people are going to use them. They are going to save artists loads of time and let them create really, really cool stuff… like these pictures!

Come on, Wizards of the Coast, don’t be Luddites.

Redditor creates working anime QR codes using Stable Diffusion

On Tuesday, a Reddit user named “nhciao” posted a series of artistic QR codes created using the Stable Diffusion AI image-synthesis model that can still be read as functional QR codes by smartphone camera apps. The functional pieces reflect artistic styles in anime and Asian art.

QR codes, short for Quick Response codes, are two-dimensional barcodes initially designed for the automotive industry in Japan. These codes have since found wide-ranging applications in various fields including advertising, product tracking, and digital payments, thanks to their ability to store a substantial amount of data. When scanned using a smartphone or a dedicated QR code scanner, the encoded information (which can be text, a website URL, or other data) is quickly accessed and displayed.

In this case, despite the presence of intricate AI-generated designs and patterns in the images created by nhciao, we’ve found that smartphone camera apps on both iPhone and Android are still able to read these as functional QR codes. If you have trouble reading them, try backing your camera farther away from the images.

Stable Diffusion is an AI-powered image-synthesis model released last year that can generate images based on text descriptions. It can also transform existing images using a technique called “img2img.” The creator did not explain, in English, the exact technique used to create the novel codes, but based on this blog post and the title of the Reddit post (“ControlNet for QR Code”), they apparently trained several custom Stable Diffusion ControlNet models (plus LoRA fine-tunings) conditioned to produce results in different styles. They then fed existing QR codes into the Stable Diffusion image generator and used ControlNet to maintain the QR code’s data positioning while synthesizing an image around it, likely guided by a written prompt.
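The post doesn’t include code, but the general recipe is reproducible with Hugging Face’s diffusers library and a QR-conditioned ControlNet; the checkpoint names below are illustrative community models, not the custom ones nhciao trained:

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Illustrative checkpoints; nhciao trained their own ControlNet/LoRA models.
controlnet = ControlNetModel.from_pretrained(
    "monster-labs/control_v1p_sd15_qrcode_monster", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

qr = load_image("plain_qr.png")  # an ordinary QR code encoding your URL
image = pipe(
    "anime street scene at dusk, intricate, ukiyo-e style",
    image=qr,                           # ControlNet condition: preserve the QR structure
    controlnet_conditioning_scale=1.3,  # higher = more scannable, lower = more artistic
    num_inference_steps=30,
).images[0]
image.save("artistic_qr.png")
```

The conditioning scale is the main knob: too low and scanners give up, too high and the art disappears back into a plain QR code.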

Other techniques exist to make artistic-looking QR codes by manipulating the positions of dots within the codes to make meaningful patterns that can still be read. In this case, Stable Diffusion is not only controlling dot positions but also blending picture details to match the QR code.

This interesting use of Stable Diffusion is possible because of the innate error correction feature built into QR codes. This error correction capability allows a certain percentage of the QR code’s data to be restored if it’s damaged or obscured, permitting a level of modification without making the code unreadable.

In typical QR codes, this error correction feature serves to recover information if part of the code is damaged or dirty. But in nhciao’s case, it has been leveraged to blend creativity with utility. Stable Diffusion added unique artistic touches to the QR codes without compromising their functionality.
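To give the model as much slack as possible, the underlying code can be generated at the highest error-correction level (level H, which tolerates roughly 30% of the symbol being obscured), for example with the Python qrcode library:

```python
import qrcode

qr = qrcode.QRCode(
    error_correction=qrcode.constants.ERROR_CORRECT_H,  # ~30% of the code may be lost
    box_size=10,
    border=4,
)
qr.add_data("https://qrbtf.com")  # the URL the example codes point to
qr.make(fit=True)
qr.make_image(fill_color="black", back_color="white").save("plain_qr.png")
```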

An AI-generated image that still functions as a working QR code.

The codes in the examples seen here all point to a URL for qrbtf.com, a QR code-generator website likely run by nhciao, based on their previous Reddit posts. The technique could technically work with any QR code, although someone on the Reddit thread noted it may work best with shorter URLs, since less encoded data produces a sparser code that leaves more room for the artwork.

This discovery opens up new possibilities for both digital art and marketing. Ordinary black-and-white QR codes could be turned into unique pieces of art, enhancing their aesthetic appeal. The positive reaction to nhciao’s experiment on social media may spark a new era in which QR codes are not just tools of convenience but also interesting and complex works of art.

Source: Redditor creates working anime QR codes using Stable Diffusion | Ars Technica


The Supreme Court’s Warhol decision could have huge copyright implications for ‘fair use’, apparently made by blind judges

The Supreme Court has ruled that Andy Warhol infringed on the copyright of Lynn Goldsmith, the photographer who took the image he used for his famous silkscreen of the musician Prince. The justices sided with Goldsmith 7-2, rejecting the Warhol camp’s argument that his work was transformative enough to defeat any copyright claim. In the majority opinion, Justice Sonia Sotomayor noted that “Goldsmith’s original works, like those of other photographers, are entitled to copyright protection, even against famous artists.”

Goldsmith’s story goes as far back as 1984, when Vanity Fair licensed her Prince photo for use as an artist reference. The photographer received $400 for a one-time use of her photograph, which Warhol then used as the basis for a silkscreen that the magazine published. Warhol then created 15 additional works based on her photo, one of which was sold to Condé Nast for another magazine story about Prince. The Andy Warhol Foundation (AWF) — the artist had passed away by then — got $10,000 for it, while Goldsmith didn’t get anything.

Typically, the use of copyrighted material for a limited and “transformative” purpose without the copyright holder’s permission falls under “fair use.” But what passes as “transformative” use can be vague, and that vagueness has led to numerous lawsuits. In this particular case, the court has decided that adding “some new expression, meaning or message” to the photograph does not constitute “transformative use.” Sotomayor said Goldsmith’s photo and Warhol’s silkscreen serve “substantially the same purpose.”

Indeed, the decision could have far-reaching implications for fair use and could influence future cases on what constitutes transformative work, especially now that we’re living in an era of content creators who take inspiration from existing music and art. As CNN reports, Justice Elena Kagan strongly disagreed with her fellow justices, arguing that the decision would stifle creativity. She said the justices mostly cared about the commercial purpose of the work and did not consider that the photograph and the silkscreen have different “aesthetic characteristics” and did not “convey the same meaning.”

“Both Congress and the courts have long recognized that an overly stringent copyright regime actually stifles creativity by preventing artists from building on the works of others. [The decision will] impede new art and music and literature, [and it will] thwart the expression of new ideas and the attainment of new knowledge. It will make our world poorer,” she wrote.

The justices who wrote the majority opinion, however, believe that it “will not impoverish our world to require AWF to pay Goldsmith a fraction of the proceeds from its reuse of her copyrighted work. Recall, payments like these are incentives for artists to create original works in the first place.”

Source: The Supreme Court’s Warhol decision could have huge copyright implications for ‘fair use’

Well, the two pictures are above. How you can argue that they are the same thing is quite beyond me.

Need To Pick Objects Out Of Images? Segment Anything Does Exactly That

Segment Anything, recently released by Facebook Research, does something that most people who have dabbled in computer vision have found daunting: reliably figure out which pixels in an image belong to an object. Making that easier is the goal of the Segment Anything Model (SAM), just released under the Apache 2.0 license.

The online demo has a bank of examples, but also works with uploaded images.

The results look fantastic, and there’s an interactive demo available where you can play with the different ways SAM works. One can pick out objects by pointing and clicking on an image, or images can be automatically segmented. It’s frankly very impressive to see SAM make masking out the different objects in an image look so effortless. What makes this possible is machine learning, and part of that is the fact that the model behind the system has been trained on a huge dataset of high-quality images and masks, making it very effective at what it does.
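For those who’d rather script it than click around the demo, the released segment_anything package exposes the same point-prompt workflow; here is a minimal sketch, assuming you’ve downloaded one of Meta’s published checkpoints:

```python
import cv2
import numpy as np
from segment_anything import SamPredictor, sam_model_registry

# Load the largest (ViT-H) variant from a downloaded checkpoint file.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

# SAM expects RGB images; OpenCV loads BGR, so convert.
image = cv2.cvtColor(cv2.imread("photo.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# "Click" on a pixel inside the object you want (label 1 = foreground point).
masks, scores, _ = predictor.predict(
    point_coords=np.array([[500, 375]]),
    point_labels=np.array([1]),
    multimask_output=True,  # returns three candidate masks at different granularities
)
best_mask = masks[np.argmax(scores)]  # boolean array, same height/width as the image
```

Segmenting everything in an image at once uses the package’s SamAutomaticMaskGenerator class instead of point prompts.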

Once an image is segmented, those masks can be used to interface with other systems like object detection (which identifies and labels what an object is) and other computer vision applications. Such systems work more robustly if they already know where to look, after all. This blog post from Meta AI goes into some additional detail about what’s possible with SAM, and fuller details are in the research paper.

Systems like this rely on quality datasets. Of course, nothing beats a great collection of real-world data but we’ve also seen that it’s possible to machine-generate data that never actually existed, and get useful results.

Source: Need To Pick Objects Out Of Images? Segment Anything Does Exactly That | Hackaday

Gen-2 by Runway text to Video AI

No lights. No camera. All action. Realistically and consistently synthesize new videos. Either by applying the composition and style of an image or text prompt to the structure of a source video (Video to Video). Or, using nothing but words (Text to Video). It’s like filming something new, without filming anything at all.

Visit the page for examples

Source: Gen-2 by Runway

Runway also co-developed Stable Diffusion, the image generator.