This man used AI to write and illustrate a children’s book in one weekend. He wasn’t prepared for the backlash.

  • Ammaar Reshi wrote and illustrated a children’s book in 72 hours using ChatGPT and Midjourney.
  • The book went viral on Twitter after it was met with intense backlash from artists.
  • Reshi said he respected the artists’ concerns but felt some of the anger was misdirected.

Ammaar Reshi was reading a bedtime story to his friend’s daughter when he decided he wanted to write his own.

Reshi, a product-design manager at a financial-tech company based in San Francisco, told Insider he had little experience in illustration or creative writing, so he turned to AI tools.

In December he used OpenAI’s new chatbot, ChatGPT, to write “Alice and Sparkle,” a story about a girl named Alice who wants to learn about the world of tech, and her robot friend, Sparkle. He then used Midjourney, an AI art generator, to illustrate it.

Just 72 hours later, Reshi self-published his book on Amazon’s digital bookstore. The following day, he had the paperback in his hands, printed for free via Amazon’s Kindle Direct Publishing (KDP) service.

Front cover of “Alice and Sparkle,” Reshi’s AI-generated children’s book.
“Alice and Sparkle” was meant to be a gift for his friends’ kids. (Ammaar Reshi)

He said he paid nothing to create and publish the book, though he was already paying for a $30-a-month Midjourney subscription.

Impressed with the speed and results of his project, Reshi shared the experience in a Twitter thread that attracted more than 2,000 comments and 5,800 retweets.

Reshi said he initially received positive feedback from users praising his creativity. But the next day, the responses were filled with vitriol.

“There was this incredibly passionate response,” Reshi said. “At 4 a.m. I was getting woken up by my phone blowing up every two minutes with a new tweet saying things like, ‘You’re scum’ and ‘We hate you.’”

Reshi said he was shocked by the intensity of the responses for what was supposed to be a gift for the children of some friends. It was only when he started reading through them that he discovered he had landed himself in the middle of a much larger debate.

Artists accused him of theft

Reshi’s book touched a nerve with some artists who argue that AI art generators are stealing their work.

Some artists claim their art has been used to train AI image generators like Midjourney without their permission. Users can enter artists’ names as prompts to generate art in their style.

Lensa AI, a photo-editing tool, went viral on social media last year after launching an update that used AI to transform users’ selfies into works of art, leading artists to highlight their concerns about AI programs taking inspiration from their work without permission or payment.

“I had not read up on the issues,” Reshi said. “I realized that Lensa had actually caused this whole thing with that being a very mainstream app. It had spread that debate, and I was just getting a ton of hate for it.”

“I was just shocked, and honestly I didn’t really know how to deal with it,” he said.

Among the nasty messages, Reshi said he found people with reasonable and valid concerns.

“Those are the people I wanted to engage with,” he said. “I wanted a different perspective. I think it’s very easy to be caught up in your bubble in San Francisco and Silicon Valley, where you think this is making leaps, but I wanted to hear from people who thought otherwise.”

After learning more, he added to his Twitter thread saying that artists should be involved in the creation of AI image generators and that their “talent, skill, hard work to get there needs to be respected.”

He said he thinks some of the hate was misdirected at his one-off project, given that Midjourney allows users to “generate as much art as they want.”

Reshi’s book was briefly removed from Amazon — he said Amazon paused its sales from January 6 to January 14, citing “suspicious review activity,” which he attributed to the volume of both five- and one-star reviews. He had sold 841 copies before it was removed.

Midjourney’s founder, David Holz, told Insider: “Very few images made on our service are used commercially. It’s almost entirely for personal use.”

He said that data for all AI systems are “sourced from broadly spidering the internet,” and most of the data in Midjourney’s model are “just photos.”

A creative process

Reshi said the project was never about claiming authorship over the book.

“I wouldn’t even call myself the author,” he said. “The AI is essentially the ghostwriter, and the other AI is the illustrator.”

But he did think the process was a creative one. He said he spent hours tweaking the prompts in Midjourney to try to achieve consistent illustrations.

Although he managed to generate a consistent image of his heroine, Alice, to appear throughout the book, he wasn’t able to do the same for her robot friend, and had to use a picture of a different robot each time Sparkle appeared.

“It was impossible to get Sparkle the robot to look the same,” he said. “It got to a point where I had to include a line in the book that says Sparkle can turn into all kinds of robot shapes.”

A page from “Alice and Sparkle,” Reshi’s AI-generated children’s book.
Reshi’s children’s book stirred up anger on Twitter. (Ammaar Reshi)

Some people also attacked the quality of the book’s writing and illustrations.

“The writing is stiff and has no voice whatsoever,” one Amazon reviewer said. “And the art — wow — so bad it hurts. Tangents all over the place, strange fingers on every page, and inconsistencies to the point where it feels like these images are barely a step above random.”

Reshi said he would be hesitant to put out an illustrated book again, but he would like to try other projects with AI.

“I’d use ChatGPT for instance,” he said, noting that there seem to be fewer concerns around content ownership than with AI image generators.

Reshi added that the goal of the project was always to give the book to his friends’ two children, both of whom liked it.

“It worked with the people I intended, which was great,” he said.

Read the original article on Business Insider

Source: This man used AI to write and illustrate a children’s book in one weekend. He wasn’t prepared for the backlash.

High-powered lasers can be used to steer lightning strikes

[…]

European researchers have successfully tested a system that uses terawatt-level laser pulses to steer lightning toward a 26-foot rod. Unlike a conventional lightning rod, the system isn’t limited by its physical height and can cover much wider areas — in this case, 590 feet — while penetrating clouds and fog.

The design ionizes nitrogen and oxygen molecules, releasing electrons and creating a plasma that conducts electricity. Because the laser fires a rapid 1,000 pulses per second, it’s considerably more likely to intercept lightning as it forms. In the test, conducted between June and September 2021, lightning followed the beam for nearly 197 feet before hitting the rod.

[…]

The University of Glasgow’s Matteo Clerici, who didn’t work on the project, noted to The Journal that the laser in the experiment costs about $2.17 billion. The researchers also plan to significantly extend the range, to the point where a 33-foot rod would have an effective coverage of 1,640 feet.

[…]

Source: High-powered lasers can be used to steer lightning strikes | Engadget

Quantum Dots / NanoLED Is the Next-Generation Display Technology

[…] Nanosys, a company whose quantum dot technology is in millions of TVs, offered to show me a top-secret prototype of a next-generation display. Not just any next-gen display, but one I’ve been writing about for years and which has the potential to dethrone OLED as the king of displays.

[…]

Electroluminescent quantum dots. These are even more advanced than the quantum dots found in the TVs of today, and could possibly replace LCD and OLED for phones and TVs. They promise improved picture quality, energy savings and manufacturing efficiency. Their simpler structure makes these displays theoretically so easy to produce that they could usher in a sci-fi world of inexpensive screens on everything from eyeglasses to windscreens and windows.

[…]

Quantum dots are tiny particles that, when supplied with energy, emit specific wavelengths of light. Quantum dots of different sizes emit different wavelengths. Or to put it another way, some dots emit red light, others green, and still others blue.

[…]

For the last few years, quantum dots have been used by TV manufacturers to boost the brightness and color of LCD TVs. The “Q” in QLED TV stands for “quantum.”

[…]

More recently, Samsung combined quantum dots with the incredible contrast ratios of OLED. Their (and partner Sony’s) QD-OLED TVs have some of the best image quality of any TV ever.

[…]

The quantum dots used in display tech up to this point are what’s called “photoluminescent.” They absorb light, then emit light.

[…]

The prototype I saw was completely different. No traditional LEDs and no OLED. Instead of using light to excite quantum dots into emitting light, it uses electricity. Nothing but quantum dots. Electroluminescent, aka direct-view, quantum dots.

[…]

Theoretically, this will mean thinner, more energy-efficient displays that are also easier — and therefore cheaper — to manufacture.

[…]

Nanosys calls this direct-view, electroluminescent quantum dot tech “nanoLED.”

[…]

Because the display structure is simpler, QD-based displays can be incorporated into a wider variety of situations — or, more specifically, onto a wider variety of surfaces. Essentially, you can print an entire QD display onto a surface without the heat required by other “printable” tech.

What does this mean? Just about any flat or curved surface could be a screen

[…]

For instance, you could incorporate a screen onto the windshield of a car for a more elaborate, high-resolution, easy-to-see, heads-up display. Speed and navigation directions for sure, but how about augmented reality for safer nighttime driving with QD-display-enhanced lane markers and street signs?

[…]

AR glasses have been a thing, but they’re bulky, low resolution and, to be perfectly honest, lame. A QD display could be printed on the lenses themselves, requiring less elaborate electronics in the frames.

[…]

I think an obvious early use, despite how annoying it could be, would be bus or subway windows. These will initially be pitched by cities as a way to show people important info, but inevitably they’ll be used for advertising. That’s certainly not a knock against the tech, just how things work in the world.

[…]

5-10 years from now we’ll almost certainly have options for QD displays in our phones, probably in our living rooms, and possibly on our windshields and windows

[…]

Source: This Next-Generation Display Technology Is Going to Change the World – CNET