The AI startup behind Stable Diffusion is now testing generative video

Stable Diffusion’s generative art can now be animated, developer Stability AI announced. The company has released a new product called Stable Video Diffusion into a research preview, allowing users to create video from a single image. “This state-of-the-art generative AI video model represents a significant step in our journey toward creating models for everyone of every type,” the company wrote.

The new tool has been released in the form of two image-to-video models, each capable of generating videos 14 to 25 frames long at speeds between 3 and 30 frames per second, at 576 × 1024 resolution.

[…]

Stable Video Diffusion is available only for research purposes at this point, not real-world or commercial applications. Potential users can sign up to get on a waitlist for access to an “upcoming web experience featuring a text-to-video interface,” Stability AI wrote. The tool will showcase potential applications in sectors including advertising, education, entertainment and more.

[…]

It has some limitations, the company wrote: it generates relatively short videos (less than four seconds), lacks perfect photorealism, cannot render camera motion other than slow pans, offers no text control, cannot generate legible text, and may not render people and faces properly.

The tool was trained on a dataset of millions of videos and then fine-tuned on a smaller set, though Stability AI said only that the training video was publicly available for research purposes.

[…]

Source: The AI startup behind Stable Diffusion is now testing generative video

Robin Edgar

Organisational Structures | Technology and Science | Military, IT and Lifestyle consultancy | Social, Broadcast & Cross Media | Flying aircraft

 robin@edgarbv.com  https://www.edgarbv.com