In today's fast-moving technology landscape, generative AI has emerged as one of the most talked-about topics, reshaping tasks that range from generating images based on text prompts to producing video, audio, and 3D content.
Among the notable players in this field is Runway, a generative AI tool specialized in content creation. It stands out by producing audio, images, videos, and 3D structures from a simple prompt, and it is free to get started with.
Runway can also convert any image, including those generated by models like Midjourney, into videos using features such as Motion Brush.
The latest addition to its offerings is Runway Gen-2, a multimodal AI system capable of generating videos from text prompts, images, or existing video clips. Adding to the convenience, there is even an iOS app for Runway, allowing users to create multimedia content on their smartphones.
With Runway Gen-2, users can create new videos from straightforward text prompts. Free account holders can generate four-second videos, which can be downloaded and shared on any platform, though they carry a watermark.
Each second of video generation consumes five credits, and free users receive 500 credits. For those seeking enhanced tools and capabilities, a subscription plan starts at $12 per month and offers more customization options for the output.
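Using the figures above, the free tier's arithmetic works out as follows (a quick sketch; constant names are illustrative):

```python
# Runway Gen-2 credit math, using the figures quoted in the text.
CREDITS_PER_SECOND = 5
FREE_CREDITS = 500
CLIP_SECONDS = 4  # maximum clip length on the free tier

cost_per_clip = CREDITS_PER_SECOND * CLIP_SECONDS
clips_on_free_tier = FREE_CREDITS // cost_per_clip

print(cost_per_clip)       # 20 credits per four-second clip
print(clips_on_free_tier)  # 25 clips before the free credits run out
```

In other words, the 500 free credits cover roughly 25 four-second clips.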
In a parallel development, Stability AI has introduced Stable Video Diffusion, a cutting-edge AI research tool that transforms any static image into a short video. This open-weight preview of two AI models employs image-to-video techniques and can operate locally on machines with Nvidia GPUs.
Stability AI gained attention last year with Stable Diffusion, an open-weight image synthesis model that inspired a community of enthusiasts to build their own adaptations on the technology.
Now, Stability aims to replicate this success in AI video synthesis. Stable Video Diffusion comprises two models, “SVD” and “SVD-XT,” which produce 14 and 25 frames of video, respectively. The models can operate at varying speeds and output short MP4 video clips at 576×1024 resolution, typically lasting 2-4 seconds.
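For readers who want to try the models locally, one common route is Hugging Face's open-source `diffusers` library. The sketch below is a minimal example, assuming the SVD-XT weights published on Stability AI's Hugging Face page and an Nvidia GPU with sufficient VRAM; the input filename is a placeholder.

```python
# Minimal local run of Stable Video Diffusion (SVD-XT) via `diffusers`.
# Requires an Nvidia GPU and downloads the model weights on first use.
import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import load_image, export_to_video

# "SVD-XT" generates 25 frames per clip (plain "SVD" generates 14).
pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt",
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe.to("cuda")

# The model works at roughly 1024x576; resize the still image first.
image = load_image("input.png").resize((1024, 576))

# decode_chunk_size trades GPU memory for decoding speed.
frames = pipe(image, num_frames=25, decode_chunk_size=8).frames[0]

# 25 frames at 7 fps gives a clip in the 2-4 second range cited above.
export_to_video(frames, "output.mp4", fps=7)
```

The pipeline is image-to-video only: it animates a supplied still rather than accepting a text prompt.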
Stability underscores that the model is still in its early stages and intended for research purposes only.
While Stability actively updates the models and seeks feedback on safety and quality, the company emphasizes that, at this stage, they are unsuitable for real-world or commercial applications. The insights and feedback received will be crucial in refining the models for future releases.