Scaling Our In-House Research Infrastructure with AWS
Through this collaboration, we have brought all of our model development and training in-house, accelerating both the training and the deployment of new models and products. These efforts are in pursuit of bringing our users best-in-class experiences across our ever-expanding Generative Suite and making professional multimedia creation more accessible.
Runway’s Gen-2, a multimodal AI system that can generate novel videos from text, images, or video clips, was trained on AWS in collaboration with NVIDIA. It continues our work on multimodal generative models and represents a major advance over state-of-the-art AI systems for video generation.
“The pioneering generative models Runway is testing, training, and scaling on AWS are setting the standard for the future of multimodal content,” said Matt Garman, Senior Vice President, AWS Sales, Marketing, and Global Services. “We are excited to power Runway’s innovations and collaborate to forever change how creatives harness AI to realize their vision.”
Gen-2 can realistically and consistently synthesize new videos, either by applying the composition and style of an image or text prompt to the structure of a source video (Video to Video) or by using nothing but words (Text to Video). It’s like filming something new, without filming anything at all. AWS was instrumental in the development and training of this groundbreaking video generation model, and we look forward to continuing to pioneer what’s possible with Generative AI together as we scale training and capacity over the course of this partnership.