Runway, one of the most popular generative AI companies and maker of text-to-video tools, recently announced a new round of funding, adding $141 million to a Series C led by Google, Nvidia, Salesforce Ventures, and other investors.
The New York-based company said in a press statement that it would use the new funding to “further scale in-house research efforts, expand its world-class team, and continue to bring state-of-the-art multi-modal AI systems to market, while building groundbreaking and intuitive product experiences.”
Runway started with a mission to build AI for creatives
On March 1, VentureBeat spoke with Runway CEO and cofounder Cristobal Valenzuela about the release of the company’s Gen-2 tool, which is now widely available, and about the company’s founding four years ago to build AI tools specifically designed for creatives and artists.
“Since then, we’ve been pushing the boundaries of the field and building products on top of that research,” he explained, calling Gen-2 a “big step forward” in the company’s efforts to convert text into video. He cited the company’s millions of users, who range from award-winning film directors to production and advertising businesses to the smallest creators and consumers.
“We’ve built an incredibly tight community that has helped us understand how creatives are actually using generative AI in their work today,” the founder explained, pointing to Runway’s work on the Oscar-winning film Everything Everywhere All at Once, in which one of the editors used Runway to help create effects for a few scenes.
“So we have a lot of folks who have helped us understand how these models are going to be used in the context of storytelling,” he said. “We’re heading to a world where most of the content and media and videos that you consume will be generated, which requires a different type of software and tools to allow you to generate those kinds of stories.”
Runway’s popularity grows as artists push back against generative AI
Runway’s efforts come at a moment when artists are pushing back against generative AI. For instance, thousands of screenwriters have been on strike for more than two months, halting numerous television and film productions as they seek restrictions on the use of AI.
Additionally, VentureBeat recently reported that Adobe Stock creators are unhappy with Adobe’s Firefly generative AI model. According to creators, some of whom VentureBeat interviewed, Adobe trained Firefly on their stock images without prior notice or consent.
There are numerous lawsuits currently pending in the generative AI space. Today, for instance, plaintiffs are suing OpenAI, claiming it used “stolen data” to “train and develop” its products, including ChatGPT-3.5, ChatGPT-4, VALL-E, and DALL-E.
Runway’s three cofounders met at art school
“We do a lot of listening and are part of the community,” said Valenzuela. He cited Runway’s AI Film Festival in March as an example of facilitating discussion and understanding how these technologies can be used by professional filmmakers and storytellers.
“I do think there’s confusion around how these algorithms are already being used in creative environments,” he added. “There’s the misconception about … you’re letting the system do everything automatically and that you don’t have any input. We don’t believe that’s the case. We think of these tools as instruments for human enhancement. They’re instruments for increasing creativity. They’re not meant to replace creativity.”
Valenzuela said he came from an artistic background. “I went to art school and I started Runway while I was an artist,” he explained. “These are tools I wanted to use.”
Originally from Chile, Valenzuela came to New York City to attend New York University’s Tisch School of the Arts, where he met his cofounders Anastasis Germanidis and Alejandro Matamala, but he quickly realized that his work was better suited to building tools.
“My art was toolmaking; I was eager to see artists using the tools I was making,” he said. “So I went deep into the rabbit hole of neural networks — the idea of computational creativity.”
Discussing the issues of fair use, copyright, and job replacement raised by artists, Valenzuela said the world is “very early” in understanding the full implications of generative AI. “We’re really trying to make sure we can drive this conversation to a positive end,” he said. “I believe listening is the most crucial element. I believe being open to change, being able to adapt, and being aware of how things are going to be used are the main drivers of the way we think about our product. I’m not sure if I can be a spokesperson for other companies or how other companies think about the market. For us, however, we’ve got an obligation to our customers.”