
Twelve Labs Raises $12 Million Seed Extension and Teams With Oracle to Bring Foundation AI Model for Video Understanding to Market

Twelve Labs, the video search and understanding company, today announced that amid significant product advances, the company has closed a $12 million seed extension round, following its initial seed fundraise last spring. This round brings the company’s total seed funding to $17 million. Radical Ventures led the latest round, with participation from existing investors including early seed lead Index Ventures. New investors include Jeffrey Katzenberg’s WndrCo and Spring Ventures, as well as notable angels Jay Simons (General Partner at Bond and former President of Atlassian), Nicolas Dessaigne (Founder and former CEO of Algolia), and Lukas Biewald (Founder and CEO of Weights & Biases).

Rob Toews, Partner at Radical Ventures, has joined the Twelve Labs board of directors.

“Twelve Labs is opening up entirely new possibilities for the future of video,” said Toews. “The team’s deep commitment to research and ongoing innovation is driving an entire industry forward to efficiently access and fully leverage the 80% of the world’s data that currently resides in video form. Twelve Labs is positioned to build the world’s leading foundation model for videos, forever changing how we interact with this medium.”

In addition to adding product features and conducting ongoing research, Twelve Labs will use the new funds to continue building the world’s first commercial, multi-billion-parameter foundation model for video understanding. This model will tackle numerous video-related tasks beyond semantic search, such as video chapterization and summary generation. Just as large language models trained on massive text corpora transformed how humans interact with text, Twelve Labs aspires to fundamentally change how people interact with video data.

Twelve Labs’ platform is a first-of-its-kind, cloud-native suite of APIs that enables comprehensive video search. The platform views and understands the content of a video, including both visual (action, movement, objects, text, etc.) and audio (non-verbal and conversational) context, then transforms that content into a powerful intermediary data format (vectors), so that when a user types a search query, the Twelve Labs AI automatically returns the most relevant scenes across hundreds of thousands of hours of video.
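To illustrate the vector-based retrieval flow described above, here is a minimal Python sketch. The embed_scene and embed_query functions, the scene labels, and the 512-dimension vector size are hypothetical placeholders for illustration only; they are not Twelve Labs’ API, which would supply the actual embeddings.

```python
# Minimal sketch of vector-based video search: scenes and queries are mapped
# into a shared vector space, and search ranks scenes by cosine similarity.
# The embed_* functions below are hypothetical stand-ins, not Twelve Labs' API.
import numpy as np

def embed_scene(scene_description: str) -> np.ndarray:
    """Hypothetical: map a video scene to a fixed-size unit vector."""
    rng = np.random.default_rng(abs(hash(scene_description)) % (2**32))
    v = rng.standard_normal(512)
    return v / np.linalg.norm(v)

def embed_query(query: str) -> np.ndarray:
    """Hypothetical: map a text query into the same vector space."""
    return embed_scene(query)

# Index: one vector per scene, built once when videos are ingested.
scenes = ["goal celebration", "press conference", "halftime analysis"]
index = np.stack([embed_scene(s) for s in scenes])

# Search: embed the query and rank scenes by cosine similarity.
query_vec = embed_query("player scoring a goal")
scores = index @ query_vec          # dot product = cosine similarity (unit vectors)
for i in scores.argsort()[::-1]:
    print(f"{scores[i]:.3f}  {scenes[i]}")
```

In a production system the index would hold vectors for many hours of video and the ranking would be served by an approximate nearest-neighbor search rather than a brute-force dot product.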

Powering the Future of Video Search and Understanding

Since launching to a select group of users last spring, Twelve Labs has secured closed beta customers that have built on top of its APIs across enterprise knowledge, content moderation, media analytics, contextual advertising, e-learning, and more. Through these early use cases, Twelve Labs has demonstrated that, unlike many AI models that can only be applied in narrow, prescriptive use cases, its AI was built with general context understanding, allowing its models to be applied horizontally with far less effort and time.

Over the last six months, Twelve Labs has more than doubled its video search accuracy, while further reducing latency. Today, search results are delivered in less than one second. This increased performance builds on top of Twelve Labs’ first place finish in the ICCV 2021 VALUE Challenge Video Retrieval Track.

To help ensure that speed and quality are not compromised, Twelve Labs has chosen Oracle to provide the AI cloud infrastructure capacity required to bring its foundation AI model to market. Oracle customized its offerings, and Twelve Labs is training on hundreds of NVIDIA A100 Tensor Core GPUs using Oracle Cloud Infrastructure (OCI)’s cluster networking and storage. OCI’s accelerated infrastructure delivers the required performance and scalability, with cluster networking that offers nearly 1,600 Gb/s of bandwidth and latency as low as 1.5 microseconds. Working with Oracle, Twelve Labs will be able to rapidly scale and become the de facto solution for video understanding.
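As a rough illustration of the kind of multi-GPU cluster training this infrastructure supports, here is a minimal PyTorch data-parallel sketch. The model, batch, and hyperparameters are toy placeholders chosen for brevity; this is not Twelve Labs’ or Oracle’s actual training code.

```python
# Minimal sketch of data-parallel training across a GPU cluster, e.g. launched
# per node with: torchrun --nproc_per_node=8 train.py
# The model and data are toy placeholders, not Twelve Labs' video pipeline.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE for each process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Placeholder model standing in for a large video foundation model.
    model = torch.nn.Linear(512, 512).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for step in range(10):
        # Placeholder batch; a real pipeline would stream video features.
        x = torch.randn(32, 512, device=local_rank)
        loss = model(x).pow(2).mean()
        optimizer.zero_grad()
        loss.backward()   # gradients are all-reduced over the cluster network
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

The gradient all-reduce in the backward pass is where high-bandwidth, low-latency cluster networking of the kind OCI provides matters most, since it determines how well training scales across hundreds of GPUs.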

“This is a very exciting time for Twelve Labs and the entire AI industry. We are deeply appreciative of the strong support from our early customers and investors, as well as from our cloud partner, Oracle,” said Jae Lee, Twelve Labs’ co-founder and CEO. “With our powerful foundation model for video, we are well positioned to advance video understanding and to unlock the full potential of video and creativity.”

Users can sign up for the Twelve Labs waitlist here. Its public beta will be available later next year.

About Twelve Labs

Twelve Labs makes video instantly, intelligently, and easily searchable. Twelve Labs’ state-of-the-art video understanding technology enables the accurate and timely discovery of valuable moments within an organization’s vast sea of videos so that users can do and learn more. The company is backed by leading venture capitalists, AI luminaries, and founders of cutting-edge technology companies. It is headquartered in San Francisco, with an APAC office in Seoul. Learn more at twelvelabs.io.
