Tavus Raises $40M to Build the Next Frontier of Intelligence: Human Computing

Tavus is bringing sci-fi to life with PALs and the models that power them—emotionally intelligent AI humans that can see, hear, act, and even look like us.

Today, Tavus announced $40 million in Series B funding to build the future of human computing, led by CRV with participation from Scale Venture Partners, Sequoia Capital, Y Combinator, HubSpot Ventures, and Flex Capital. This vision takes shape with the launch of PALs: AI humans built by Tavus with emotional intelligence, agentic capabilities, and true multimodality with text, voice, and face-to-face.

This press release features multimedia. View the full release here: https://www.businesswire.com/news/home/20251111507298/en/

Tavus raises $40M Series B led by CRV to build the future of human computing.

Human-computer interfaces haven't fundamentally evolved since the 1980s. We moved from command-line interfaces to graphical user interfaces—from typing commands to clicking buttons. Today's AI chatbots feel like a return to the command-line era: text-based interfaces where humans must spell out every action and instruction. For decades, science fiction promised us something better—Star Trek, Her—computers that could not only see and hear us but also look like us, respond with emotion, and feel alive. Tavus is fulfilling this promise by creating AI that makes conversations with computers feel like second nature, just like talking to a friend.

“We've spent decades forcing humans to learn to speak the language of machines,” said Hassaan Raza, CEO of Tavus. “With PALs, we're finally teaching machines to think like humans—to see, hear, respond, and look like we do. To understand emotion, context, and all the messy, beautiful stuff that makes us who we are. It's not about more intelligent AI, it’s about AI that actually meets you where you are.”

Meet the PALs

Tavus launched PALs (Personal Affective Links): Agentic AI humans that see, hear, evolve, remember, and act, just like humans do. Powered by foundational models for rendering, conversational intelligence, and perception, PALs represent the next era of human computing.

PALs are built to communicate the way people do. They maintain a lifelike visual presence, read expressions and gestures, and understand emotion and timing in real time. They remember context, pick up on subtle social cues, and move fluidly between video, voice, and text, so interaction always feels natural. And like humans, they have agency—taking initiative, reaching out, and acting on your behalf to manage calendars, send emails, and follow through without supervision.
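
To make the idea of agency concrete, the sketch below is a purely illustrative Python example (not Tavus code and not its API) of how detected intents, such as a calendar change or an email follow-up, could be dispatched to concrete actions without step-by-step instructions from the user; every function, intent name, and tool in it is a hypothetical stand-in.

# Purely illustrative sketch, not Tavus code: a minimal tool-dispatch loop showing
# how an agent could act on detected intents (calendar changes, email follow-ups)
# without being told each step. All names here are hypothetical.

from typing import Callable, Dict


def move_meeting(args: Dict[str, str]) -> str:
    # Placeholder for a real calendar integration.
    return f"Meeting '{args['title']}' moved to {args['new_time']}."


def send_email(args: Dict[str, str]) -> str:
    # Placeholder for a real email integration.
    return f"Email sent to {args['to']} with subject '{args['subject']}'."


# Map intents a conversational model might detect to concrete actions.
TOOLS: Dict[str, Callable[[Dict[str, str]], str]] = {
    "calendar_change": move_meeting,
    "email_followup": send_email,
}


def act_on_intent(intent: str, args: Dict[str, str]) -> str:
    """Dispatch a detected intent to the matching tool, or ask for clarification."""
    tool = TOOLS.get(intent)
    if tool is None:
        return "I'm not sure how to help with that yet. Could you clarify?"
    return tool(args)


if __name__ == "__main__":
    print(act_on_intent("calendar_change", {"title": "Design review", "new_time": "4 pm"}))
    print(act_on_intent("email_followup", {"to": "alex@example.com", "subject": "Recap"}))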

For years, computers made us speak their language. PALs finally speak ours, forming genuine connections by learning individual habits, adapting to personality, and improving with every interaction.

The Models Powering PALs

Behind every PAL is a suite of foundational models that teach machines to see, feel, and act the way people do. These proprietary, state-of-the-art systems were built entirely in-house by the Tavus research team to understand and simulate human behavior with unprecedented depth. Each model sets a new standard for realism and intelligence, expanding the boundary of what “human-like” AI can become.

  • Phoenix-4 — A state-of-the-art rendering model that drives lifelike expression, head-pose control, and emotion generation at conversational latency.
  • Sparrow-1 — An audio understanding model that combines conversational intelligence with audio- and semantics-based emotional understanding to read timing, tone, and intent, adapting in real time to know not just what to say, but when to say it.
  • Raven-1 — A contextual perception model that interprets context, people, environments, emotions, expressions, and gestures, giving PALs a sense of presence and enabling them to see and understand like humans do.

These models, paired with a state-of-the-art orchestration and memory-management system, bring face-to-face video, speech, text, and agentic capabilities to life, enabling the world’s first AI humans. What makes them powerful isn’t just how they look or talk; it’s that they understand, remember, and act, just as a human would. This is the beginning of computers that finally feel alive.
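
As a purely illustrative sketch, and not a description of Tavus's actual systems, the Python example below shows one way a single conversation turn could flow through perception, audio understanding, orchestration with memory, and rendering; the data structures and outputs are hypothetical stand-ins for the roles Raven-1, Sparrow-1, and Phoenix-4 play.

# Purely illustrative sketch, not Tavus code: one way a conversation turn could
# flow through perception, audio understanding, memory, and rendering stages.
# Everything below is a hypothetical stand-in for the roles that Raven-1
# (perception), Sparrow-1 (audio understanding), and Phoenix-4 (rendering) play.

from dataclasses import dataclass, field
from typing import List


@dataclass
class Memory:
    """Tiny stand-in for a conversation memory store."""
    facts: List[str] = field(default_factory=list)

    def recall(self) -> str:
        # Keep only the most recent context for brevity.
        return " ".join(self.facts[-5:])

    def store(self, fact: str) -> None:
        self.facts.append(fact)


def conversation_turn(memory: Memory) -> str:
    """Run one illustrative turn: perceive, understand, decide, render."""
    # 1. Perception (Raven-1-like role): read the scene, expressions, and gestures.
    scene = {"user_emotion": "curious", "gesture": "leaning in"}

    # 2. Audio understanding (Sparrow-1-like role): words, tone, intent, and timing.
    utterance = {"text": "Can you move my 3 pm?", "intent": "calendar_change",
                 "good_moment_to_respond": True}

    # 3. Orchestration and memory: combine perception, utterance, and remembered
    #    context into a response plan (text plus an emotional target for the face).
    _context = memory.recall()
    reply = {"text": "Sure, I'll move it to 4 pm.", "emotion": "warm"}
    memory.store(f"User asked: {utterance['text']}")

    # 4. Rendering (Phoenix-4-like role): turn the plan into face-to-face video.
    if scene["user_emotion"] and utterance["good_moment_to_respond"]:
        return f"<rendered reply: '{reply['text']}' with a {reply['emotion']} expression>"
    return "<listening>"


if __name__ == "__main__":
    print(conversation_turn(Memory()))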

Get started for free at https://www.tavus.io/

About Tavus

Tavus is a San Francisco-based AI research lab pioneering human computing: the art of teaching machines to be human. Backed by CRV, Scale Venture Partners, Sequoia Capital, Y Combinator, HubSpot Ventures, and Flex Capital, Tavus builds the foundational models behind AI humans, teaching machines to see, hear, respond, and act like people do. The company’s research team brings experience from leading universities and top AI labs, led by researchers specializing in rendering, perception, and affective computing, including Professor Ioannis Patras and Dr. Maja Pantic. Over one hundred thousand developers and enterprises use Tavus to deploy AI for recruiting, sales, education, and customer service.
