Researcher, Evals

Cartesia

San Francisco, CA, USA
USD 220k-350k / year + Equity
Posted on Oct 21, 2025

Location

HQ - San Francisco, CA

Employment Type

Full time

Location Type

On-site

Department

Research

Compensation

  • $220K – $350K • Offers Equity

About Cartesia

Our mission is to build the next generation of AI: ubiquitous, interactive intelligence that runs wherever you are. Today, not even the best models can continuously process and reason over a year-long stream of audio, video and text—1B text tokens, 10B audio tokens and 1T video tokens—let alone do this on-device.

We're pioneering the model architectures that will make this possible. Our founding team met as PhDs at the Stanford AI Lab, where we invented State Space Models (SSMs), a new primitive for training efficient, large-scale foundation models. Our team combines deep expertise in model innovation and systems engineering with a design-minded product engineering team to build and ship cutting-edge models and experiences.

We're funded by leading investors at Index Ventures and Lightspeed Venture Partners, along with Factory, Conviction, A Star, General Catalyst, SV Angel, Databricks, and others. We're fortunate to have the support of many amazing advisors and 90+ angels across industries, including the world's foremost experts in AI.

About the Role

The New Horizons Evaluations team is reimagining how we measure progress in interactive machine intelligence. As Evaluations Lead, you will design evaluation frameworks that capture not just what models know, but how they reason, remember, and interact over time. You'll work at the intersection of research, product, and infrastructure to develop the metrics, systems, and studies that define what "intelligence" means in the next generation of AI. This role is ideal for someone who combines scientific rigor with technical execution, and who is deeply curious about how people use, and want to use, intelligent systems. Your work will shape how Cartesia builds and evaluates frontier models, ensuring that progress isn't measured solely by static benchmarks but by deeper qualities: understanding, naturalness, and adaptability in real-world interaction.

Your Impact

  • Identify and define key model capabilities and behaviors that matter for next-generation model evals

  • Develop and implement new evaluation pipelines with robust statistical analysis and clear reporting

  • Partner closely with model training and research teams to embed evaluation systems directly into model development loops

  • Prototype new user studies and behavioral experiments to ground evaluations in real-world use

What You Bring

  • Experience designing or implementing evaluation frameworks for generative models (audio, text, or multimodal)

  • Strong technical and analytical skills, with the ability to take open-ended research ideas and translate them into production-ready systems

  • Creativity in defining novel quantitative metrics for subjective or behavioral qualities

  • Excitement for building evaluation systems that bridge research and real-world use

  • Curiosity and rigor in equal measure, motivated by discovering how to measure meaningful progress in intelligent behavior

What Sets You Apart

  • Understanding of model alignment concepts and evaluation approaches

Our Culture

🏢 We’re an in-person team based out of San Francisco. We love being in the office, hanging out together and learning from each other every day.

🚢 We ship fast. All of our work is novel and cutting edge, and execution speed is paramount. We have a high bar, and we don’t sacrifice quality or design along the way.

🤝 We support each other. We have an open and inclusive culture that’s focused on giving everyone the resources they need to succeed.
