Inference Engineer

Cartesia

Software Engineering
Posted on Dec 14, 2024

About Cartesia

Our mission is to build the next generation of AI: ubiquitous, interactive intelligence that runs wherever you are. Today, not even the best models can continuously process and reason over a year-long stream of audio, video and text—1B text tokens, 10B audio tokens and 1T video tokens—let alone do this on-device.

We're pioneering the model architectures that will make this possible. Our founding team met as PhDs at the Stanford AI Lab, where we invented State Space Models (SSMs), a new primitive for training efficient, large-scale foundation models. Our team combines deep expertise in model innovation and systems engineering with a design-minded product engineering team to build and ship cutting-edge models and experiences.

We're funded by leading investors at Index Ventures and Lightspeed Venture Partners, along with Factory, Conviction, A Star, General Catalyst, SV Angel, Databricks, and others. We're fortunate to have the support of many amazing advisors and 90+ angels across many industries, including the world's foremost experts in AI.

Role responsibilities

We're hiring an Inference Engineer to advance our mission of building real-time multimodal intelligence. In this role, you'll:

  • Design and build a low-latency, scalable, and reliable inference and serving stack for our cutting-edge foundation models built on Transformers, SSMs, and hybrid architectures.

  • Work closely with our research team and product engineers to serve our suite of products in a fast, cost-effective, and reliable manner.

  • Design and build robust inference infrastructure and monitoring for our products.

You'll have significant autonomy to shape our products and directly impact how cutting-edge AI is applied across various devices and applications.

What we’re looking for

Given the scale and difficulty of the problems we work on, we value strong engineering skills at Cartesia.

  • Strong engineering skills and comfort navigating complex codebases and monorepos.

  • An eye for craft and a commitment to writing clean, maintainable code.

  • Experience building large-scale distributed systems with high demands on performance, reliability, and observability.

  • Technical leadership with the ability to execute and deliver zero-to-one results amidst ambiguity.

  • Experience designing best practices and processes for monitoring and scaling large-scale production systems.

  • Experience building inference pipelines for machine learning and generative models.

  • Experience working with CUDA, Triton, or similar.

Our culture

🏢 We’re an in-person team based in San Francisco. We love being in the office, hanging out together, and learning from each other every day.

🚢 We ship fast. All of our work is novel and cutting edge, and execution speed is paramount. We hold a high bar, and we don’t sacrifice quality or design along the way.

🤝 We support each other. We have an open and inclusive culture that’s focused on giving everyone the resources they need to succeed.

Our perks

🍽 Lunch, dinner, and snacks at the office.

🏥 Fully covered medical, dental, and vision insurance for employees.

🏦 401(k).

✈️ Relocation and immigration support.

🦖 Your own personal Yoshi.