ML Research Engineer Lead
Martian
About Martian:
Martian is doing for LLMs what Google did for websites. In the early internet, the number of websites was exploding and it was hard to figure out which website to use for which task. Google fixed that problem by building a search engine that aggregated websites across the internet. A similar problem exists in AI today: the number of models is exploding and it's hard to figure out which model to use for which task. Martian fixes that problem with a model router: you give us your prompt, and we run it on the best model in real time.
We can do this because we've learned how to predict a model's performance without running it. That lets us find the model that can complete your request at the highest performance and lowest cost. The value proposition is simple: stop worrying about AI, start focusing on product.
That idea, making it so people can stop worrying about AI, is the core of what we do. Model routing is just the first tool we're building to understand how models behave. By pioneering techniques like this, we want to solve the most fundamental problem in AI: understanding why models behave the way they do, and creating guarantees that they'll behave the way we want.
About the role:
Building a good model router is technically challenging. It requires understanding and measuring model performance (through both quantitative metrics and human preferences), developing more effective routing systems (better data, new model architectures, query rewriting, and fundamental research into how models work), and productionizing the tools we build (reducing latency, lowering model costs, MLOps). We're looking for someone to take on those challenges with Martian by building and leading our ML Eng org.
In this role you will:
Work alongside the founders to develop model-routing technology and bring it into production.
Introduce new techniques, tools, and architectures that improve the performance, latency, throughput, and efficiency of our deployed models.
Set up core infrastructure and best practices for model hosting and training.
Build out and help manage the ML Eng team.
You'll thrive in this role if you:
Have an understanding of modern ML architectures and an intuition for how to optimize their performance, particularly for inference.
Have deployed deep learning models, especially LLMs, in production.
Have a good understanding of the model-evaluation literature, and have trained reward models from human or AI feedback or worked with LLM-as-judge approaches.
Own problems end-to-end, and are willing to pick up whatever knowledge you're missing to get the job done.
Have at least 7 years of professional software engineering experience (particularly in an ML Engineering role) or equivalent.
Are self-directed and enjoy figuring out the most important problem to work on.
Have a good intuition for when off-the-shelf solutions will work, and build tools to accelerate your own workflow quickly if they won’t.
What We Offer:
Competitive salary and equity packages
Health, dental, and vision insurance plans
Unlimited PTO
Daily lunch
Team dinners