Machine Learning Engineer
Percepta
Location: New York City
Employment Type: Full-time
Location Type: On-site
Department: Product & Engineering
Who we are
Percepta’s mission is to transform critical institutions with applied AI. We want the industries that power the world (e.g., healthcare, manufacturing, energy) to benefit from frontier technology.
To make that happen, we embed with industry-leading customers to drive AI transformation. We bring together:
Forward-deployed expertise in engineering, product, and research
Mosaic, our in-house toolkit for rapidly deploying agentic workflows
Strategic partnerships with Anthropic, McKinsey, AWS, companies within the General Catalyst portfolio, and more
Our team is a fast-growing group of Applied AI Engineers, Embedded Product Managers, and Researchers motivated by turning the promise of AI into improvements we can feel in our day-to-day lives.
Percepta is a direct partnership with General Catalyst, a global transformation and investment company.
About the role
We’re hiring Machine Learning Engineers who will work directly within customer teams to define and deliver high-impact AI systems. We don’t build prototypes or laboratory projects: you’ll design, build, and ship production-grade AI agents and workflows that drive millions in business value for customers.
Our Machine Learning Engineers:
Engineer and optimize AI/ML systems: Build end-to-end ML pipelines for data ingestion, training, evaluation, and deployment. Adapt and extend LLMs with fine-tuning, distillation, retrieval systems, and tool use to solve domain-specific problems.
Evaluate AI systems rigorously: Develop custom evaluations to ensure models succeed in real-world environments.
Bring frontier methods into practice: Track the latest techniques in areas like RAG, tool use, multi-step agent orchestration, fine-tuning, and evaluation frameworks, and apply them to specific customer challenges.
Collaborate across product and research: Partner with research and product teams to turn frontier techniques into production-ready features and workflows.
Advance our core product: Encode the lessons from our customer engagements into our Mosaic product, consistently contributing reusable ML components, infrastructure abstractions, and performance improvements.
What we’re looking for
AI-nativeness: You're excited about the potential for AI to transform businesses and want to play a hands-on role in bringing frontier technology into critical institutions.
Strong ML foundations: Hands-on experience building and deploying production models and AI systems.
A generative, collaborative mindset: You love jamming on new “what if” ideas with teammates and partners to bridge applied engineering, product, and research efforts.
Extreme ownership: You’re willing to jump in and love being the one on the hook. You aren’t going to wait to be pointed at a task—you’re going to identify what you think we should do next, and then do it.
Execution excellence and speed: You can build in messy environments and know how to get code written and shipped quickly. You balance speed and quality, and you know when to push the pace and when to slow down.
Customer obsession and respect: You’re motivated by understanding customer pain points and iterating directly with end users to deliver wins quickly.
Bonus if you have
Hands-on experience with LLM tooling (e.g., LangGraph, Mastra, Agents SDK).
Experience fine-tuning, distilling, and deploying LLMs or other foundation models in production.
Background in retrieval, RAG pipelines, or multi-step agent design (including tool use and human-in-the-loop systems).
Strong engineering foundations in Python/TypeScript, cloud deployment (AWS/GCP/Azure), and modern MLOps/DevOps tooling.
Prior startup or founding engineer experience, balancing craft, ownership, and speed.
We’re working toward an incredibly ambitious mission. It won’t be easy, but it will likely be the most fulfilling work of your career. If this excites you, let’s chat, even if you don’t meet all of the qualifications above.