Researcher (Safety)
Amigo
Posted on May 13, 2025
About Amigo
We're helping enterprises build autonomous agents that reliably deliver specialized, complex services in healthcare, legal, and education with practical precision and human-like judgment. Our mission is to build safe, reliable AI agents that organizations can genuinely depend on. We believe superhuman-level agents will become an integral part of our economy over the next decade, and we've developed our own agent architecture to solve the fundamental trust problem in AI.
Role
As a Researcher (Safety) at Amigo, you'll develop advanced safety systems that ensure our agents operate reliably and stay aligned with human values and enterprise objectives. Working on our Research team, you'll create architectures that integrate safety into the core Memory-Knowledge-Reasoning (M-K-R) cycle rather than bolting on isolated filters. Your work will focus on building contextual safety mechanisms that adapt to specific situations while maintaining natural conversational flow. You'll also research how safety systems must evolve as we transition toward neuralese and higher-bandwidth reasoning. This research is fundamental to our alignment-first design principle and critical for deploying AI in high-stakes domains.
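To make the contrast concrete, here is a toy sketch of safety evaluated inside an M-K-R cycle versus an isolated filter. It is purely illustrative and not Amigo's implementation; every name in it (Context, isolated_filter, contextual_safety, the "clinical_protocol" flag) is hypothetical. The point it shows: the same query can be allowed, allowed with review, or red-lined for human escalation depending on the surrounding memory and knowledge state, where a context-free filter can only block.

    # Purely illustrative sketch; all names are hypothetical, not Amigo's API.
    from dataclasses import dataclass, field

    @dataclass
    class Context:
        memory: list = field(default_factory=list)    # signals from prior sessions
        knowledge: set = field(default_factory=set)   # domain protocols in force
        message: str = ""                             # the current user turn

    def isolated_filter(message: str) -> bool:
        """Context-free keyword filter: blocks valid clinical queries outright."""
        return "dosage" not in message.lower()

    def contextual_safety(ctx: Context) -> str:
        """Decide from the full M-K-R state, not the message in isolation."""
        # Red-lined boundary: cross-session suspicious signals trigger handoff.
        if any("suspicious" in m for m in ctx.memory):
            return "escalate_to_human"
        # In a clinical context the same query is permitted, with stricter review.
        if "clinical_protocol" in ctx.knowledge and "dosage" in ctx.message.lower():
            return "allow_with_escalation"
        return "allow"

    ctx = Context(memory=["suspicious pattern flagged in session 3"],
                  knowledge={"clinical_protocol"},
                  message="What dosage is appropriate for this patient?")
    print(isolated_filter(ctx.message))  # False: the blunt filter just blocks
    print(contextual_safety(ctx))        # escalate_to_human: context decides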
Responsibilities
Design multi-layered architectures that integrate safety at every level of our agent framework (context graphs, dynamic behaviors, functional memory, etc.)
Develop methods for contextualizing safety rather than applying simple filters, ensuring safety emerges from well-orchestrated M-K-R integration
Research techniques for global guidelines that embed foundational safety protocols at the structural level of context graphs
Create dynamic safety behaviors that adapt precisely to each situation's unique requirements
Develop real-time monitoring systems that analyze conversations for suspicious signals or concerning patterns
Research methodologies for red-lined boundaries that clearly delineate areas requiring strict protocols or human escalation
Build frameworks for post-processing safety analysis that detects subtle patterns across multiple sessions
Design audit systems with comprehensive trails across all safety mechanisms
Research how safety systems must evolve as we transition toward neuralese capabilities and higher-bandwidth reasoning
Systematically measure and benchmark safety across different architectural approaches
Collaborate with other research teams to ensure safety is integrated throughout our platform
Contribute to research publications and the broader field of AI safety and alignment
Qualifications
PhD or equivalent research experience in AI safety, machine learning, or related fields
Strong understanding of alignment challenges in advanced AI systems, particularly in high-stakes domains
Experience designing safety systems, guardrails, or alignment mechanisms for AI
Background in developing frameworks that balance safety with natural conversational flow
Familiarity with post-processing analytics, monitoring, and auditing systems
Understanding of how current architectural limitations affect safety and how future architectures might change this landscape
Strong programming skills with the ability to implement complex safety frameworks
Excellent research and analytical skills with the ability to design and run rigorous experiments
Strong communication skills for explaining complex safety concepts
Passion for ensuring AI systems remain safe and aligned with human values as capabilities increase
Location: NYC (Onsite)
To apply, send us your resume and anything else you'd like to careers@amigo.ai