Senior Data Engineer - SDE 3
ZAGENO
About the Company
ZAGENO offers the largest life sciences lab supply marketplace. Our one-stop shop helps scientists, lab managers, and procurement leaders compare products, source alternatives, track deliveries, and communicate order statuses in real time, accelerating innovation by saving valuable time. Leveraging advanced AI, ZAGENO enhances supply chain resilience and delivers a superior customer experience, seamlessly integrating with existing systems to boost productivity and make online shopping for research materials convenient, efficient, and reliable. We are committed to innovation, excellence, and fostering a supportive and dynamic work environment.
About the Role
ZAGENO is hiring a Senior Data Engineer (5-7 years experience) to design, own, and evolve ZAGENO’s data platform and data-driven pipelines at scale. This role goes beyond implementation and focuses on platform architecture, reliability engineering, and long-term scalability of data systems across the ZAGENO ecosystem.
You will be responsible for end-to-end ownership of data pipelines and the data platform, ensuring high availability, performance, and cost efficiency across development, staging, and production environments. The role requires strong expertise in data platform design, operational excellence, and proactive system improvements, rather than reactive issue handling alone.
In addition to building and maintaining data pipelines, you will play a key role in defining best practices, setting engineering standards, and influencing platform strategy, while collaborating closely with product, analytics, and engineering teams. You will lead efforts around incident prevention, capacity planning, release readiness, and continuous optimization of data operations.
In this role, you will:
- Architect and develop data pipelines to optimize for performance, quality, and scalability
- Collaborate with data, engineering, and business teams to build tools and data marts that enable analytics
- Develop testing and monitoring to improve data observability and data quality
- Partner with data science to build and deploy predictive models
- Support code versioning and code deployments for data pipelines
About You
- 5-7 years of experience as a data engineer
- Strong experience with data lakes, data warehouses, and Delta tables
- Strong SQL skills are required (we currently use Databricks SQL warehouse)
- Strong experience with Spark and big data technologies
- Strong hands-on skills in PySpark and SQL
- Experience orchestrating and scheduling data pipelines is required
- Expertise with cloud environments (AWS or GCP) is a must
- Experience with cloud data warehouses and distributed query engines is a plus
- Proactive and keen attention to detail
Our benefits
- Working for a mission-driven business on a meaningful challenge with a positive impact on the scientific community
- A clear growth perspective
- A learning and development budget to enable your ambitions to grow professionally in your field
- A professional and dynamic team with a global vision and mindset
- An exciting, international working environment - we have more than 40 nationalities!
- We’ve got your health benefits (medical, dental, and vision)
- Hybrid work, with 3 days per week in our Marathahalli, Bangalore office
- Staying healthy and fit is essential, so we cover part of your gym membership!
- Holidays and flexible PTO
- A budget to improve your home office environment