Sr. Data Engineer

Verta

Data Science
Costa Rica · Remote
Posted on Feb 4, 2026

Business Area:

IT

Seniority Level:

Mid-Senior level

Job Description:

At Cloudera, we empower people to transform complex data into clear and actionable insights. With as much data under management as the hyperscalers, we're the preferred data partner for the top companies in almost every industry. Powered by the relentless innovation of the open source community, Cloudera advances digital transformation for the world’s largest enterprises.

Cloudera Data Engineers are key contributors to Cloudera's Data Platform, responsible for ingesting, curating, and provisioning data to support Business Operations, Analytics, and AI/ML initiatives. In this role, you'll design and implement data pipelines, utilizing a broad array of Big Data and Cloud-based technologies to manage complex data ingestion and transformation workflows with diverse requirements and SLAs. These pipelines support complex machine learning workflows and business intelligence systems.

As a Cloudera Data Engineer, you will deepen your expertise in data ingestion and transformation while ensuring the quality and integrity of data pipelines. You’ll work in an agile development environment, creating flexible and reusable pipelines based on the specifications provided by Data, Business Intelligence, and Operations Architects. Beyond traditional engineering, you will focus on modernizing development workflows and building internal tools that enable the team and business users to interact with data more efficiently.

A successful candidate will contribute to building robust data management processes on Cloudera's native Data Platform. These processes will support internal analytics and serve as models to drive external customer success.

As a Senior Data Engineer, you will:

  • Collaborate with Data Architects, Operational Architects, and Data Analysts to understand the data and operational requirements across different business units.

  • Partner with data owners to ensure seamless, reliable data ingestion.

  • Develop and implement data transformations to enrich and provision data, following established specifications and standards.

  • Create real-time, near real-time, and point-in-time data flows to meet the operational demands of business systems.

  • Implement monitoring processes to track data quality and ensure the reliability of data services.

  • Leverage AI-orchestrated development techniques (such as "Vibe Coding") to accelerate the delivery of new data pipelines and reduce the end-to-end development lifecycle.

  • Design and deploy internal self-service tools, including automated documentation generators and natural language interfaces, to empower business users and reduce routine engineering requests.

  • Promote and standardize AI-first engineering workflows to ensure high-quality, auto-validated, and well-documented code delivery across the team.

We are excited if you have (Required Experience):

  • 5+ years of experience as a Data Engineer.

  • Proficiency in coding with Python (primary) and SQL, with experience in ETL, Business Intelligence, and data processing.

  • Proven track record of contributing to the architecture and implementation of reliable and scalable data pipelines.

  • Hands-on experience with Distributed Systems and Big Data technologies, including Spark and the Hadoop ecosystem (Hive, Impala, Kafka).

  • Proven proficiency in Data Modeling using industry best practices (e.g., Kimball, Inmon) to ensure data integrity.

  • Hands-on experience or strong proficiency in using AI-assisted coding tools (Copilots/LLMs) to modernize engineering workflows ("Vibe Coding").

  • Ability to monitor critical data pipelines for quality and resolve any issues effectively.

  • Education: Bachelor’s degree in Computer Science, Engineering, Information Systems, or a related field.

  • Strong communication skills, both written and verbal.

You may also have:

  • Experience with Apache Airflow or Apache NiFi.

  • Expertise in optimizing data storage using HDFS/Parquet/Avro, Kudu, or HBase.

  • Experience developing data engineering processes to support AI/ML use cases.

  • Familiarity with building automation scripts or interfaces that leverage LLMs for data discovery or documentation.

What you can expect from us:

  • Generous PTO Policy

  • Support for work-life balance with Unplugged Days

  • Flexible WFH Policy

  • Mental & Physical Wellness programs

  • Phone and Internet Reimbursement program

  • Access to Continued Career Development

  • Comprehensive Benefits and Competitive Packages

  • Paid Volunteer Time

  • Employee Resource Groups

EEO/VEVRAA

#LI-MH1

#LI-REMOTE