KAVAK is the number one e-commerce platform for buying and selling pre-owned vehicles and the most highly valued private startup in Latin America. Kavak is also the first unicorn company in the history of Mexico, with a current valuation of US$8.7B, nearly nine unicorns combined.
Our team is the number one priority. We like to imagine the impossible and work together to achieve it. We are disruptive, proactive, loyal, and ambitious. KAVAK has been operating in Latin America in recent years and has been growing exponentially; our goal is global. We have more than 7,000 Kavakees on the team and counting. As part of this accelerated growth, we want to surround ourselves with the best talent to continue building the technology, processes, and products that will allow us to transform the automotive market around the world.
What will be your mission?
- Design, build, and maintain scalable data pipelines and infrastructure to support data-driven initiatives
- Collaborate with data scientists, analysts, and other stakeholders to understand their data requirements and provide solutions to support their work
- Build and maintain data warehouses and data lakes, ensuring data quality and integrity
- Develop and maintain ETL processes to ingest data from various sources
- Build and maintain APIs and services to support data access and consumption
- Perform data modeling and analysis to support business decisions and drive insights
- Monitor and troubleshoot data systems to ensure data availability and performance
- Stay up-to-date with the latest technologies and industry trends in data engineering and analytics
What are we looking for?
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field
- 2+ years of experience in data engineering or a related field
- Strong knowledge of ETL processes, database design, and SQL
- Strong problem-solving skills and attention to detail
- Strong communication and collaboration skills
- Experience with cloud computing services
- Experience with distributed computing systems such as Hadoop, Spark, or AWS EMR
- Experience with programming languages such as Python
- Experience with cloud-based data warehousing solutions such as Amazon Redshift or Google BigQuery
Do you want to be part of this story? Apply now!