Senior Software Engineer

inDrive

Software Engineering
Cyprus
Posted on Jan 20, 2026

Key Responsibilities

Performance Audits & Integration Support:

Make Go/No-Go decisions with our stakeholders about launching services before integrating them into our ecosystem, and support those integrations (auditing APIs, web services, databases, caches, and message brokers)

Design & Execute a Global Load Testing Strategy:
  • Model load testing scenarios (load, stress, soak, peak, and find-max tests), define the methodology, and formulate non-functional requirements for systems using k6 and JMeter (DSL framework)
  • Simulate real-world scenarios and traffic patterns to validate systems under production-like conditions (including high RPS, concurrency levels, data skew, and geo-distributed load)
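The staged scenarios above (ramp-up, steady state, ramp-down) can be sketched as a pure function of elapsed time; this is a minimal illustration, and the stage durations and target rates are hypothetical, not real traffic figures:

```go
package main

import "fmt"

// Stage is one step of a load profile: ramp to TargetRPS over DurationSec seconds.
type Stage struct {
	DurationSec int
	TargetRPS   float64
}

// RPSAt returns the target request rate at elapsed second t,
// linearly interpolating within each stage from the previous stage's target.
func RPSAt(stages []Stage, t int) float64 {
	prev := 0.0
	elapsed := 0
	for _, s := range stages {
		if t < elapsed+s.DurationSec {
			frac := float64(t-elapsed) / float64(s.DurationSec)
			return prev + (s.TargetRPS-prev)*frac
		}
		elapsed += s.DurationSec
		prev = s.TargetRPS
	}
	return prev // after the last stage, hold the final target
}

func main() {
	// Hypothetical spike-test profile: ramp to 100 RPS over 60s, hold 120s, ramp down.
	profile := []Stage{{60, 100}, {120, 100}, {60, 0}}
	fmt.Println(RPSAt(profile, 30))  // mid ramp-up: 50
	fmt.Println(RPSAt(profile, 120)) // steady state: 100
}
```

The same shape maps directly onto k6's staged executors or a JMeter ramp-up schedule.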

Load Modeling:
  • Calculate and validate detailed load models – e.g. expected RPS, concurrent users, transaction volumes, and data distribution.
  • Adjust these models to mirror peak and edge-case scenarios, ensuring our capacity planning is grounded in reality.
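A common sanity check for such load models is Little's Law: concurrent in-flight requests ≈ arrival rate × mean response time. A minimal sketch of that arithmetic (the daily volume, peak factor, and latency below are illustrative assumptions):

```go
package main

import (
	"fmt"
	"math"
)

// ConcurrentRequests applies Little's Law: L = lambda * W,
// where lambda is the arrival rate (RPS) and W is mean response time in seconds.
func ConcurrentRequests(rps, meanLatencySec float64) float64 {
	return rps * meanLatencySec
}

// PeakRPS derives an expected peak rate from daily request volume and a
// peak-to-average factor, a common first-order load model to validate
// against real traffic.
func PeakRPS(requestsPerDay, peakFactor float64) float64 {
	avg := requestsPerDay / (24 * 3600)
	return math.Ceil(avg * peakFactor)
}

func main() {
	rps := PeakRPS(8_640_000, 3) // illustrative: 8.64M req/day, 3x peak factor
	fmt.Println(rps)             // avg 100 RPS -> peak 300
	fmt.Println(ConcurrentRequests(rps, 0.25)) // 300 RPS * 250ms = 75 in flight
}
```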

Bottleneck Analysis & Fixes:

Find, analyze, and fix performance bottlenecks. This includes:
  • backend issues (CPU hotspots, memory leaks, GC pauses, thread pool exhaustion)
  • DB inefficiencies (Aurora MySQL/RDS indexing, query locks, IOPS limits)
  • messaging/queue back-pressure (Kafka partitioning, consumer lag, throughput limits)
  • network constraints (latency, load balancer settings, timeouts)
Work hands-on with SRE, DevOps, and Backend teams to implement optimizations and verify the improvements.

Prepare for Production Traffic:

Ensure services are production-ready. Drive capacity planning exercises and stress tests to determine scaling lag/limits and failure modes. Verify that services can gracefully handle traffic spikes or component failures without violating SLAs or SLOs. Help partners configure autoscaling policies for sustained performance.
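For reference, Kubernetes' Horizontal Pod Autoscaler sizes replicas as desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric); a small Go sketch of that arithmetic as used in capacity planning (the utilization figures are illustrative):

```go
package main

import (
	"fmt"
	"math"
)

// RequiredReplicas implements the HPA scaling formula:
// desired = ceil(current * currentMetric / targetMetric).
func RequiredReplicas(current int, currentUtil, targetUtil float64) int {
	return int(math.Ceil(float64(current) * currentUtil / targetUtil))
}

func main() {
	// 4 replicas running at 180% of the target CPU metric (target 100%) -> scale to 8.
	fmt.Println(RequiredReplicas(4, 180, 100)) // 8
	// Capacity question: how many replicas absorb a 2.5x traffic spike?
	fmt.Println(RequiredReplicas(6, 250, 100)) // 15
}
```

A stress test then checks whether the real system reaches that replica count before SLOs are violated, i.e. whether scaling lag is acceptable.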

Load Testing Infrastructure Setup & Tuning:

Set up, audit, and operate performance testing infrastructure: load testing automation for our services and a geo-distributed load generator to simulate production traffic

Performance Analysis & Experimentation:

  • Approach performance issues with a performance analyst/architect mindset – form hypotheses, design experiments, run tests, and draw conclusions to tune infrastructure configuration and fix backend code.
  • For example, if latency is high, hypothesize the cause (e.g. a missing database index), test the hypothesis, and then make the fix. Repeat this iterative process to methodically improve system performance.
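Experiments like these hinge on comparing tail latencies before and after a change. A minimal p95/p99 sketch using the nearest-rank percentile method (the sample values are made up):

```go
package main

import (
	"fmt"
	"math"
	"sort"
)

// Percentile returns the nearest-rank p-th percentile of latency samples (ms).
func Percentile(samples []float64, p float64) float64 {
	s := append([]float64(nil), samples...) // copy so the caller's slice stays unsorted
	sort.Float64s(s)
	rank := int(math.Ceil(p / 100 * float64(len(s))))
	if rank < 1 {
		rank = 1
	}
	return s[rank-1]
}

func main() {
	// Hypothetical latency samples in ms: 1, 2, ..., 100.
	samples := make([]float64, 0, 100)
	for i := 1; i <= 100; i++ {
		samples = append(samples, float64(i))
	}
	fmt.Println(Percentile(samples, 95)) // 95
	fmt.Println(Percentile(samples, 99)) // 99
}
```

Comparing p95/p99 rather than the mean is what makes a regression from a missing index visible, since index scans hurt the tail first.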

Business Impact Translation:

Translate technical performance insights into business risks and opportunities. For instance, demonstrate how reducing latency or improving p95 response time can increase conversion rates or GMV. Communicate these insights to both engineers and non-technical stakeholders to underline the business value of performance work.

Observability & Monitoring:
Utilize observability tools to monitor performance during tests. Set up dashboards and alerts in Grafana/Prometheus and other systems (e.g. New Relic, Dynatrace, Zabbix, ELK Stack, Coroot), and analyze traces and logs. Use established methods (e.g. RED/USE) to interpret results and guide troubleshooting.

Performance Tooling:
Build internal performance tooling in Golang: design and maintain internal tools for performance testing automation and analysis, offered as Load as a Service for product teams – traffic generators, load managers, mocks, test stubs, data feeders, and helper services used across partner integrations. Build load test scenarios in JMeter (DSL framework) and in k6 using JavaScript. Innovate on tooling to make load tests easier to create, run, and analyze for the whole team.

CI/CD Integration:
Integrate performance tests into our CI/CD pipeline as a quality gate (QG). Automate load test runs so that every new integration or code change is evaluated for performance regressions. Ensure that build pipelines can catch and flag performance drops (e.g. increased response times, higher resource usage) before changes reach production.
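A quality gate of this kind typically reduces to a threshold comparison against a stored baseline. A hedged sketch of that decision logic (the metric names and tolerances are assumptions, not the team's actual thresholds):

```go
package main

import "fmt"

// Gate compares a candidate build's metrics against a baseline and fails
// the pipeline when regression exceeds the allowed tolerance.
type Gate struct {
	MaxP95GrowthPct float64 // fail if p95 latency grows more than this percentage
	MaxErrorRate    float64 // absolute cap, e.g. 0.01 = 1%
}

// Pass returns whether the candidate may ship, plus a human-readable reason.
func (g Gate) Pass(baselineP95, candidateP95, errorRate float64) (bool, string) {
	growth := (candidateP95 - baselineP95) / baselineP95 * 100
	if growth > g.MaxP95GrowthPct {
		return false, fmt.Sprintf("p95 regression: +%.1f%% (limit %.1f%%)", growth, g.MaxP95GrowthPct)
	}
	if errorRate > g.MaxErrorRate {
		return false, fmt.Sprintf("error rate %.3f above cap %.3f", errorRate, g.MaxErrorRate)
	}
	return true, "ok"
}

func main() {
	g := Gate{MaxP95GrowthPct: 10, MaxErrorRate: 0.01}
	ok, reason := g.Pass(200, 230, 0.002) // p95 grew 15% -> block the release
	fmt.Println(ok, reason)
	ok, _ = g.Pass(200, 210, 0.002) // +5%, within tolerance -> pass
	fmt.Println(ok)
}
```

In CI, the gate's boolean maps to the pipeline's exit code, so a regression fails the build just like a broken unit test.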

Reporting & Communication:
Prepare executive-ready performance reports for engineering teams and C-level stakeholders. Highlight key metrics (p95/p99 latency, throughput, error rates, saturation signals), identify primary scalability bottlenecks across services, databases, caches, and messaging systems, and compare results against SLOs and capacity targets. Provide clear root-cause explanations and actionable recommendations, including performance vs cost trade-offs, to support production readiness and Go/No-Go decisions.

Performance Culture:
  • Champion a culture where performance = product quality.
  • Educate and influence teams to treat performance testing as an integral, continuous part of the development lifecycle rather than a one-time task.
  • Share best practices and success stories, ensuring that performance considerations are baked into design, coding, and testing processes from the start.
  • Conduct production load and traffic validation experiments with full ownership of risk assessment, blast radius control, observability, and rollback procedures.
  • Work with geo-distributed, high-load systems (200k+ RPS) and latency-sensitive mobile flows.
  • Set up production-like performance environments in AWS.
  • Conduct and model load tests in production.
  • Drive the full loop: performance analytics → hypotheses → experiments → fixes → measurable business impact (latency, conversion, GMV).

Skills, Knowledge and Expertise

5+ years in Performance/SRE/Load Testing
  • Strong programming skills in Golang/Kotlin/Java
  • Strong k6 or JMeter DSL – performance testing mastery: hands-on expertise with modern load testing tools, especially k6 and/or JMeter (and their scripting DSLs). Deep understanding of performance best practices – you know when to apply load testing vs stress testing vs soak testing, and how to interpret the results of each.
  • Hands-on experience with AWS infrastructure for high-load systems: EKS, EC2, Auto Scaling Groups, ALB/NLB, VPC, subnets, routing, security groups
  • Designing and tuning autoscaling strategies for compute (KEDA/Karpenter/Cluster Autoscaler, EC2 scaling policies)
  • Understanding network performance characteristics: latency, cross-AZ traffic, NAT/egress limits, load balancer behavior under high RPS
  • Profiling Golang, Kafka, Redis, DB tuning experience
  • Load testing automation and building quality gates (JVM languages and Go)
  • SRE mindset (RED/USE, SLO, error budgets)

Benefits

  • Stable salary, official employment.
  • Health insurance.
  • Hybrid work mode and flexible schedule.
  • Relocation package offered for candidates from other regions.
  • Access to professional counseling services including psychological, financial, and legal support.
  • Discount club membership.
  • Diverse internal training programs.
  • Partially or fully paid additional training courses.
  • All necessary work equipment.