
Technical Lead - Data
- Bangalore, Karnataka
- Permanent
- Full-time
What you'll get:
- Take ownership of and be responsible for what you build - no micromanagement
- Work with A players (some of the best talent in the country) and accelerate your learning and career growth
- Make in India and build for the world at a scale of 1.2 billion active users, which no other internet company in the country has seen
- Learn from different teams how they scale to millions of users and billions of messages.
- Explore the latest in data pipelines, MongoDB, Elasticsearch, Kafka, Spark, and Samza, share what you learn with the team, and, more importantly, have fun while you work on scaling MoEngage.
Key goals:
- Achieving a 99.99th percentile SLA for ingestion pipeline latency.
- Enhancing system reliability by improving data accuracy and preventing data loss.
- Scaling pipelines to 5x current capacity while maintaining infrastructure cost efficiency.
- Improving overall system performance and observability.
Responsibilities:
- Lead technical initiatives: Own the architecture and delivery of complex, high-scale distributed ingestion and processing systems.
- Design & develop: Build scalable, reliable, and cost-efficient REST/gRPC services and streaming pipelines.
- Optimize performance: Improve data processing throughput, latency, and system resilience.
- Mentor & review: Guide engineers, conduct code reviews, and instill best practices in design and development.
- Innovate at scale: Select and implement the right tools, frameworks, and architectural patterns to meet extreme data and concurrency demands.
- Collaborate cross-functionally: Work with product managers, SREs, and other engineering teams to deliver robust data solutions.
- Champion quality: Advocate for test-driven development, operational excellence, and monitoring-first architectures.
Requirements:
- 8-12 years of experience designing and building high-scale, data-intensive applications.
- Deep expertise in Java (and relevant frameworks) for backend and data engineering.
- Proven experience with distributed data processing systems and ingestion pipelines at scale (mandatory).
- Strong understanding of data modeling, database design, and performance tuning (SQL & NoSQL) (mandatory).
- Hands-on experience with real-time data processing technologies: Apache Kafka, Flink, or Spark (mandatory).
- Solid experience with AWS cloud services, containerization, and orchestration (Docker, Kubernetes).
- Strong engineering fundamentals - algorithms, data structures, concurrency, and distributed systems.