
Manager
- Gurgaon, Haryana
- Permanent
- Full-time

Responsibilities:
- Design, develop, and maintain scalable data pipelines and ETL workflows using Apache Spark.
- Build and support Java-based backend components, services, or APIs as part of end-to-end data solutions.
- Work with large-scale datasets to support transformation, integration, and real-time analytics.
- Optimize Spark, SQL, and Java processes for performance, scalability, and reliability.
- Collaborate with cross-functional teams to understand business requirements and deliver robust solutions.
- Follow engineering best practices in coding, testing, version control, and deployment.

Requirements:
- 5+ years of hands-on experience in software or data engineering.
- Proven experience in developing ETL pipelines using Java and Spark.
- Strong programming experience in Java (preferably with frameworks such as Spring or Spring Boot).
- Experience with Big Data tools, including Apache Spark and Apache Airflow, and with cloud data services.
- Proficiency in SQL and experience with performance tuning for large datasets.
- Familiarity with data modeling, warehousing, and distributed systems.
- Experience working in Agile development environments.
- Strong problem-solving skills and attention to detail.
- Excellent communication skills.

Preferred qualifications:
- Experience building and integrating RESTful APIs or microservices using Java.
- Exposure to data platforms like Snowflake, Databricks, or Kafka.
- Background in retail, merchandising, or media domains is a plus.
- Familiarity with CI/CD pipelines, DevOps tools, and cloud-based development workflows.