Key Responsibilities:
- Design, develop, and maintain scalable, high-performance big data processing solutions.
- Build data pipelines using big data technologies (e.g., Hadoop, Spark, Hive, Kafka).
- Perform data extraction, transformation, and loading (ETL) from various sources (see the illustrative sketch after this list).
- Optimize data processing workflows for performance and reliability.
- Work closely with data engineers, data scientists, and analysts to ensure high-quality data delivery.
- Implement data governance, data quality, and security best practices.
- Troubleshoot issues related to big data jobs, data inconsistencies, and performance bottlenecks.
- Maintain documentation and ensure code is reusable and well-structured.
- Participate in Agile development practices, including sprint planning, code reviews, and continuous integration.
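As a rough illustration of the ETL work described above, the sketch below shows a minimal batch pipeline in PySpark: it extracts raw JSON events, applies a few cleaning transformations, and loads partitioned Parquet for downstream Hive or analytics access. The paths, column names, and application name are hypothetical, chosen only for the example; real pipelines would add schema enforcement, error handling, and incremental loads.

```python
# Illustrative only: a minimal PySpark ETL sketch. All paths and column
# names (event_id, event_ts) are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-etl").getOrCreate()

# Extract: read raw events from a hypothetical landing zone.
raw = spark.read.json("/data/landing/events/")

# Transform: drop malformed rows, parse the timestamp, derive a date column.
clean = (
    raw.filter(F.col("event_id").isNotNull())
       .withColumn("event_ts", F.to_timestamp("event_ts"))
       .withColumn("event_date", F.to_date("event_ts"))
)

# Load: write partitioned Parquet for downstream Hive/analytics access.
(clean.write
      .mode("overwrite")
      .partitionBy("event_date")
      .parquet("/data/warehouse/events/"))

spark.stop()
```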