Hiring for Big Data and Hadoop (Chennai)

Responsibilities:
- Design and implement big data pipelines and ETL workflows using Hadoop ecosystem tools.
- Work with large-scale datasets to ingest, transform, and store data efficiently.
- Develop Hive/Spark scripts for data transformation, aggregation, and reporting (see the sketch after this list).
- Optimize the performance of Hadoop jobs and data processing frameworks.
- Collaborate with data scientists, analysts, and other engineers to understand data requirements.
- Implement and maintain data quality and monitoring solutions.
- Ensure data governance and security best practices across all stages of the data lifecycle.
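To illustrate the kind of Hive/Spark transformation work listed above, here is a minimal PySpark sketch of a daily aggregation job. The database, table, and column names (raw_db.sales_events, reporting_db.daily_sales, amount, event_ts, region) and the app name are hypothetical placeholders, not part of the role description.

```python
# Minimal PySpark sketch: read a raw Hive table, aggregate, and write a
# partitioned reporting table. All names below are assumed for illustration.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("daily-sales-aggregation")  # hypothetical job name
    .enableHiveSupport()                 # allow reading/writing Hive tables
    .getOrCreate()
)

# Ingest: read raw events from a Hive table (table name is an assumption)
raw = spark.table("raw_db.sales_events")

# Transform: drop invalid rows, derive the event date, aggregate per day/region
daily = (
    raw.filter(F.col("amount") > 0)
       .withColumn("event_date", F.to_date("event_ts"))
       .groupBy("event_date", "region")
       .agg(
           F.count("*").alias("order_count"),
           F.sum("amount").alias("total_amount"),
       )
)

# Store: write the aggregates back as a partitioned Hive table for reporting
(
    daily.write
         .mode("overwrite")
         .partitionBy("event_date")
         .saveAsTable("reporting_db.daily_sales")
)
```

Partitioning the output by event_date is one common choice here, since downstream reporting queries that filter on date can then prune partitions instead of scanning the full table.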