Hiring for TCS: Big Data and Hadoop (Bangalore)

Key Responsibilities

- Design and implement Big Data pipelines using technologies like Hadoop, Hive, Spark, and Sqoop (a brief illustrative sketch follows this list).
- Develop scalable ETL processes for batch and real-time data ingestion and processing.
- Work with large datasets in a distributed computing environment.
- Optimize and tune MapReduce, Spark, and Hive queries for performance.
- Manage and monitor Hadoop clusters (using Cloudera, Hortonworks, or Amazon EMR).
- Collaborate with analytics, product, and engineering teams to understand data needs.
- Ensure data quality, governance, and security across big data platforms.
- Troubleshoot issues related to data jobs, cluster performance, and data integrity.
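To give candidates a concrete feel for the pipeline and ETL work above, here is a minimal PySpark batch-ETL sketch: read raw events from HDFS, clean them, and write a partitioned Hive table. All names in it (the HDFS path, column names, and the analytics.clean_events table) are hypothetical placeholders, not details from this posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hive support lets the job read and write Hive tables directly.
spark = (
    SparkSession.builder
    .appName("daily-events-etl")
    .enableHiveSupport()
    .getOrCreate()
)

# Ingest one day of raw JSON events from HDFS (hypothetical path).
raw = spark.read.json("hdfs:///data/raw_events/2024-01-01/")

# Basic cleaning: drop rows missing a user id, parse the timestamp,
# derive a date partition column, and deduplicate by event id.
clean = (
    raw.filter(F.col("user_id").isNotNull())
       .withColumn("event_ts", F.to_timestamp("event_time"))
       .withColumn("dt", F.to_date("event_ts"))
       .dropDuplicates(["event_id"])
)

# Persist as a date-partitioned Parquet-backed Hive table
# for downstream analytics and Hive/Spark SQL queries.
(
    clean.write
         .mode("overwrite")
         .format("parquet")
         .partitionBy("dt")
         .saveAsTable("analytics.clean_events")
)

spark.stop()
```

In day-to-day work a job like this would typically be packaged, launched with spark-submit, and scheduled through an orchestrator such as Oozie or Airflow.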