Key Responsibilities:
- Design, develop, and maintain scalable big data solutions
- Build and optimize data pipelines using tools like Spark, Kafka, and Hive
- Develop ETL processes to ingest and transform large volumes of data from multiple sources
- Collaborate with data scientists, analysts, and business stakeholders to support data needs
- Implement data quality, monitoring, and governance frameworks
- Optimize data storage and query performance on distributed systems
- Work with cloud-based platforms like AWS, Azure, or GCP for big data workloads
- Ensure data security, compliance, and privacy standards are met