Key Responsibilities:
- Design and implement robust, scalable, and high-performance data pipelines using GCP services (e.g., BigQuery, Dataflow, Pub/Sub, Cloud Composer); see the sketch after this list.
- Develop ETL/ELT processes to ingest, transform, and process large volumes of structured and unstructured data.
- Work with stakeholders to gather data requirements and translate them into technical specifications.
- Ensure data quality, integrity, security, and compliance across the data pipeline.
- Optimize query performance in BigQuery and other data stores.
- Monitor and troubleshoot data pipeline performance and reliability.
- Collaborate with data scientists, analysts, and application developers to support data initiatives.
- Maintain and enhance data models, schemas, and metadata repositories.
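To make the first responsibility concrete, here is a minimal sketch of the kind of streaming pipeline this role builds: an Apache Beam job (runnable on Dataflow) that reads JSON events from Pub/Sub, transforms them, and streams them into BigQuery. The project, topic, table, and field names are placeholders, not part of the original posting.

```python
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def parse_event(message: bytes) -> dict:
    """Decode a Pub/Sub message into a flat dict matching the BigQuery schema."""
    event = json.loads(message.decode("utf-8"))
    return {
        "event_id": event["id"],                      # hypothetical payload fields
        "event_type": event.get("type", "unknown"),
        "occurred_at": event["timestamp"],
    }


def run() -> None:
    # streaming=True marks this as an unbounded pipeline; the Dataflow
    # runner, project, and region are supplied via command-line flags.
    options = PipelineOptions(streaming=True)
    with beam.Pipeline(options=options) as pipeline:
        (
            pipeline
            | "ReadFromPubSub" >> beam.io.ReadFromPubSub(
                topic="projects/my-project/topics/raw-events"  # placeholder topic
            )
            | "ParseJson" >> beam.Map(parse_event)
            | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
                "my-project:analytics.events",  # placeholder dataset.table
                schema="event_id:STRING,event_type:STRING,occurred_at:TIMESTamp".upper(),
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            )
        )


if __name__ == "__main__":
    run()
```

In practice a job like this would be deployed on Dataflow and orchestrated alongside batch ETL/ELT workloads from Cloud Composer, with the BigQuery target table partitioned and clustered to keep query costs down.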