Key Responsibilities:
- Design and develop robust ETL pipelines using Python, PySpark, and GCP services (see the pipeline sketch after the qualifications list).
- Build and optimize data models and queries in BigQuery for analytics and reporting.
- Ingest, transform, and load structured and semi-structured data from various sources.
- Collaborate with data analysts, data scientists, and business teams to understand data requirements.
- Ensure data quality, integrity, and security across cloud-based data platforms.
- Monitor and troubleshoot data workflows and performance issues.
- Automate data validation and transformation processes using scripting and orchestration tools (see the validation sketch below).

Required Skills & Qualifications:
- Hands-on experience with Google Cloud Platform (GCP), especially BigQuery.
- Strong programming skills in Python and/or PySpark.
- Experience designing and implementing ETL workflows and data pipelines.
- Proficiency in SQL and data modeling for analytics.
- Familiarity with GCP services such as Cloud Storage, Dataflow, Pub/Sub, and Composer.
- Understanding of data governance, security, and compliance in cloud environments.
- Experience with version control (Git) and agile development practices.
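For illustration of the pipeline work described above, here is a minimal sketch of a PySpark job that reads semi-structured JSON events from Cloud Storage and appends them to BigQuery via the spark-bigquery connector (supplied via --jars, or preinstalled on Dataproc). The bucket, dataset, table, and column names (event_id, event_ts) are hypothetical placeholders, not part of this role's actual stack:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Extract: read semi-structured JSON events from Cloud Storage.
# "example-bucket" is a hypothetical placeholder.
raw = spark.read.json("gs://example-bucket/events/*.json")

# Transform: deduplicate, drop rows missing a timestamp,
# and derive a partition-friendly date column.
events = (
    raw.dropDuplicates(["event_id"])
       .filter(F.col("event_ts").isNotNull())
       .withColumn("event_date", F.to_date("event_ts"))
)

# Load: append to BigQuery through the spark-bigquery connector,
# staging via a temporary GCS bucket (also a placeholder name).
(
    events.write.format("bigquery")
    .option("table", "example_project.analytics.events")
    .option("temporaryGcsBucket", "example-staging-bucket")
    .mode("append")
    .save()
)
```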
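Likewise, a minimal sketch of the kind of automated data-quality check an orchestrator such as Composer/Airflow could run after each load, using the google-cloud-bigquery client. The table name, column names, and the specific checks are assumptions for illustration only:

```python
from google.cloud import bigquery

# Hypothetical table; swap in the real project/dataset/table.
TABLE = "example_project.analytics.events"

client = bigquery.Client()

# Validate that today's load produced rows and no NULL keys.
sql = f"""
    SELECT
      COUNT(*) AS row_count,
      COUNTIF(event_id IS NULL) AS null_keys
    FROM `{TABLE}`
    WHERE event_date = CURRENT_DATE()
"""
row = list(client.query(sql).result())[0]

# Raise so the orchestrator marks the task as failed.
if row.row_count == 0:
    raise ValueError(f"No rows loaded into {TABLE} today")
if row.null_keys > 0:
    raise ValueError(f"{row.null_keys} NULL event_id values in {TABLE}")

print(f"Validation passed: {row.row_count} rows, 0 NULL keys")
```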