Data Operations Lead
WPP
- Chennai, Tamil Nadu
- Permanent
- Full-time
- Lead the team responsible for data integration, orchestration, and platform operations across Azure and GCP environments
- Build and maintain automated CI/CD pipelines using GitHub Actions and Databricks Asset Bundles for data workflows
- Design platform-wide monitoring, logging, and alerting covering pipeline execution, data quality, SLA compliance, and cost
- Manage production deployments and own centralized job orchestration via Databricks Jobs/Workflows (scheduling, dependencies, retries, notifications)
- Implement comprehensive data quality testing and validation using dbt's testing framework and custom checks
- Create and roll out standard operating procedures, runbooks, onboarding guides, and automation patterns
- Establish reusable deployment frameworks and project templates for consistent implementation across markets
- Build and maintain the Central Data Solution Repository with reusable assets (dlthub pipelines, dbt models, tests, macros)
- Define repeatable deployment patterns ensuring standards and best practices are followed globally
- Set up and manage SLAs, incident workflows, and escalation models
- Proactively identify and resolve operational risks in cloud-based data platforms
- Partner with DevOps to leverage Databricks Unity Catalog for governance, access control, and lineage
- Enforce data quality standards and governance policies embedded within pipeline code
- Lead and mentor the Data Integration & Operations team in Chennai
- Shape the team's operating model, priorities, and capabilities
- Act as subject matter expert and escalation point for technical operations
- Facilitate regular retrospectives to drive continuous improvement and optimize processes
- Collaborate with Data Engineering, DevOps, and Product teams globally
- 7+ years in data operations, platform engineering, or data engineering with hands-on production experience
- 3+ years in a technical leadership or team lead capacity, with proven ability to build and scale teams
- Demonstrated experience establishing DataOps practices and operational frameworks from the ground up
- Deep, hands-on experience with the Databricks platform, specifically:
  - Databricks SQL Warehouses, Unity Catalog, Jobs/Workflows
  - Databricks Asset Bundles for Infrastructure as Code
  - Delta Lake performance optimization
- Strong proficiency with dbt (Core): project structure, macros, tests, documentation, CI/CD integration, production troubleshooting
- Expert-level cloud platform knowledge (Azure and/or GCP):
  - Object storage (ADLS/GCS), secret management (Key Vault/Secret Manager)
  - Infrastructure as Code (Terraform)
- CI/CD mastery with GitHub Actions: workflow design, secrets management, automated testing and deployment
- Proficient in Python for automation and scripting (experience with dlthub is a major plus)
- Strong SQL skills for data validation, quality testing, and troubleshooting
- Proven ability to design operational frameworks: monitoring, alerting, incident management, SLA tracking
- Experience managing production data platforms with high uptime requirements
- Excellent communication skills with the ability to articulate technical concepts to diverse audiences
- Strong documentation skills and commitment to knowledge sharing
- Experience working in multi-cloud environments (Azure + GCP)
- Familiarity with Power BI or similar BI platforms
- Understanding of FinOps practices for cloud cost optimization
- Real, hands-on project experience building and running production data platforms at scale
- Proven leadership in establishing DataOps practices and growing high-performing teams
- Deep technical expertise in the Databricks ecosystem and modern data stack
- Entrepreneurial mindset: comfortable with ambiguity and building systems from the ground up