
BU Master Data Associate Director - Dev
- Hyderabad, Telangana
- Permanent
- Full-time
- Lead and scale a global Data Engineering & Operations team, fostering a culture of accountability, innovation, and continuous improvement.
- Drive operational excellence by establishing best practices for data pipeline monitoring, automation, and incident management using AI/ML-enabled observability tools.
- Oversee data infrastructure and workflows, ensuring performance, scalability, security, and compliance across cloud platforms (AWS, GCP, Azure).
- Define and track KPIs for data availability, quality, reliability, and incident resolution; present key metrics and insights to senior leadership regularly.
- Enable automation-first culture by reducing repetitive incidents and streamlining issue resolution processes through smart alerting, auto-remediation, and orchestration enhancements.
- Collaborate with cross-functional teams (data engineers, analysts, product, platform, and governance teams) to align operational priorities with enterprise data strategy.
- Ensure effective SLA adherence, capacity planning, change management, and risk mitigation across all critical data services.
- Mentor and grow talent within the team, conducting regular performance reviews, skill development planning, and succession pipeline development.
- Stay current with trends in AI for IT operations (AIOps), SRE practices, and emerging technologies to future-proof data operations.
- Leadership & Vision: Inspire and guide a high-performing team by setting clear goals, fostering accountability, and creating a culture of continuous improvement.
- Operational Excellence: Establish reliable, scalable, and efficient operational processes for monitoring, alerting, and incident response, focusing on SLAs, SLOs, and data uptime.
- AI-Enabled Optimization: Proactively identify repetitive incidents and leverage AIOps, machine learning, and automation to improve root cause analysis, reduce MTTR, and prevent recurrence.
- Cross-Functional Collaboration: Build strong partnerships with product, platform, and analytics teams to align operational priorities with business goals and data strategy.
- Strategic Communication: Translate operational KPIs, risk metrics, and platform health insights into executive-ready updates that support strategic decisions.
- Quality & Compliance Focus: Champion data quality, lifecycle management, and regulatory compliance (e.g., HIPAA, GDPR) within operational processes.
- Innovation & Scalability: Continuously evaluate tools, frameworks, and industry best practices to future-proof the data ecosystem and scale operations efficiently.
- Technical Expertise:
- Bachelor’s or Master’s degree in Computer Science, Information Systems, Data Engineering, or a related field.
- 12+ years of experience in data engineering, platform operations, or data infrastructure roles.
- Minimum 3 years of experience leading technical teams or managing global data operations.
- Hands-on expertise with data engineering tools (e.g., Spark, Databricks, Snowflake), orchestration platforms (e.g., Airflow, Control-M), and cloud services (AWS, Azure, or GCP).
- Strong understanding of data integration, data quality, observability, and monitoring at enterprise scale.
- Demonstrated experience in managing SLAs/SLOs, resolving production issues, and driving automation to reduce operational overhead.
- Exposure to AIOps or data operations automation practices is a strong plus.
- Strong communication skills and experience presenting metrics, KPIs, and strategic updates to senior leadership.
- Domain experience in healthcare, pharmaceutical (Customer Master, Product Master, Alignment Master, Activity, Consent, etc.), or regulated industries is a plus.
- Partner with and influence vendor resources on solution development, ensuring a shared understanding of the data and technical direction for solutions as well as delivery expectations.
- AWS Certified Data Engineer - Associate
- Databricks Certified Data Engineer (Associate or Professional)
- AWS Certified Solutions Architect (Associate or Professional)
- Familiarity with AI/ML workflows and integrating machine learning models into data pipelines