
Senior Manager
- Noida, Uttar Pradesh
- Permanent
- Full-time
Responsibilities:
- Design Databricks-based data architectures and models to meet business requirements.
- Develop event-driven and scalable GenAI application architectures.
- Refactor and rehost legacy applications onto cloud platforms.
- Design, develop, and optimize data pipelines and ETL processes using Azure Databricks (PySpark/Spark SQL).
- Implement data security best practices and ensure compliance with industry standards.
- Collaborate with data architects, analysts, and other developers to deliver data solutions aligned with business requirements.
- Translate business requirements into technical solutions.
- Perform data wrangling, cleansing, transformation, and aggregation from multiple sources.
- Implement and maintain data lake and data warehouse solutions using Azure services (ADLS, Synapse, Delta Lake).
- Monitor pipeline performance, troubleshoot issues, and ensure data integrity and reliability.
- Develop and maintain CI/CD pipelines for Databricks workflows and jobs.
- Participate in code reviews, unit testing, and documentation of data processes.
Requirements:
- Strong experience with Databricks, Delta Lake, PySpark/Scala Spark, and Unity Catalog.
- Proficiency in Python and SQL, and familiarity with relational and NoSQL databases.
- Experience with cloud platforms (Azure, AWS, GCP) and Infrastructure as Code tools.
- Knowledge of CI/CD, DevSecOps, Kafka, MLflow, and Structured Streaming.
- Solid understanding of Azure data services: Azure Data Lake Storage (ADLS) and Azure Data Factory (ADF).
- Strong problem-solving skills and ability to work independently or as part of a team.
- Databricks certifications (e.g., Certified Data Engineer, Associate Developer) preferred.
- Excellent communication skills.