Databricks

Virtusa

  • Chennai, Tamil Nadu
  • Permanent
  • Full-time
  • 1 day ago
Key Responsibilities:

  • Design, develop, and maintain scalable data pipelines using Apache Spark on Databricks (a brief sketch of such a pipeline follows the lists below).
  • Write efficient, production-ready PySpark or Scala code for data transformation and ETL processes.
  • Integrate data from various structured and unstructured sources into a unified platform.
  • Implement Delta Lake and manage data versioning, updates, and schema evolution.
  • Optimize data processing workflows for performance, scalability, and cost efficiency.
  • Collaborate with data scientists, analysts, and business stakeholders to deliver high-quality datasets.
  • Implement data quality checks, validation routines, and logging mechanisms.
  • Monitor and debug production jobs using Databricks jobs, notebooks, and clusters.
  • Ensure security, privacy, and compliance standards are met throughout the data lifecycle.
  • Provide guidance and mentorship to junior team members.

Required Skills & Qualifications:

  • 4 to 6 years of experience in Big Data development.
  • Hands-on experience with Databricks, including Workflows, Notebooks, Delta Live Tables, and Unity Catalog.
  • Strong programming skills in PySpark and/or Scala.
  • Solid understanding of Delta Lake architecture.
  • Proficiency in SQL for data analysis and transformation.
  • Experience with cloud platforms such as Azure (Azure Data Lake, Data Factory, Synapse) or AWS (S3, Glue, Redshift).
  • Familiarity with CI/CD for Databricks deployments (e.g., GitHub Actions, Azure DevOps).
  • Knowledge of data governance, cataloguing, and security best practices.
  • Experience working in an Agile/Scrum environment.

Preferred Skills:

  • Experience with Databricks Unity Catalog and Delta Live Tables.
  • Exposure to machine learning workflows in Databricks.
  • Experience with Apache Airflow, Kafka, or other orchestration/messaging tools.
  • Certifications such as Databricks Certified Data Engineer Associate/Professional, or an Azure or AWS certification.
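
As a rough illustration of the pipeline work described under Key Responsibilities, the minimal PySpark sketch below reads a raw source, applies a basic data quality check, and appends to a Delta table with additive schema evolution enabled. It is a sketch, not part of the posting: the path /mnt/raw/orders/, the table analytics.orders, and the order_id column are hypothetical.

    # Minimal sketch of a PySpark/Delta Lake ETL step on Databricks.
    # Paths, table names, and columns are hypothetical placeholders.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("orders_etl").getOrCreate()

    # Ingest a raw structured source (hypothetical location).
    raw = spark.read.json("/mnt/raw/orders/")

    # Simple quality check and transformation: drop rows missing the key,
    # and stamp each record with its ingestion time.
    clean = (
        raw.filter(F.col("order_id").isNotNull())
           .withColumn("ingested_at", F.current_timestamp())
    )

    # Append to a Delta table; mergeSchema allows additive schema evolution.
    (clean.write.format("delta")
          .mode("append")
          .option("mergeSchema", "true")
          .saveAsTable("analytics.orders"))

Note that mergeSchema only covers additive changes such as new columns; replacing or narrowing a schema in Delta Lake typically goes through ALTER TABLE or the overwriteSchema option instead.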
