Hiring for Databricks_Bangalore

Responsibilities:
- Design and implement data pipelines using Apache Spark on Databricks
- Build scalable ETL/ELT workflows for batch and streaming data ingestion
- Optimize data workflows for performance and cost efficiency
- Integrate data from various sources, including APIs, relational databases, and cloud storage (S3, ADLS, etc.)
- Collaborate with data scientists and analysts to prepare clean, curated, and reliable data
- Implement data quality, data governance, and data cataloging solutions
- Monitor, troubleshoot, and improve the performance of data pipelines in production
- Apply DevOps practices to manage and deploy data workflows using CI/CD tools
- Work with stakeholders to define data architecture and modeling standards
- Ensure compliance with data security, privacy, and regulatory requirements