IT Consulting
Infosys
- Bangalore, Karnataka
- Permanent
- Full-time
Responsibilities:
- Build and manage scalable data ingestion, transformation, and processing workflows
- Implement ETL/ELT pipelines using Databricks, Azure Data Factory, and related tools
- Develop data solutions using Azure Data Lake Storage (ADLS Gen2)
- Optimize Spark jobs for performance, cost, and scalability
- Implement best practices for data modeling, partitioning, and schema management
- Work with structured, semi-structured, and unstructured data
- Collaborate with business, analytics, and downstream BI teams
- Ensure data quality, reliability, and monitoring of data pipelines
- Support CI/CD, version control, and automated deployments

Additional Responsibilities:
- Databricks Certified Data Engineer (Associate / Professional)
- Experience with Delta Lake
- Knowledge of streaming frameworks (Structured Streaming, Event Hubs)
- Exposure to BI tools (Power BI, Tableau)
- Experience working in Agile / Scrum teams
- Basic understanding of Python best practices and libraries

Technical and Professional Requirements:
- Primary skills: Technology -> Data On Cloud - Platform -> Azure Data Lake (ADL)
- 3+ years of hands-on experience with Azure Databricks
- Strong expertise in PySpark / Spark SQL
- Experience with Microsoft Azure services, including:
  - Azure Data Factory (ADF)
  - Azure Data Lake Storage (ADLS)
  - Azure SQL / Synapse (good to have)
- Solid understanding of data lake / lakehouse architectures
- Strong SQL skills for data transformation and analytics
- Experience with workflow orchestration and scheduling
- Knowledge of data warehousing and data modeling concepts
- Familiarity with Git and CI/CD pipelines
- Excellent problem-solving and communication skills

Preferred Skills:
- Technology -> Cloud Platform -> Azure Analytics Services -> Azure Databricks

Educational Requirements:
- Bachelor of Engineering
- Bachelor of Technology
- Bachelor of Computer Science
- Bachelor of Science
- Bachelor of Computer Applications

Service Line: Data & Analytics Unit