Job Description
• 8+ years of experience in ETL, including a minimum of 3 years with PySpark/Spark SQL
• Strong SQL skills, with experience in Azure SQL and Synapse
• Strong skills in developing Synapse notebooks and Azure Databricks
• Experience with ETL tools (SAS/Informatica/ADF Data Flows)
• Experience implementing Synapse Pipelines/Data Flows
• Use the interactive Synapse/Databricks notebook environment with SQL; examine external data sets and query existing data sets using SQL
• Perform ETL transformations and loads using Synapse notebooks/Azure Databricks, applying built-in functions to manipulate data
• Perform ETL jobs on streaming data sources; parameterize a code base and manage task dependencies; submit and monitor jobs using the REST API or the command-line interface
• Manage Delta Lake from the interactive Synapse notebook environment; create, append, and upsert data into a data lake (illustrated in the Delta Lake sketch below)

Primary Responsibilities
• Develop ETL processing and data extraction using Synapse notebooks/Azure Databricks.
• Develop Apache Spark SQL using Python to examine and query datasets (illustrated in the Spark SQL sketch below).
• Develop DataFrames for ETL transformations and loads.
• Capture audit information during all phases of the ETL transformation process.
• Write and maintain documentation of the ETL processes via process flow diagrams.
• Collaborate with business users, support team members, and other developers throughout the organization to help everyone understand issues that affect the data warehouse.
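Spark SQL sketch. A minimal, illustrative example of the kind of work described above: examining and querying a data set with Spark SQL and applying built-in functions in a DataFrame transformation, with a simple audit column. The file paths, table name, and column names (order_ts, amount) are placeholders, not details from this posting.

    # Illustrative sketch only; paths and column names are placeholders.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

    # Read an external data set (placeholder path) and register it for SQL.
    orders = spark.read.parquet("/data/raw/orders")
    orders.createOrReplaceTempView("orders")

    # Examine the data set with Spark SQL.
    spark.sql("SELECT COUNT(*) AS row_count FROM orders").show()

    # Apply built-in functions in a DataFrame transformation,
    # capturing a basic audit column along the way.
    transformed = (
        orders
        .withColumn("order_date", F.to_date("order_ts"))
        .withColumn("amount", F.round(F.col("amount"), 2))
        .withColumn("load_ts", F.current_timestamp())  # audit column
    )

    # Load the result to a curated zone (placeholder path).
    transformed.write.mode("overwrite").parquet("/data/curated/orders")

In a Synapse or Databricks notebook a SparkSession named spark is already provided, so the builder call simply returns the existing session.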
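Delta Lake sketch. A minimal sketch of creating, appending to, and upserting into a Delta Lake table, assuming the delta-spark package is available (as it is on Databricks and Synapse Spark pools). The path and the id/name/amount columns are hypothetical.

    # Illustrative sketch only; path and schema are placeholders.
    from delta.tables import DeltaTable
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # Create: write an initial Delta table (placeholder path).
    initial = spark.createDataFrame(
        [(1, "alice", 100.0), (2, "bob", 250.0)],
        ["id", "name", "amount"],
    )
    initial.write.format("delta").mode("overwrite").save("/data/delta/accounts")

    # Append: add new rows without touching existing ones.
    more = spark.createDataFrame([(3, "carol", 75.0)], ["id", "name", "amount"])
    more.write.format("delta").mode("append").save("/data/delta/accounts")

    # Upsert: MERGE incoming changes by key — update matches, insert the rest.
    updates = spark.createDataFrame(
        [(2, "bob", 300.0), (4, "dave", 50.0)],
        ["id", "name", "amount"],
    )
    target = DeltaTable.forPath(spark, "/data/delta/accounts")
    (
        target.alias("t")
        .merge(updates.alias("u"), "t.id = u.id")
        .whenMatchedUpdateAll()
        .whenNotMatchedInsertAll()
        .execute()
    )

The MERGE pattern is what makes Delta Lake upserts idempotent by key, which is why it is the usual choice for incremental loads into a data lake.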