Key Skills: AWS (Glue), Python, PySpark, ETL, SQL

Job Description:
- 4+ years of experience as a Data Engineer on AWS
- Strong technical expertise in Python and SQL
- Experience with big data tools such as Hadoop and Apache Spark (PySpark)
- Solid experience with AWS services such as CloudFormation, S3, Athena, Glue, Glue DataBrew, EMR/Spark, RDS, Redshift, DataSync, DMS, DynamoDB, Lambda, Step Functions, IAM, KMS, SM, EventBridge, EC2, SQS, SNS, Lake Formation, CloudWatch, and CloudTrail
- Responsible for building test, QA, and UAT environments using CloudFormation
- Build and implement CI/CD pipelines for the EDP platform using CloudFormation and Jenkins

Good to Have:
- Implement high-velocity streaming solutions and orchestration using Amazon Kinesis, AWS Managed Airflow (MWAA), and AWS Managed Kafka (MSK) (preferred)
- Solid experience building solutions on an AWS data lake/data warehouse
- Analyse, design, develop, and implement data ingestion pipelines in AWS
- Knowledge of implementing end-to-end ETL/ELT data solutions
- Ingest data from REST APIs into an AWS data lake (S3) and relational databases such as Amazon RDS, Aurora, and Redshift
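For candidates unfamiliar with the last requirement, the REST-API-to-S3 ingestion pattern can be sketched in a few lines of Python. This is a minimal illustration, not the employer's actual pipeline: the dataset name, bucket, and key layout are hypothetical, and boto3 is imported lazily so the pure helpers work without AWS credentials.

```python
import json
from datetime import datetime, timezone

def to_jsonl(records):
    """Serialize a list of dicts to newline-delimited JSON, the usual
    Athena/Glue-friendly layout for raw files landed in S3."""
    return "\n".join(json.dumps(r, sort_keys=True) for r in records) + "\n"

def partition_key(dataset, ts):
    """Build a Hive-style partitioned S3 key (hypothetical layout),
    e.g. raw/orders/dt=2024-01-15/part-000.jsonl."""
    return f"raw/{dataset}/dt={ts:%Y-%m-%d}/part-000.jsonl"

def ingest(records, dataset, bucket, ts=None):
    """Write API records to the data lake. The upload needs AWS
    credentials and an existing bucket, so boto3 is imported here
    rather than at module scope."""
    ts = ts or datetime.now(timezone.utc)
    body = to_jsonl(records)
    key = partition_key(dataset, ts)
    import boto3  # requires credentials; bucket name is an assumption
    boto3.client("s3").put_object(Bucket=bucket, Key=key,
                                  Body=body.encode("utf-8"))
    return key
```

In a Glue job or Lambda, `ingest` would run behind an IAM role granting `s3:PutObject` on the target bucket, with the date partition letting Athena and Glue crawlers prune scans.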