Job Description: AWS Data Engineer (6+ Years Experience)

Position: Senior Data Engineer - AWS
Experience: 6+ years

About the Role
We are seeking a highly skilled AWS Data Engineer with strong expertise in designing, building, and optimizing large-scale data pipelines and data lake/warehouse solutions on AWS. The ideal candidate will have extensive experience in data engineering, ETL development, cloud-based data platforms, and modern data architecture practices.

Key Responsibilities
- Design, build, and maintain scalable data pipelines and ETL workflows using AWS services.
- Develop, optimize, and maintain data lake and data warehouse solutions (e.g., S3, Redshift, Glue, Athena, EMR, Snowflake on AWS).
- Work with structured and unstructured data from multiple sources, ensuring data quality, governance, and security.
- Collaborate with data scientists, analysts, and business stakeholders to enable analytics and AI/ML use cases.
- Implement best practices for data ingestion, transformation, storage, and performance optimization.
- Monitor and troubleshoot data pipelines to ensure reliability and scalability.
- Contribute to data modeling, schema design, partitioning, and indexing strategies.
- Support real-time and batch data processing using tools such as Kinesis, Kafka, or Spark.
- Ensure compliance with security and regulatory standards (IAM, encryption, GDPR, HIPAA, etc.).

Required Skills & Experience
- 6+ years of experience in data engineering, including at least 3 years on the AWS cloud ecosystem.
- Strong programming skills in Python, PySpark, or Scala.
- Hands-on experience with AWS services:
  - Data Storage: S3, DynamoDB, RDS, Redshift
  - Data Processing: Glue, EMR, Lambda, Step Functions
  - Query & Analytics: Athena, Redshift Spectrum, QuickSight
  - Streaming: Kinesis / MSK (Kafka)
- Strong experience with SQL (query optimization, stored procedures, performance tuning).
- Knowledge of ETL/ELT tools (Glue, AWS Data Pipeline, Informatica, Talend; dbt preferred).
- Experience with data modeling (dimensional modeling, star/snowflake schemas).
- Knowledge of DevOps practices for data (CI/CD, infrastructure as code using Terraform/CloudFormation).
- Familiarity with monitoring and logging tools (CloudWatch, Datadog, ELK, Prometheus).
- Strong understanding of data governance, lineage, and cataloging (Glue Data Catalog, Collibra, Alation).

Preferred Skills (Good to Have)
- Experience with Snowflake, Databricks, or Apache Spark on AWS.
- Exposure to machine learning pipelines (SageMaker, Feature Store).
- Knowledge of containerization and orchestration (Docker, Kubernetes, ECS, EKS).
- Exposure to Agile methodology and DataOps practices.
- AWS certifications (AWS Certified Data Analytics - Specialty, Solutions Architect, or Big Data - Specialty).

Education
- Bachelor's or Master's degree in Computer Science, Information Technology, Data Engineering, or a related field.