
AWS Data Engineer
- Haryana
- Permanent
- Full-time

Key Responsibilities:
- Design, develop, and maintain scalable data pipelines and ETL workflows on AWS.
- Work extensively with Snowflake, Python, and SQL to process, transform, and analyze data.
- Optimize and maintain existing data architectures to ensure high performance and cost efficiency.
- Integrate data from multiple sources into a unified data warehouse.
- Collaborate with data analysts, data scientists, and business stakeholders to meet data requirements.
- Implement data governance, quality checks, and best practices in data engineering.
- Leverage AWS services such as Redshift and Lambda for data processing tasks (a minimal Lambda sketch follows this list).
- Use DBT, where applicable, for data modeling and transformation within an ELT framework.
- Troubleshoot data pipeline issues and perform root cause analysis.
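
To make the pipeline work above concrete, here is a minimal sketch of an S3-triggered Lambda cleaning step of the kind this role involves. It is an illustration only: the S3 put-event trigger, the `CLEAN_BUCKET` environment variable, and all bucket and key names are hypothetical placeholders, not part of this role's actual stack.

```python
import csv
import io
import os

import boto3  # AWS SDK for Python, available in the Lambda runtime

s3 = boto3.client("s3")

# Hypothetical destination bucket, supplied via Lambda configuration.
CLEAN_BUCKET = os.environ.get("CLEAN_BUCKET", "example-clean-zone")


def handler(event, context):
    """Triggered by an S3 put event: read a raw CSV, drop incomplete
    rows, and stage the cleaned file in another bucket."""
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
    reader = csv.DictReader(io.StringIO(body))

    # Keep only rows where every column is populated.
    clean_rows = [row for row in reader if all(row.values())]

    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=reader.fieldnames)
    writer.writeheader()
    writer.writerows(clean_rows)

    s3.put_object(
        Bucket=CLEAN_BUCKET,
        Key=f"clean/{key}",
        Body=out.getvalue().encode("utf-8"),
    )
    return {"rows_written": len(clean_rows)}
```

In practice, the cleaned output would typically be staged for a Redshift COPY or a Snowflake load rather than consumed directly.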

Required Skills:
- Snowflake – Data warehouse development, performance tuning, and query optimization (see the Python sketch after this list)
- Python – Scripting for automation and data processing
- SQL – Advanced query writing and optimization skills
- AWS Glue – Serverless ETL job development and data cataloging
- DBT – Data transformation and modeling
- AWS Redshift – Data warehousing and analytics
- AWS Lambda – Serverless data processing
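
As an illustration of how the Snowflake, Python, and SQL skills above combine in practice, here is a minimal sketch using the snowflake-connector-python package. The connection parameters are placeholders, and the `orders` table and its columns are hypothetical.

```python
import snowflake.connector  # pip install snowflake-connector-python

# Placeholder connection parameters, not real credentials.
conn = snowflake.connector.connect(
    account="your_account",
    user="your_user",
    password="your_password",
    warehouse="ANALYTICS_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)

try:
    cur = conn.cursor()
    # A typical optimization habit: filter and aggregate inside
    # Snowflake rather than pulling raw rows into Python.
    cur.execute(
        """
        SELECT order_date, COUNT(*) AS orders, SUM(amount) AS revenue
        FROM orders
        WHERE order_date >= DATEADD(day, -7, CURRENT_DATE)
        GROUP BY order_date
        ORDER BY order_date
        """
    )
    for order_date, orders, revenue in cur.fetchall():
        print(order_date, orders, revenue)
finally:
    conn.close()
```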

Qualifications:
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
- 1.6 – 6 years of hands-on experience in data engineering roles.
- Strong problem-solving skills and ability to work in fast-paced environments.
- Good communication and collaboration skills.