
Big Data Engineer (AWS)
- Bangalore, Karnataka
- Permanent
- Full-time
- Proficiency in Python (Mandatory skill) for efficient data processing and pipeline creation.
- Experience with PySpark (Mandatory skill) for scalable data analytics and processing of large datasets.
- Expertise in Databricks (Mandatory skill) for collaborative data transformation in a cloud-based environment.
- Strong knowledge of Amazon Redshift for creating and managing data warehouses and ensuring efficient data storage.
- Experience with AWS Glue for developing and managing ETL jobs to prepare data for analytics.
- Familiarity with AWS Lambda for executing code in response to triggers, ensuring a flexible data processing environment.
- Experience with the Hadoop ecosystem for processing large volumes of data and managing data storage effectively.
- Advanced SQL skills for querying, managing, and structuring data stored in relational databases.
- Design and implement scalable data processing pipelines to handle large data sets and support business goals.
- Collaborate with clients and team members to gather data requirements and ensure accurate data structuring.
- Optimize data architectures for performance, reliability, and scalability within cloud environments.
- Utilize AWS services to develop innovative data solutions tailored to client needs.
- Conduct regular data monitoring and maintenance to ensure data integrity and security.
- Analyze and interpret complex data sets, providing actionable insights to stakeholders.
- Stay current with emerging trends and technologies in big data engineering to drive continuous improvement.
- Participate in code reviews and contribute to maintaining coding standards and practices.
Expertia AI Technologies