
Big Data Developer
- India
- Permanent
- Full-time
- Collaborate with senior developers and data engineers to design, develop, test, and deploy scalable data processing pipelines and applications.
- Write clean, efficient, and well-documented code in Java and Python for various data ingestion, transformation, and analysis tasks.
- Utilize Apache Spark for distributed data processing, focusing on performance optimization and resource management.
- Work with Apache Iceberg tables for managing large, evolving datasets in our data lake, ensuring data consistency and reliability (see the illustrative sketch after this list).
- Assist in troubleshooting, debugging, and resolving issues in existing data pipelines and applications.
- Participate in code reviews, contributing to a high standard of code quality and best practices.
- Learn and adapt to new technologies and methodologies as the project requirements evolve.
- Contribute to the documentation of technical designs, processes, and operational procedures.
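For context, here is a minimal PySpark sketch of the kind of ingestion-and-write work described above. It assumes the Apache Iceberg Spark runtime is on the cluster classpath and a Hadoop-type catalog is configured; the catalog name, input path, and table name are hypothetical placeholders, not part of any specific project setup.

```python
# Minimal sketch only: assumes the Iceberg Spark runtime jar is available on the
# classpath; the catalog, path, and table names below are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("events-ingestion")
    # Register a Hadoop-type Iceberg catalog called "lake" (hypothetical name).
    .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.lake.type", "hadoop")
    .config("spark.sql.catalog.lake.warehouse", "/tmp/iceberg-warehouse")
    .getOrCreate()
)

# Ingest raw JSON events, apply a simple transformation, and write to Iceberg.
raw = spark.read.json("/data/raw/events/")  # hypothetical input path
cleaned = (
    raw.dropDuplicates(["event_id"])
       .withColumn("event_date", F.to_date("event_ts"))
)

# createOrReplace() creates the table if it does not yet exist; writes to an
# existing table would typically use .append() instead.
cleaned.writeTo("lake.analytics.events").createOrReplace()

spark.stop()
```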
- 2-5 years of relevant experience.
- Bachelor's degree in Computer Science, Software Engineering, Data Science, or a related technical field, or equivalent experience.
- Strong foundational knowledge of object-oriented programming principles.
- Proficiency in at least one of the following programming languages: Java or Python.
- Basic understanding of data structures, algorithms, and software development lifecycles.
- Familiarity with version control systems (e.g., Git).
- Eagerness to learn and a strong passion for software development and data technologies.
- Excellent problem-solving skills and attention to detail.
- Good communication and teamwork abilities.
- Familiarity with distributed computing concepts.
- Basic understanding of Apache Spark or experience with data processing frameworks.
- Exposure to cloud platforms (AWS, Azure, GCP).
- Knowledge of SQL and database concepts.
- Any experience or coursework related to data lakes, data warehousing, or Apache Iceberg.