
Sr. Software Engineer - Hadoop
- Bangalore, Karnataka
- Permanent
- Full-time
- Develop data pipelines by ingesting various structured and unstructured data sources into Palantir Foundry.
- Participate in end-to-end project lifecycles, from requirements analysis to deployment and operations.
- Act as a business analyst for developing requirements related to Foundry pipelines.
- Review code developed by other data engineers, ensuring adherence to platform standards and functional specifications.
- Document technical work professionally and create high-quality technical documentation.
- Balance technical feasibility with strict business requirements.
- Deploy applications on Foundry platform infrastructure with clearly defined checks.
- Implement changes and bug fixes following the client's change management framework.
- Work in DevOps project setups following Agile principles (e.g., Scrum).
- Act as third-level support for critical applications, resolving complex incidents and debugging problems across the full stack.
- Work closely with business users, data scientists, and analysts to design physical data models.
- Provide support in designing ETL/ELT processes with databases and Hadoop platforms.
- Bachelor's degree or higher in Computer Science, Engineering, Mathematics, Physical Sciences, or related fields.
- 5+ years of experience in system engineering or software development.
- 3+ years of data engineering experience focused on ETL work involving databases and Hadoop platforms.
- Hadoop General: Deep knowledge of distributed file system concepts, MapReduce principles, and distributed computing. Familiarity with Spark and its differences from MapReduce.
- Data Management: Proficient in technical data management tasks such as reading, transforming, and storing data, including experience with XML/JSON and REST APIs.
- Spark: Experience in launching Spark jobs in both client and cluster modes, with an understanding of property settings that impact performance.
- Application Development: Familiarity with HTML, CSS, JavaScript, and basic visual design competencies.
- SCC/Git: Experienced in using source code control systems like Git.
- ETL/ELT: Experience developing ETL/ELT processes, including loading data from enterprise-level RDBMS systems (e.g., Oracle, DB2, MySQL).
- Authorization: Basic understanding of user authorization, preferably with Apache Ranger.
- Programming: Proficient in Python, with expertise in at least one high-level language (e.g., Java, C, Scala). Must have experience using REST APIs.
- SQL: Expertise in SQL for manipulating database data, including views, functions, stored procedures, and exception handling.
- AWS: General knowledge of the AWS stack (EC2, S3, EBS, etc.).
- IT Process Compliance: Experience with SDLC processes, change control, and ITIL (incident, problem, and change management).
- Strong problem-solving skills with an analytical mindset.
- Excellent communication skills to collaborate with both technical and non-technical teams.
- Experience working in Agile/DevOps teams, utilizing Scrum principles.
- Ability to thrive in a fast-paced, dynamic environment while managing multiple tasks.
- Strong organizational skills with attention to detail.
- Flexible work arrangements, free spirit, and emotional positivity
- Agile self-determination, trust, transparency, and open collaboration
- All the support needed to realize business goals
- Stable employment with a great atmosphere and ethical corporate culture
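To give candidates a concrete sense of the data-management requirement above (reading, transforming, and storing JSON as it might arrive from a REST API), here is a minimal Python sketch. The record layout and field names are hypothetical illustrations, not part of this role's actual pipelines:

```python
import json

# Hypothetical raw payload, as it might arrive from a REST API or file drop.
raw = '[{"id": 1, "amount": "10.50"}, {"id": 2, "amount": "3.25"}]'

def transform(payload: str) -> list:
    """Parse a JSON payload, cast string amounts to floats, and tag each record."""
    records = json.loads(payload)
    return [
        {"id": r["id"], "amount": float(r["amount"]), "currency": "USD"}
        for r in records
    ]

clean = transform(raw)
# clean now holds typed records ready to be written to a data store.
```

In a Foundry or Spark pipeline the same parse-cast-enrich pattern would typically run over a distributed dataset rather than an in-memory list.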