Job Description: Big Data Developer

Key Responsibilities:

Big Data Development:
- Design, develop, and maintain big data processing pipelines using tools such as Apache Hadoop, Spark, and Kafka.
- Implement data ingestion and processing solutions for large, diverse data sources.
- Apply data transformation, processing, and optimization techniques to ensure high performance.

Data Modeling & ETL:
- Develop and maintain complex ETL processes to integrate and transform data from structured, semi-structured, and unstructured sources.
- Perform data modeling and database optimization, ensuring scalability and fault tolerance.

Data Architecture & Optimization:
- Work closely with Data Architects to build robust, scalable data architectures.
- Troubleshoot and optimize SQL queries, data processing workflows, and big data jobs for maximum performance.

Collaboration & Communication:
- Collaborate with data scientists, business analysts, and other stakeholders to understand business requirements and implement solutions accordingly.
- Communicate technical concepts effectively to non-technical stakeholders.

Testing & Quality Assurance:
- Develop and implement unit and integration tests for data processing jobs.
- Ensure data accuracy, consistency, and integrity across all data workflows.

Documentation:
- Document code, procedures, and technical specifications for future reference.
- Provide insights into the design and implementation of data pipelines and architectures.
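As an illustration of the ETL and data-integrity work described above, here is a minimal sketch of a transformation step that ingests semi-structured input (JSON lines) and normalizes it into flat records. It is a simplified, framework-free example, not a description of any specific pipeline at the company; the field names (`user_id`, `amount`) are hypothetical.

```python
import json

def transform_records(raw_lines):
    """Normalize semi-structured JSON lines into flat records.

    Illustrative only: field names are hypothetical. Malformed rows and
    rows missing the primary key are dropped rather than failing the
    whole batch, a common choice in large-scale ingestion.
    """
    records = []
    for line in raw_lines:
        try:
            obj = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip unparseable input
        if "user_id" not in obj:
            continue  # integrity check: require a primary key
        records.append({
            "user_id": str(obj["user_id"]),          # normalize key type
            "amount": float(obj.get("amount", 0.0)), # coerce to numeric
        })
    return records

# Example: one valid row, one malformed row, one row missing the key.
raw = ['{"user_id": 1, "amount": "2.5"}', 'not json', '{"amount": 3}']
clean = transform_records(raw)  # → [{"user_id": "1", "amount": 2.5}]
```

In a production setting the same logic would typically run inside a distributed engine such as Spark, but the transformation and validation rules are the part the developer owns.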
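The testing and quality-assurance duties above can be illustrated with a small example: a unit test for a data-processing step that asserts invariants (key uniqueness, which duplicate survives) rather than exact engine behavior. The `deduplicate` helper and its field names are hypothetical, shown only to indicate the style of test expected.

```python
def deduplicate(records, key):
    """Keep the first occurrence of each key value, preserving order.

    A hypothetical step from a larger pipeline, used here to show how
    data-processing logic can be unit tested in isolation.
    """
    seen = set()
    out = []
    for rec in records:
        k = rec[key]
        if k not in seen:
            seen.add(k)
            out.append(rec)
    return out

def test_deduplicate_preserves_first_and_uniqueness():
    rows = [{"id": 1, "v": "a"}, {"id": 2, "v": "b"}, {"id": 1, "v": "c"}]
    result = deduplicate(rows, "id")
    assert [r["id"] for r in result] == [1, 2]  # unique keys, order kept
    assert result[0]["v"] == "a"                # first occurrence wins
```

Writing transforms as pure functions over plain records, as here, is what makes this kind of fast, deterministic testing possible before the logic is deployed into a distributed job.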