Company Description

Blend is a premier AI services provider, committed to co-creating meaningful impact for its clients through the power of data science, AI, technology, and people. With a mission to fuel bold visions, Blend tackles significant challenges by seamlessly aligning human expertise with artificial intelligence. The company is dedicated to unlocking value and fostering innovation for its clients by harnessing world-class people and data-driven strategy. We believe that the power of people and AI can have a meaningful impact on your world, creating more fulfilling work and projects for our people and clients. For more information, visit

Job Description

You will be a key member of our Data Engineering team, focused on designing, developing, and maintaining robust data solutions in on-prem environments. You will work closely with internal teams and client stakeholders to build and optimize data pipelines and analytical tools using Python, Scala, SQL, Spark, and Hadoop ecosystem technologies.
This role requires deep hands-on experience with big data technologies in traditional data centre environments (non-cloud).

What you'll be doing

- Design, build, and maintain on-prem data pipelines to ingest, process, and transform large volumes of data from multiple sources into data warehouses and data lakes
- Develop and optimize Scala/Spark and SQL jobs for high-performance batch and real-time data processing
- Ensure the scalability, reliability, and performance of data infrastructure in an on-prem setup
- Collaborate with data scientists, analysts, and business teams to translate their data requirements into technical solutions
- Troubleshoot and resolve issues in data pipelines and data processing workflows
- Monitor, tune, and improve Hadoop clusters and data jobs for cost and resource efficiency
- Stay current with on-prem big data technology trends and suggest enhancements to improve data engineering capabilities

Qualifications

- Bachelor's degree in software engineering or a related field
- 5+ years of experience in data engineering or a related domain
- Strong programming skills in Python and Scala
- Expertise in SQL with a solid understanding of data warehousing concepts
- Hands-on experience with Hadoop ecosystem components (e.g., HDFS, Hive, Apache Hudi, Iceberg, and Delta Lake)
- Proven ability to design and manage data solutions in on-prem environments (no cloud dependency)
- Experience integrating third-party data from different sources (including APIs)
- Proficiency in Airflow or a similar orchestration tool
- Strong problem-solving skills with an ability to work independently and collaboratively
- Excellent communication skills and the ability to engage with technical and non-technical stakeholders