- Take ownership of the technical aspects of implementing data pipeline and migration requirements, ensuring the platform is used to its fullest potential by designing and building applications around business stakeholder needs.
- Interface directly with stakeholders to gather requirements and own automated end-to-end data engineering solutions.
- Implement data pipelines to automate the ingestion, transformation, and augmentation of structured, unstructured, and real-time data, and provide best practices for pipeline operations.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, and re-designing infrastructure for greater scalability.
- Troubleshoot and remediate data quality issues raised by pipeline alerts or downstream consumers.
- Implement Data Governance best practices, applying comprehensive knowledge of functional and technical impact analysis.
- Provide advice and ideas for technical solutions and improvements to data systems.
- Create and maintain clear documentation on data models/schemas as well as transformation/validation rules.
- Implement tools that help data consumers extract, analyze, and visualize data faster through data pipelines.
- Lead the entire software lifecycle, including hands-on development, code reviews, testing, deployment, and documentation for batch ETLs.
- Work directly with our internal product/technical teams to ensure that our technology infrastructure is seamlessly and effectively integrated.
- Migrate current data applications and pipelines to the Cloud (AWS), leveraging PaaS technologies.
- 7 to 12+ years of hands-on experience in SQL database design, data architecture, ETL, Data Warehousing, Data Mart, Data Lake, Big Data, Cloud (AWS), and Data Governance domains.