
Senior Data Engineer - Go and Python + Apache Kafka
- Hyderabad, Telangana
- Permanent
- Full-time
- Designing, coding, testing, debugging, documenting, and supporting the application consistent with established specifications and business requirements to deliver business value
- Aggregate and transform raw data from a variety of data sources to fulfill functional and non-functional business needs
- Drive Automation and create reusable components to help build efficiency in the team
- Take initiative, be a self-starter, and perform as an integral part of the development team
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies regarding flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so
- Education and Experience
- Bachelor’s degree or equivalent professional experience
- 2+ years of hands-on experience in ELT development and data analysis using dbt, SQL, and Snowflake
- Data Engineering and Modeling
- Experience designing and orchestrating workflows using Apache Airflow
- Proficient in building scalable data transformation pipelines using dbt for Snowflake
- Proven analytical skills in data extraction, manipulation, and reporting
- Domain Expertise
- Solid understanding of the healthcare domain, including claims, member, and provider data
- Technical Proficiencies
- Cloud Technologies: Hands-on experience with Microsoft Azure, particularly Azure Data Factory (ADF) and Azure Databricks
- Data Transformation and Modeling: Expertise in dbt (Data Build Tool) for modular and maintainable data pipelines
- CI/CD and DevOps: Familiar with Jenkins and GitHub Actions for continuous integration and deployment
- AI and Automation: Exposure to Generative AI tools including Microsoft Copilot Studio, Power Automate, and M365 Copilot for intelligent automation and productivity enhancement
- Data Warehousing: Proficient in Snowflake for cloud-native data warehousing and analytics
- Programming and Big Data: Skilled in Python for scripting and automation; experience with Apache Spark and Scala for distributed data processing
- Demonstrated ability to design solutions independently based on high-level architecture
- Demonstrated ability to prioritize and manage multiple tasks
- Demonstrated ability to quickly learn new business areas or technical skills based on project needs
- Demonstrated flexibility to adapt to change, support continuous improvement, and perform multiple responsibilities as required by the project team
- Excellent verbal and written communication and presentation skills
- Proven facilitation, critical-thinking, problem-solving, decision-making, and analytical skills
- Visualization and ETL Tools
- Experience with Informatica for ETL development
- Familiarity with data visualization tools such as Tableau and/or Power BI