Sr Data Engineer
Thermo Fisher Scientific
- Hyderabad, Telangana
- Permanent
- Full-time
Join our innovative team as a Senior Developer and make a meaningful impact on global science and healthcare. In this role, you'll design and develop sophisticated software solutions that contribute to improving health and environmental outcomes. You'll work with cutting-edge technologies across cloud platforms, data integration, AI, and enterprise systems to deliver robust applications that power our essential operations.

Working in an agile environment, you'll collaborate with cross-functional teams to architect, implement, and optimize solutions across various platforms including AWS, Azure, ServiceNow, and enterprise systems. You'll have the opportunity to drive technical innovation while supporting the growth of other developers and establishing best practices for software development excellence.

Take ownership of the full software development lifecycle – from gathering requirements and designing solutions to deployment and continuous improvement. Whether developing cloud applications, creating data pipelines, or building AI-powered features, you'll help shape the future of scientific technology while growing your career in serving science.

REQUIREMENTS:
- Proven experience (5+ years) in data engineering with a strong understanding of modern data architecture, pipelines, and operations.
- 5+ years of demonstrated expertise in Databricks, Apache Spark, Python, and SQL, including pipeline creation, automation, and monitoring
- 5+ years of hands-on experience building and administering relational and lakehouse database solutions, including Delta Lake, Oracle, and Amazon Redshift
- 5+ years of experience working with AWS services, including Spark, Glue, Kafka, Elasticsearch, Lambda, S3, Redshift, and others
- Strong problem-solving skills with debugging, performance tuning, and AI/ML model deployment in cloud environments
- Prior experience with tools like Power BI, Cognos, SQL Server, and Oracle is a strong plus
- Deep understanding of data modeling, metadata management, and data governance frameworks
- Experience building, scheduling, and monitoring data workflows using Apache Airflow
- 5+ years of solid experience in DevOps/DataOps or equivalent roles, including version control platforms (such as GitHub) and continuous integration and delivery pipelines
- Demonstrated experience in leading engineering projects and managing project lifecycles
- A self-starter with the drive and ability to deliver complex solutions rapidly
- Ability to communicate effectively with technical and non-technical personnel, both orally and in writing
- Ensure stable and timely data pipelines that support period-end and quarter-end financial close processes, enabling accurate reporting and reconciliations
- Monitor, troubleshoot, and optimize finance workflows during close cycles to ensure data completeness, performance, and SLA compliance
- Outstanding interpersonal skills to translate sophisticated technical and AI concepts into business-aligned solutions
- Demonstrable ability to collaborate with team members across IT, business, and AI/analytics departments
- Passion for continuous learning, innovation, and staying ahead of evolving cloud, AI, and data technologies