
Sr Machine Learning Engineer
- Hyderabad, Telangana
- Permanent
- Full-time
- Engineer end-to-end ML pipelines—data ingestion, feature engineering, training, hyper-parameter optimisation, evaluation, model registration and automated promotion—using Kubeflow, SageMaker Pipelines, the OpenAI SDK or equivalent MLOps stacks.
- Harden research code into production-grade micro-services, packaging models in Docker/Kubernetes and exposing secure REST, gRPC or event-driven APIs for consumption by downstream applications.
- Build and maintain full-stack AI applications by integrating model services with lightweight UI components, workflow engines or business-logic layers so insights reach users with sub-second latency.
- Optimise performance and cost at scale—selecting appropriate algorithms (gradient-boosted trees, transformers, time-series models, classical statistics), applying quantisation/pruning, and tuning GPU/CPU auto-scaling policies to meet strict SLA targets.
- Instrument comprehensive observability—real-time metrics, distributed tracing, drift and bias detection, and user-behaviour analytics—enabling rapid diagnosis and continuous improvement of live models and applications.
- Embed security and responsible-AI controls (data encryption, access policies, lineage tracking, explainability and bias monitoring) in partnership with Security, Privacy and Compliance teams.
- Contribute reusable platform components—feature stores, model registries, experiment-tracking libraries—and evangelise best practices that raise engineering velocity across squads.
- Perform exploratory data analysis and feature ideation on complex, high-dimensional datasets to inform algorithm selection and ensure model robustness.
- Partner with data scientists to prototype and benchmark new algorithms, offering guidance on scalability trade-offs and production-readiness while co-owning model-performance KPIs.
- 3-5 years of experience in AI/ML and enterprise software development.
- Comprehensive command of machine-learning algorithms—regression, tree-based ensembles, clustering, dimensionality reduction, time-series models, deep-learning architectures (CNNs, RNNs, transformers) and modern LLM/RAG techniques—with the judgment to choose, tune and operationalise the right method for a given business problem.
- Proven track record selecting and integrating AI SaaS/PaaS offerings and building custom ML services at scale.
- Expert knowledge of GenAI tooling: vector databases, RAG pipelines, prompt-engineering DSLs and agent frameworks (e.g., LangChain, Semantic Kernel).
- Proficiency in Python and Java; containerisation (Docker/Kubernetes); cloud platforms (AWS, Azure or GCP); and modern DevOps/MLOps tooling (GitHub Actions, Bedrock/SageMaker Pipelines).
- Strong business-case skills—able to model TCO vs. NPV and present trade-offs to executives.
- Exceptional stakeholder management; can translate complex technical concepts into concise, outcome-oriented narratives.
- Experience in the biotechnology or pharmaceutical industry is a strong plus.
- Published thought-leadership or conference talks on enterprise GenAI adoption.
- Master’s degree in Computer Science and/or Data Science.
- Familiarity with Agile methodologies and Scaled Agile Framework (SAFe) for project delivery.
- Master’s degree with 6-11+ years of experience in Computer Science, IT or a related field.
- Bachelor’s degree with 8-13+ years of experience in Computer Science, IT or a related field.
- Certifications in GenAI/ML platforms (AWS AI, Azure AI Engineer, Google Cloud ML, etc.) are a plus.
- Excellent analytical and troubleshooting skills.
- Strong verbal and written communication skills.
- Ability to work effectively with global, virtual teams.
- High degree of initiative and self-motivation.
- Ability to manage multiple priorities successfully.
- Team-oriented, with a focus on achieving team goals.
- Ability to learn quickly, be organized and detail oriented.
- Strong presentation and public speaking skills.