
Manager
- Gurgaon, Haryana
- Permanent
- Full-time

Responsibilities:
- Design, build, and maintain scalable data pipelines and ETL/ELT processes for analytics and ML workloads (see the orchestration sketch after this list).
- Implement MLOps frameworks to manage model lifecycle (training, deployment, monitoring, and retraining).
- Apply DevOps best practices (CI/CD, containerization, infrastructure as code) to ML and data engineering workflows.
- Develop and optimize data models, feature stores, and ML serving architectures.
- Collaborate with AI/ML teams to integrate models into production environments.
- Support agent development using MCP, OpenAPI-to-MCP wrappers, and A2A (Agent-to-Agent) communication protocols.
- Ensure data quality, governance, and compliance with security best practices.
- Troubleshoot and optimize data workflows for performance and reliability.
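
As a rough illustration of the pipeline work described above, here is a minimal sketch of a daily ELT job using Apache Airflow's TaskFlow API (Airflow 2.x); the DAG, task, and data names are hypothetical placeholders rather than part of any actual stack.

```python
# Minimal daily ELT sketch using Airflow's TaskFlow API (Airflow 2.x).
# Names ("daily_sales_elt", the sample records) are illustrative only.
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def daily_sales_elt():
    @task
    def extract() -> list[dict]:
        # In practice this would pull from a source system or object store.
        return [{"order_id": 1, "amount": 120.0}, {"order_id": 2, "amount": 80.5}]

    @task
    def transform(rows: list[dict]) -> list[dict]:
        # Drop invalid records and enrich the rest before loading.
        return [{**r, "amount_usd": r["amount"]} for r in rows if r["amount"] > 0]

    @task
    def load(rows: list[dict]) -> None:
        # In practice this would write to a warehouse table via a provider hook.
        print(f"loading {len(rows)} rows")

    load(transform(extract()))


daily_sales_elt()
```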

Requirements:
- Core:
  - 6+ years in analytics and data engineering roles.
  - Proficiency in SQL, Python, and data pipeline orchestration tools (e.g., Airflow, Prefect).
  - Experience with distributed data processing frameworks (e.g., Spark, Databricks).
- ML/MLOps:
  - Experience deploying and maintaining ML models in production.
  - Knowledge of MLOps tools (MLflow, Kubeflow, SageMaker, Vertex AI, etc.).
- DevOps:
  - Hands-on experience with CI/CD (Jenkins, GitHub Actions, GitLab CI).
  - Proficiency with Docker, Kubernetes, and cloud-based deployment (AWS, Azure, GCP).
- Specialized:
  - Experience with MCP and OpenAPI-to-MCP wrapper integrations (see the wrapper sketch after this list).
  - Experience working with A2A protocols in agent development.
  - Familiarity with agent-based architectures and multi-agent communication patterns.
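
To make the Specialized items concrete, below is a minimal sketch of an OpenAPI-to-MCP wrapper, assuming the official MCP Python SDK (`mcp` package): a single REST operation of the kind an OpenAPI spec would describe is exposed as one MCP tool. The inventory service URL and tool name are hypothetical.

```python
# Sketch of an OpenAPI-to-MCP wrapper using the official MCP Python SDK.
# The upstream inventory service and its URL are hypothetical placeholders.
import httpx
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("inventory-wrapper")


@mcp.tool()
def get_stock_level(sku: str) -> dict:
    """Look up current stock for a SKU via the upstream REST API."""
    # One OpenAPI operation (GET /items/{sku}) exposed as one MCP tool.
    response = httpx.get(f"https://inventory.example.com/items/{sku}", timeout=10)
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    # Serve over stdio so an MCP-capable agent can discover and call the tool.
    mcp.run()
```

An A2A integration would typically sit a level above a wrapper like this, with agents exchanging tasks among themselves rather than individual tool calls.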

Other qualifications:
- Master’s degree in Computer Science, Data Engineering, or a related field.
- Experience in real-time analytics and streaming data pipelines (Kafka, Kinesis, Pub/Sub); see the consumer sketch below.
- Exposure to LLM-based systems or intelligent agents.
- Strong problem-solving skills and the ability to work in cross-functional teams.
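
For the streaming item above, a minimal sketch of a Kafka consumer using the kafka-python client; the topic, broker address, and consumer group are hypothetical placeholders.

```python
# Minimal streaming-consumer sketch using the kafka-python client.
# Topic, broker, and group id are illustrative, not a real deployment.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "orders",                                  # hypothetical topic
    bootstrap_servers="localhost:9092",
    group_id="analytics-consumers",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
)

# Maintain a running total as order events arrive.
total = 0.0
for message in consumer:
    event = message.value
    total += event.get("amount", 0.0)
    print(f"offset={message.offset} running_total={total:.2f}")
```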