Applied AI Engineer (Automation)
Fusemachines
- Pune, Maharashtra
- Permanent
- Full-time
Responsibilities

- Design & Deploy: Design, develop, and deploy tailored AI and automation solutions aligned with client objectives.
- Build Workflows & Services: Translate business problems into production-grade AI workflows and services using Python, workflow automation tools (n8n/Make/Zapier or similar), LLM platforms/APIs (e.g., OpenAI, IBM watsonx.ai, Amazon Bedrock), and retrieval systems.
- Agentic Systems: Build and deploy agentic workflows using LangChain, LangGraph, and Google ADK, including tool calling and structured outputs.
- Retrieval & Knowledge Systems: Implement RAG pipelines using vector databases and search technologies (e.g., Pinecone, Elasticsearch, pgvector) and graph databases when appropriate.
- Prototype → Production: Ship fast prototypes, then harden them into scalable systems (testing, reliability, deployment, monitoring) independently or with a team.
- Client Partnership: Participate in discovery, run technical calls and demos when needed, and communicate tradeoffs clearly to both client and internal stakeholders.
- Ongoing Support & Iteration: Improve deployed solutions through feature work, bug fixes, monitoring, prompt/model improvements, and additional automations.
- Documentation: Produce clear technical documentation, client demos, and internal playbooks to enable reuse and scalability.
- Continuous Learning: Stay current on LLM tooling and delivery best practices to improve quality and speed.
What Success Looks Like

- Solutions consistently meet or exceed client expectations and show measurable impact (time saved, cost reduced, improved conversion/deflection, faster cycle time).
- Clients trust you as a go-to engineering partner and expand usage of deployed AI workflows.
- Deliveries are production-ready: monitored, testable, documented, and maintainable.
Required Qualifications

- 3–8 years of software or AI engineering experience (mid-to-senior).
- 2–3+ years of experience in AI automation, generative AI, or agentic AI.
- Strong Python engineering skills and experience building APIs/services (e.g., FastAPI).
- Hands-on experience integrating LLMs (e.g., OpenAI APIs or equivalents), including prompt design, structured outputs, and basic evaluation practices.
- Experience with at least one workflow automation platform (n8n, Make, Zapier, or similar) and building reliable integrations.
- Familiarity with RAG fundamentals and retrieval systems (embeddings, vector search); exposure to vector databases and/or Elasticsearch.
- Production engineering fundamentals: Docker, cloud deployment (AWS/GCP/Azure/IBM), and experience with async/queuing patterns (e.g., Celery, Redis, Kafka).
- Comfort operating in a client-facing environment: technical calls, demos, and collaborating with cross-functional stakeholders.
Nice to Have

- Experience with fine-tuning LLMs or other ML models; broader ML exposure is a plus.
- Familiarity with observability and tracing (e.g., LangSmith, OpenTelemetry) and prompt/version lifecycle management.
- Experience with graph databases / knowledge graphs.
- Familiarity with data governance and AI governance concepts (PII handling, auditability, access controls, risk awareness).
- Prior consulting experience or work in fast-paced startup environments.