Forward Deployed Engineer – Data & AI Presales
Prodapt
- Chennai, Tamil Nadu
- Permanent
- Full-time
Responsibilities

Key Responsibilities

Customer Engagement & Presales Solutioning
- Participate in requirement discovery workshops, stand-ups with customer teams, and problem-framing sessions as part of deal pursuits.
- Align on problem statements, success metrics, and AI readiness assessments with client stakeholders.
- Lead AI art-of-the-possible sessions and executive demos that translate platform capabilities into business outcomes.
- Develop technical proposals, solution architectures, effort estimations, and competitive differentiators under tight deal timelines.
- Support the Americas Presales Lead across RFPs, proactive pursuits, and strategic multi-year transformation deals.

Rapid Prototyping & Applied AI Engineering
- Build quick PoCs (AI agents, intelligent apps, dashboards, and integrations) to validate ideas and demonstrate value during deal cycles.
- Integrate APIs, data sources, LLM services, and platform capabilities into working prototypes.
- Demo early mockups and functional prototypes to gather client feedback and iterate rapidly.
- Design end-to-end AI solution architectures covering GenAI (LLMs, RAG, AI agents, prompt engineering, fine-tuning), classical ML (classification, regression, clustering, time series), NLP, and computer vision.
- Build interactive demo applications using Streamlit, Gradio, FastAPI, and React/Next.js to showcase AI solutions in client workshops.
- Leverage and contribute to organisational accelerators (Agent Craft, Agent Factory, AI tools) to speed up presales delivery.
- Note: a minimum of one year of dedicated AI presales experience is required, crafting and demonstrating AI solutions for deal pursuits, not just delivery.
Data Science, Data Engineering & Exploration
- Perform rapid data exploration, profiling, and quality assessment on client datasets to evaluate AI readiness and opportunity sizing.
- Apply data science techniques to support presales: EDA, statistical modelling, hypothesis validation, feature engineering, and rapid model prototyping.
- Design modern data architectures bridging engineering and science: lakehouse patterns, feature stores, real-time pipelines, and governed data products.
- Build data pipelines, transformation layers, and data wrangling workflows that feed AI/ML workloads.
- Set up pipelines and databases, and apply evaluation metrics (accuracy, latency, bias) to validate solution approaches.
- Work across Databricks, Snowflake, and Google Cloud Data & Analytics (BigQuery, Vertex AI, Dataflow, Dataplex).

Software Engineering & DevOps Excellence
- Bring production-grade software engineering discipline to all presales assets: clean architecture, TDD, documentation, reproducibility, and scalability.
- Develop across the full stack: backend (Python, FastAPI, REST APIs), frontend (React/Next.js, streaming UIs), and DevOps (Docker, Terraform, CI/CD).
- Apply MLOps and LLMOps practices, including experiment tracking (MLflow, Weights & Biases), model registries, evaluation harnesses with quality gates in CI, tracing (OpenTelemetry), and monitoring (Prometheus/Grafana).
- Implement guardrails, safety policies, and red-teaming approaches for AI solutions.
- Build and maintain reusable demo environments, accelerators, estimation templates, and proof-of-concept kits.
Data Modernisation for AI
- Design migration strategies from legacy platforms (Oracle, Teradata, Netezza) to AI-ready cloud architectures.
- Articulate the modernisation journey (Data Warehouse → Lakehouse → AI Platform) with clear value at each stage.
- Ensure modernised architectures are optimised for AI workloads: feature engineering, training pipelines, model serving, and feedback loops.
- Integrate AI-assisted modernisation techniques, including automated code conversion, intelligent data mapping, and AI-powered testing.

Thought Leadership & Digital Presence
This is a non-negotiable requirement. The candidate must have an active and demonstrable digital presence focused on AI and data, evidenced by a meaningful combination of the following:
- Published technical blogs, articles, or newsletters on AI/ML and data topics (Medium, Substack, personal blog, LinkedIn articles).
- Active conference speaking — talks, panels, or workshops at industry events, meetups, or webinars.
- Open-source contributions or publicly available projects on GitHub related to AI/ML.
- Strong LinkedIn or X (Twitter) presence with regular, substantive posts on AI trends, techniques, and industry perspectives.
- YouTube, podcast, or video content on AI topics.
Requirements

Required Skills & Experience

Background & Experience
- 6–12 years of experience with a strong foundation in data engineering or data science, combined with solid software engineering skills across frontend, backend, and DevOps.
- Minimum of 1 year of dedicated AI presales experience: designing, prototyping, and demonstrating AI solutions as part of deal pursuits, client workshops, or strategic presales engagements. Delivery-only AI experience does not qualify.
- Proven ability to rapidly prototype end-to-end AI/ML solutions, from data exploration and feature engineering through model development, deployment, and interactive demos.
- Hands-on experience with at least two of the three core platforms: Databricks, Snowflake, and Google Cloud (BigQuery, Vertex AI, Dataflow).

Technical Skills
- Applied AI Engineering: deep hands-on knowledge of GenAI (LLMs, RAG, prompt engineering, fine-tuning, AI agents, structured outputs, guardrails), classical ML, NLP, and computer vision. Proficiency with frameworks including PyTorch, TensorFlow, Hugging Face, LangChain, LlamaIndex, and OpenAI APIs.
- Data Science: EDA, statistical modelling, hypothesis testing, feature engineering, and rapid prototyping. Proficiency in Python and SQL is essential; R is a plus.
- Data Engineering: pipeline design, ETL/ELT, streaming architectures, data modelling, and feature store design. Experience with Spark, Kafka, dbt, Airflow, or equivalent.
- Software Development: full-stack capability, including Python/FastAPI backend, React/Next.js frontend, clean architecture, and TDD. The candidate must write production-grade, testable, deployable code, not just notebook-level prototypes.
- DevOps & MLOps/LLMOps: Docker, Terraform, CI/CD, experiment tracking (MLflow, W&B), evaluation harnesses (RAGAS, BLEU/ROUGE), tracing (OpenTelemetry), monitoring (Prometheus/Grafana), and red-teaming.

Thought Leadership & Digital Presence
- An established and verifiable digital presence focused on AI is mandatory. Candidates must provide links to their published work, profiles, or portfolios as part of the application.
- Demonstrated ability to simplify complex AI topics for business and technical audiences.

Communication & Presales Skills
- Excellent communication: able to present to CxO stakeholders, lead workshops, run demos, and write compelling proposals.
- Strong problem-framing and consulting skills: can translate ambiguous client needs into structured solution approaches.
- Experience creating solution architectures, technical decks, and effort estimations for large deals ($5M+).
- Comfort working with Americas clients across time zones (IST evening overlap required).

Preferred Qualifications
- Certifications in Databricks (ML Associate/Professional), Google Cloud (Professional ML Engineer / Data Engineer), Snowflake (SnowPro), or cloud-native AI certifications.
- Published speaker at recognised industry conferences (Data + AI Summit, Google Cloud Next, Snowflake Summit, PyCon, or equivalent).
- Active open-source contributor with a visible GitHub profile.
- Experience building and contributing to internal accelerator platforms (agent frameworks, use case factories, evaluation suites).
- Familiarity with AI safety, responsible AI frameworks, and AI governance.
- Exposure to telecom, BFSI, or enterprise verticals.
- Experience with Japanese enterprise clients or cross-cultural engagement is a significant plus.