
Principal / Senior Solution Engineer
- Hyderabad, Telangana
- Permanent
- Full-time

Responsibilities:
- Collaborate with core engineering, customers, and solution engineering teams for functional and technical discovery sessions to understand requirements and architect TigerGraph-based solutions.
- Prepare and deliver compelling product demonstrations, live software prototypes, and proofs of concept showcasing TigerGraph's multi-modal graph + vector capabilities, including hybrid search for AI applications.
- Create and maintain public documentation, internal knowledge base articles, FAQs, and best practices for TigerGraph implementations.
- Design efficient graph schemas, develop GSQL queries and algorithms, and build prototypes that address customer requirements (e.g., Fraud Detection, Recommendation Engines, Knowledge Graphs, Entity Resolution, Anti-Money Laundering, and Cybersecurity).
- Optimize indexing strategies, partitioning, and query performance in TigerGraph's distributed environment, leveraging GSQL for parallel processing and real-time analytics.
- Lead large-scale production implementations of TigerGraph solutions for enterprise clients, ensuring seamless integration with existing systems such as Kafka for streaming, Kubernetes for orchestration, and cloud platforms.
- Provide expert guidance on Graph Neural Networks (GNNs), Retrieval-Augmented Generation (RAG), semantic search, and AI-driven optimizations to enhance customer outcomes.
- Troubleshoot complex issues in distributed systems, including networking, load balancing, and performance monitoring.
- Foster cross-functional collaboration, including data modeling sessions, whiteboarding architectures, and stakeholder management to validate solutions.
- Drive customer success through exceptional service, project management, and clear communication of TigerGraph's value in AI and enterprise use cases.

Requirements:
- Graph and Vector Data Science: Experience in applying graph algorithms, vector embeddings, and data science techniques for enterprise analytics.
- SQL Expertise: Experience in SQL for querying, performance tuning, and debugging in relational and graph contexts.
- Graph Databases and Platforms: Experience with graph database products such as TigerGraph, Neo4j, JanusGraph, or similar systems, with a focus on multi-modal graph + vector integrations.
- Programming & Scripting: Experience in Python, C++, and automation tools for task management, issue resolution, and GSQL development.
- HTTP/REST and APIs: Expertise in building and integrating RESTful services for database interactions.
- Linux and Systems: Strong background in Linux administration, scripting (bash/Python), and distributed environments.
- Kafka and Streaming: Experience with Kafka for real-time data ingestion and event-driven architectures.
- Cloud Computing: Experience with AWS, Azure, or GCP for virtualization, deployments, and hybrid setups.
- Graph Neural Networks (GNNs) and Graph Machine Learning: Hands-on with frameworks like PyTorch Geometric for predictive analytics on graphs.
- Retrieval-Augmented Generation (RAG) and Semantic Search: Building pipelines with vector embeddings and LLMs for AI applications.
- Multimodal Data Handling: Managing text, images, and video in graph + vector setups.
- Agile Methodologies and Tools: 3+ years with Scrum/Agile, JIRA, or Confluence.
- Presentation and Technical Communication: Advanced whiteboarding, architecture reviews, and demos.
- Cross-Functional Collaboration: Leading discovery, data modeling (UML, ER diagrams), and on-call incident management.
- Data Governance, Security, and Compliance: Knowledge of encryption, access controls, GDPR/HIPAA, and ethical AI practices.
- Big Data Processing Tools: Proficiency in Apache Spark, Hadoop, or Flink for distributed workloads.
- AI-Driven Database Management and Optimization: Skills in AI-enhanced query optimization and performance tuning.
- Monitoring & Observability Tools: 4+ years with Prometheus, Grafana, Datadog, or ELK Stack.
- Networking & Load Balancing: Proficient in TCP/IP, load balancers (NGINX, HAProxy), and troubleshooting.
- Kubernetes (K8s): Proficiency in container orchestration for scalable deployments.
- DevOps and CI/CD Pipelines: Advanced use of Git, Jenkins, or ArgoCD for automation.
- Real-Time Analytics and Streaming Integration: Beyond Kafka, experience with Flink or Pulsar.