
Underwriting Data Architect
- Bangalore, Karnataka
- Permanent
- Full-time
- Design future-ready, cloud-native data architectures that scale with business growth and enable real-time insight delivery.
- Build the architectural foundations that empower Underwriting, Portfolio Management, Actuarial, and Digital teams with timely, trustworthy data.
- Operationalize the shift from siloed, ad hoc pipelines to a shared Analytical Layer powered by reusable data products and governed metadata.
- Define scalable, secure enterprise architecture aligned with strategic growth priorities.
- Own and evolve our conceptual, logical, and physical data models across domains.
- Align architecture with underwriting use cases and analytics applications across business lines.
- Set best practices for cloud-native modeling, metadata, reference data, and lineage design.
- Define standardized blueprints for data lakes, warehouses, marts, and the Analytical Layer.
- Support master data management and high-reliability design for mission-critical flows.
- Collaborate with engineering teams to develop performant ETL/ELT pipelines and APIs.
- Ensure that architectural patterns allow for modular reuse, autoscaling, and operational monitoring.
- Contribute to CI/CD practices, data validation automation, and integration playbooks.
- Partner with the Data Governance team to embed stewardship, metadata, and controls into architecture.
- Ensure designs support lineage, audit, and regulatory requirements (e.g., GDPR).
- Participate in architecture review boards and provide strategic recommendations.
- Evaluate emerging technologies and lead POCs for architectural innovation.
- Serve as technical advisor on system modernization and data platform evolution.
- Champion a product-oriented approach to building data for discoverability, trust, and reuse.
- Master's degree in Computer Science, Data Science, or another rigorous technical discipline.
- 5+ years of experience in data architecture, data engineering, or solution architecture within global, matrixed organizations.
- Deep expertise in data modeling (dimensional, canonical, normalized) and reference/metadata frameworks.
- Hands-on experience designing and implementing data pipelines and warehouse/lake architectures, with proficiency in programming languages (e.g., Python, SQL, Spark).
- Strong proficiency in Azure, Snowflake, Databricks, or similar cloud platforms.
- Working knowledge of data governance practices, quality metrics, and stewardship integration.
- Track record of supporting regulated environments (e.g., GDPR) and enforcing privacy-by-design.
- Exceptional communication and stakeholder alignment skills.
- Proven ability to balance long-term vision with near-term deliverables under tight timelines.
Reference Code: 134421