Data Engineer

FM India

  • Bangalore, Karnataka
  • Permanent
  • Full-time
Job Description

About us:
We are a highly successful 190-year-old, Fortune 500 commercial property insurance company of 6,000+ employees with a unique focus on science and risk engineering. Businesses worldwide trust our expertise to protect their assets, relying on our comprehensive risk assessments and robust, engineering-based insurance solutions to safeguard against fire, natural disasters, and other perils. Serving over a quarter of the Fortune 500 and major corporations globally, we deliver data-driven strategies that enhance resilience, ensure business continuity, and empower organizations to thrive.

FM India is a strategic location for driving our global operational efficiency. Our presence in India allows us to leverage the country's talented workforce and advance our capabilities to serve our clients better. We have diverse corporate functions that emphasize research, advanced technologies like AI and analytics, risk engineering, finance, marketing, HR, etc., working together to provide innovative solutions and nurture lasting relationships, from co-workers to clients.

Role Title: Data Engineer

Position Summary:
The Data Engineer is a key role on the Data Analytics team, responsible for developing, maintaining, and supporting the data pipelines, data models, and data assets that enable analytics, reporting, and downstream data consumption. This role focuses on implementing well-defined data solutions using approved technologies, following established standards for data quality, security, and reliability. The Data Engineer creates and manages data infrastructure and is responsible for data pipeline design, implementation, and data verification. Along with the team, the Data Engineer ensures the highest standards of data quality, security, and compliance.

Additionally, the Data Engineer will implement methods to improve data reliability and quality and will combine raw information from different sources into consistent data sets. Incumbents are learning relevant technologies, display personal accountability for successful outcomes, and support quality efforts within the team. They are receptive and responsive to mentoring from more senior team members on design and development techniques, enhancements, and support efforts. The Data Engineer will need to become versed in DataOps, especially where Agile development, DevOps, and Continuous Improvement intersect. The Data Engineer I is the entry-level position in the Data Engineer job family; those holding this position are typically assigned to work as part of a project team.

Job Responsibilities:

Data Engineering & Development:
  • Develop working knowledge of structured data sources within each product journey (Underwriting and Risk; Client Service, Sales and Marketing; Claims; Account and Location Engineering).
  • Partner with Data Analytics team members, developers, solution architects, business analysts, data engineers, data analysts, and data scientists to understand data and reporting needs.
  • Create and use data models as a means toward developing and documenting code.
  • Learn technologies such as the Fabric Platform, Synapse Analytics Platform, Azure services, Kafka, and others as required.
  • Validate code through detailed and disciplined testing.
  • Participate in peer code reviews to ensure solutions are accurate.
  • Ensure tables and views are designed for data integrity, efficiency and performance, and are easy to comprehend.
Move and Store Data:
  • Contribute to the design, development, and maintenance of data pipelines and integrations for structured and unstructured data sources across enterprise product domains (e.g., Underwriting, Risk, Claims, Client Services, Sales and Marketing), working under the guidance of senior engineers.
  • Develop and support ETL/ELT solutions using approved cloud and analytics platforms (e.g., Synapse, SQL Server, Azure services such as Data Factory, Fabric, and data lakes), following established architectures, standards, and best practices.
  • Apply standard testing practices and participate in peer reviews to help ensure data solutions meet expectations for accuracy, performance, scalability, and maintainability, with coaching and feedback from more experienced team members.
  • Implement defined data flows and infrastructure pipelines to support the movement and storage of structured and unstructured data into and out of Data Analytics data assets, adhering to documented designs and operational procedures.
  • Design data models and data flows into and out of Data Analytics databases.
  • Understand and design data relationships between business and data subject areas.
  • Follow standards for naming conventions, code documentation and code review.
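For candidates unfamiliar with the pattern, the ETL/ELT pipeline work described above can be sketched in miniature as extract, transform, and load steps. All names, records, and schemas below are hypothetical illustrations, not FM systems (sqlite3 simply stands in for a warehouse):

```python
# Minimal ETL sketch: take semi-structured source records, transform
# them into a consistent tabular shape, and load them into a
# relational store (an in-memory sqlite3 database stands in for a
# data warehouse).
import sqlite3

# Hypothetical raw records, as they might arrive from a source system.
RAW_RECORDS = [
    {"policy_id": "P-001", "premium": "1200.50", "region": "south"},
    {"policy_id": "P-002", "premium": "980",     "region": "North"},
]

def transform(record):
    # Normalize types and casing so downstream consumers see
    # consistent data regardless of how the source formatted it.
    return (
        record["policy_id"],
        float(record["premium"]),
        record["region"].strip().lower(),
    )

def load(rows):
    # Create the target table and bulk-insert the transformed rows.
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE policy (policy_id TEXT PRIMARY KEY, premium REAL, region TEXT)"
    )
    conn.executemany("INSERT INTO policy VALUES (?, ?, ?)", rows)
    conn.commit()
    return conn

conn = load(transform(r) for r in RAW_RECORDS)
total = conn.execute("SELECT SUM(premium) FROM policy").fetchone()[0]
print(total)  # 2180.5
```

In a production pipeline the same shape appears at larger scale: the extract step reads from source systems, the transform step enforces the approved data model, and the load step writes to governed data assets.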
Data Quality, Reliability & Operations:
  • Investigate reported data quality issues, helping identify root causes and assisting with fixes under guidance from senior team members.
  • Assist with data profiling, cleansing, and transformation tasks in collaboration with analytics and business partners.
  • Monitor data pipelines and systems for basic performance and reliability issues, escalating concerns and supporting optimization efforts.
  • Help resolve production issues by following established validation, testing, and deployment procedures; provide routine production support.
  • Work with DBAs and teammates to support warehouse configuration, maintenance, and day-to-day operations.
  • Learn and apply foundational data modeling concepts, including understanding relationships between business and data subject areas.
  • Follow established standards for naming conventions, documentation, version control, and code review.
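The data-profiling and quality-monitoring duties above often reduce to simple batch checks run before data is loaded. The sketch below shows one such check; the column names and records are hypothetical examples, not FM's actual schema:

```python
# Sketch of a basic data-quality check: profile a batch of rows for
# missing values and duplicate keys before loading it downstream.

# Hypothetical batch, including two deliberate quality problems.
rows = [
    {"claim_id": "C-1", "amount": 5000.0},
    {"claim_id": "C-2", "amount": None},    # missing amount
    {"claim_id": "C-2", "amount": 7500.0},  # duplicate key
]

def profile(batch, key):
    # Count duplicate key values and null fields across the batch.
    seen, duplicates, nulls = set(), 0, 0
    for row in batch:
        if row[key] in seen:
            duplicates += 1
        seen.add(row[key])
        nulls += sum(1 for v in row.values() if v is None)
    return {"rows": len(batch), "duplicate_keys": duplicates, "null_values": nulls}

report = profile(rows, key="claim_id")
print(report)  # {'rows': 3, 'duplicate_keys': 1, 'null_values': 1}
```

A report like this is what gets escalated or investigated with senior team members when the counts exceed what the pipeline's standards allow.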
Collaboration & Delivery:
  • Collaborate with product owners, solution architects, business analysts, data analysts, data scientists, and other engineers to understand basic data and reporting requirements and help translate them into technical tasks.
  • Assist with data exploration and transformation activities under guidance from senior team members.
  • Support data cleansing efforts by helping identify and correct data issues.
  • Perform data profiling to help surface data anomalies and quality concerns.
  • Participate in hybrid agile delivery practices, including backlog refinement, task estimation, prioritization, and raising potential conflicts or risks.
  • Assist with data preparation tasks for analytics, reporting, and modeling.
  • Help support users and production data applications by responding to questions and troubleshooting issues.
  • Support developers, data analysts, and data scientists who need to access and work with data in the data warehouse.
  • Analyze reported data quality issues and assist in identifying potential root causes.
  • Monitor system performance and escalate opportunities for optimization to senior engineers.
  • Help monitor storage capacity and system reliability.
  • Assist in resolving production issues by following established validation, testing, and deployment processes.
  • Communicate clearly and professionally with users, teammates, and management.
  • Provide ad hoc data extracts and basic analysis to support tactical business needs.
Participate in effective execution of team priorities:
  • Assist in identifying assigned work and documenting tasks in the team backlog, following established intake processes.
  • Maintain and organize assigned tasks according to provided priorities and guidance from senior team members or leads.
  • Communicate potential conflicts or competing priorities to a lead or manager in a timely manner.
  • Support production data processes by helping investigate issues, performing routine checks, and escalating problems as needed.
  • Collaborate with product and analytics partners to stay informed of documented database or business process changes that may impact data interpretation, under guidance from the team.
Skill and Experience:
  • 1-2 years of experience required to perform essential job functions.
  • Working knowledge of SQL and relational database concepts, including basic normalization and dimensional modeling.
  • Experience supporting or building data warehouses and/or data lakes.
  • Familiarity with cloud-based data platforms and services (e.g., Fabric, Synapse Analytics, Azure services, Kafka).
  • Programming experience using SQL and Python; exposure to structured and semi-structured data formats (e.g., JSON).
  • Foundational understanding of DataOps concepts such as testing, deployment, monitoring, and operational support.
  • Relational and non-relational database theory.
  • Knowledge of Azure Cloud applications.
  • ETL/ELT pipeline design and build.
  • Programming languages and data technologies: SQL, Python, Spark, Kafka, Azure, JSON.
Must Have Skills:
  • SQL.
  • Python, Spark, Data lakes, Data warehouses.
  • Fabric Platform.
  • Ability to read and create data models.
  • ETL/ELT Pipeline development experience.
Good to Have Skills:
  • Kafka, Synapse Analytics Platform.
  • Relational (3NF) and dimensional data modeling theory (Inmon/Kimball).
  • AI/LLM knowledge, experience preferred.
