Business Intelligence Data Engineer
Model N
- Hyderabad, Telangana
- Permanent
- Full-time
- Architect and Build Pipelines: Design, develop, and maintain automated ETL/ELT pipelines to ingest data from diverse sources (ERP, CRM, Billing systems).
- Data Modeling: Design and implement scalable data models (Star Schema, Data Vault, or OBT) that support complex financial reporting, ensuring high performance and data integrity.
- Workflow Orchestration: Lead the transition from legacy manual processes to robust, automated pipelines. Use Python and AWS native orchestration to engineer scalable infrastructure that powers high-availability data products.
- Optimization & Scaling: Continuously improve data ingestion throughput and query performance to handle increasing volumes of Financial, Sales, and Marketing data.
- Data Governance & Quality: Implement custom Python-based validation frameworks and CloudWatch monitoring to ensure gold-standard accuracy for financial metrics like ARR, NRR, and Churn.
- Cross-Functional Collaboration: Partner with BI Analysts and functional teams to translate business requirements into technical data specifications and architectural designs.
- DevOps Integration: Maintain and promote code quality through version control (Git), CI/CD pipelines, and rigorous documentation of the data lineage.
- Semantic Layer: Institutionalize KPI definitions and metric governance by building a unified semantic layer; ensure data consistency across Finance and GTM systems to eliminate reporting silos and maintain a single source of truth.
- Security & Compliance: Ensure all financial data pipelines adhere to strict security standards, encryption, and access control policies.
- 4+ years of experience in data engineering, backend development, or data architecture.
- Proven track record of building and scaling production-grade data pipelines.
- Experience working cross-functionally to support strategic initiatives.
- Bachelor's degree in computer science, software engineering, or a related technical field. Master's degree in a technical discipline preferred.
- Advanced SQL: Expert-level ability to write complex, performant queries and stored procedures.
- Programming: Strong proficiency in Python for data engineering and API integrations.
- AWS Mastery: Strong hands-on experience building and scaling production-grade pipelines using the AWS stack (S3, Glue, Redshift, Lambda, or Athena).
- Data Architecture: Mastery of data warehousing concepts, dimensional modeling, and Lakehouse architecture.
- Data Pipeline Automation: Proven experience designing and managing complex task dependencies and distributed workflows. Proficiency in using industry-standard orchestration engines to ensure resilient, scalable, and observable data movement.
- BI Support: Expertise in developing robust backend data models to support enterprise reporting. Proficiency in optimizing analytical query performance, managing tabular schemas, and establishing unified metric definitions to ensure data consistency across visualization tools.
- DevOps: Solid experience with Git and an understanding of CI/CD practices for data deployments.