Remote - Data Engineering Analyst
World Business Lenders
- New Delhi
- Contract
- Full-time
- Please note that all resumes should be submitted in English.
- Improve data pipelines through automation, adding quality checks that ensure data accuracy, consistency, and reliability throughout the entire data processing workflow, reducing manual intervention and minimizing errors.
- Migrate legacy systems to new platforms, including thorough analysis, seamless data transfer, and integration to enhance system performance and maintain continuity without disrupting existing operations.
- A Bachelor’s degree in Computer Science, Engineering, Information Systems, or a related field is preferred — but we also highly value equivalent hands-on experience!
- 4 to 7 years of experience as a Data Engineer.
- We value practical, hands-on expertise that goes beyond just theoretical knowledge.
- Experience working with cloud-based data platforms and environments
- Strong SQL skills with data modeling experience for data warehouses
- Strong Python skills (especially notebooks) for building and maintaining data workflows
- Experience developing and maintaining ETL/ELT processes
- Experience working with data warehouses used by reporting/business teams
- Experience using Git/GitHub for version control in collaborative environments
- Understanding of data engineering best practices:
- Pipeline orchestration and dependency management
- Data quality, validation, and monitoring fundamentals (see the sketch after this list)
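To give a concrete sense of what we mean by validation fundamentals, here is a minimal Python sketch using pandas; the `order_id`, `order_date`, and `amount` columns are hypothetical stand-ins for whatever a real pipeline carries:

```python
import pandas as pd

def validate_orders(df: pd.DataFrame) -> list[str]:
    """Run basic quality checks; return a list of failure descriptions."""
    failures = []
    # Completeness: key columns must not contain nulls.
    for col in ("order_id", "order_date", "amount"):
        if df[col].isna().any():
            failures.append(f"{col} contains nulls")
    # Uniqueness: the primary key must not repeat.
    if df["order_id"].duplicated().any():
        failures.append("duplicate order_id values")
    # Plausibility: amounts should be non-negative.
    if (df["amount"] < 0).any():
        failures.append("negative amount values")
    return failures
```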
- Hands-on data migration experience:
- Schema mapping and transformation
- Data reconciliation and validation (a particular strength we look for; see the sketch after this list)
- Data quality checks, integrity validation, and issue resolution
- Backfills and historical data handling
- Cutover planning and execution
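For reconciliation, we expect comfort writing checks along these lines. This is a minimal pandas sketch, not a prescribed approach; it assumes both tables share the same columns and a single-column key:

```python
import pandas as pd

def reconcile(source: pd.DataFrame, target: pd.DataFrame, key: str) -> dict:
    """Compare row counts, key coverage, and row content between two tables."""
    src = source.set_index(key).sort_index()
    tgt = target.set_index(key).sort_index()
    common = src.index.intersection(tgt.index)
    # Hash each row so content drift is detectable without column-by-column diffs.
    src_hash = pd.util.hash_pandas_object(src.loc[common], index=False)
    tgt_hash = pd.util.hash_pandas_object(tgt.loc[common], index=False)
    return {
        "source_rows": len(src),
        "target_rows": len(tgt),
        "missing_in_target": len(src.index.difference(tgt.index)),
        "unexpected_in_target": len(tgt.index.difference(src.index)),
        "content_mismatches": int((src_hash.values != tgt_hash.values).sum()),
    }
```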
- Experience building and maintaining automated data pipelines:
- Scheduling, orchestration, and failure handling (illustrated in the Airflow sketch below)
- Workflow reliability, monitoring, and repeatability
- Experience with Microsoft Fabric or the broader Microsoft Azure data ecosystem
- Experience with orchestration tools such as Apache Airflow, Azure Data Factory, or Fabric pipelines
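As one concrete example of scheduling, dependency management, and failure handling, here is a minimal Apache Airflow sketch (2.4+ for the `schedule` argument); the DAG id, cron expression, and task bodies are illustrative only:

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    pass  # pull from the source system

def load():
    pass  # write to the warehouse

with DAG(
    dag_id="daily_orders_pipeline",      # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="0 6 * * *",                # run daily at 06:00
    catchup=False,
    default_args={
        "retries": 3,                    # failure handling: retry transient errors
        "retry_delay": timedelta(minutes=5),
    },
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task            # load runs only after extract succeeds
```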
- Experience with data quality and observability practices:
- Validation frameworks, alerting, SLAs, monitoring (see the freshness sketch below)
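A freshness/SLA check is one example of what we mean by observability. This is a minimal sketch with a hypothetical two-hour SLA and a placeholder alert hook:

```python
from datetime import datetime, timedelta, timezone

FRESHNESS_SLA = timedelta(hours=2)  # hypothetical SLA: data no more than 2 hours old

def send_alert(message: str) -> None:
    # Placeholder: wire this to the team's real channel (Slack, email, PagerDuty).
    print(f"[ALERT] {message}")

def check_freshness(last_loaded_at: datetime) -> bool:
    """Return True if the latest load is within SLA; alert otherwise.
    Expects a timezone-aware timestamp."""
    lag = datetime.now(timezone.utc) - last_loaded_at
    if lag > FRESHNESS_SLA:
        send_alert(f"Freshness SLA breached: latest data is {lag} old")
        return False
    return True
```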
- Experience optimizing performance in data environments:
- Query tuning, partitioning, indexing, cost optimization
- Familiarity with modern data formats and large-scale processing:
- Parquet, Delta, incremental processing patterns (see the sketch below)
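By incremental processing patterns we mean, for instance, watermark-driven loads into a partitioned Parquet dataset. A minimal pyarrow sketch, where `updated_at` and `load_date` are hypothetical columns:

```python
from datetime import datetime

import pyarrow.dataset as ds

def load_increment(source_dir: str, target_dir: str, watermark: datetime) -> int:
    """Append only rows newer than the last processed watermark."""
    source = ds.dataset(source_dir, format="parquet")
    # Predicate pushdown: skips files/row groups entirely below the watermark.
    new_rows = source.to_table(filter=ds.field("updated_at") > watermark)
    ds.write_dataset(
        new_rows,
        target_dir,
        format="parquet",
        partitioning=["load_date"],          # partition output by load date
        existing_data_behavior="overwrite_or_ignore",
    )
    # Return the appended row count; the caller would persist
    # max(updated_at) from new_rows as the next watermark.
    return new_rows.num_rows
```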
- Experience integrating with external systems:
- REST APIs, authentication, retries, error handling (see the session sketch below)
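For external integrations, we look for defensive patterns like the session below, using Python `requests` with `urllib3` retries. The endpoint and bearer token are placeholders, and `allowed_methods` requires urllib3 1.26+:

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

def make_session(token: str) -> requests.Session:
    """HTTP session with bearer auth and exponential-backoff retries."""
    retry = Retry(
        total=5,
        backoff_factor=1,                           # waits ~1s, 2s, 4s, ... between attempts
        status_forcelist=(429, 500, 502, 503, 504),
        allowed_methods=("GET",),                   # only retry idempotent calls
    )
    session = requests.Session()
    session.mount("https://", HTTPAdapter(max_retries=retry))
    session.headers["Authorization"] = f"Bearer {token}"
    return session

def fetch_page(session: requests.Session, url: str, page: int) -> dict:
    resp = session.get(url, params={"page": page}, timeout=30)
    resp.raise_for_status()                         # surface non-retryable failures loudly
    return resp.json()
```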
- Exposure to CI/CD practices for data workflows:
- Git-based deployments, PR workflows, environment promotion
- Clear communication: can explain data issues, tradeoffs, and results to both technical and business stakeholders.
- Ownership & accountability: takes end-to-end responsibility for pipelines, data quality, and outcomes.
- Problem-solving mindset: able to debug complex data issues and work through ambiguity independently.
- Collaboration: works effectively across engineering, analytics, and business teams.
- Adaptability: comfortable operating in evolving environments (migration, changing requirements, new tools).
- Proficiency in SQL, Python, and cloud technologies, with strong all-around data competency.
- Base salary paid in USD.
- Enjoy Paid Time Off (PTO) after just 6 months of service.
- Full-time opportunity.
- Enjoy the freedom of a completely remote work environment!