
Data Engineer
- Bangalore, Karnataka
- Permanent
- Full-time
- Drive design, implementation, testing and release of products
- Build big data pipelines and analytics infrastructure on Azure with Data Factory, Databricks, Event Hubs, Data Explorer, Cosmos DB and Azure RDBMS platforms
- Build secure networking and reliable infrastructure for High Availability and Disaster Recovery
- Build big data streaming solutions with hundreds of concurrent publishers and subscribers
- Collaborate closely with Product, Design, and Engineering teams to build new features
- Participate in an Agile environment using Scrum software development practices, code review, automated unit testing, end-to-end testing, continuous integration, and deployment
- Think about how to solve problems at scale and build fault-tolerant systems that leverage telemetry and metrics
- Investigate, fix, and maintain code as needed for production issues
- Operate a high-reliability, high-uptime service and participate in the on-call rotation
- BS degree in Computer Science, Engineering, or an equivalent field
- 5+ years of experience in a software engineering-related field
- 2+ years of experience with Azure data services: Azure Data Explorer, Azure Data Factory, Databricks, and Event Hubs
- 2+ years of experience with data warehousing/data modeling on an RDBMS such as SQL Server
- Proficiency in cloud platform deployment with Azure ARM templates or Terraform
- Experience using Git or other version control systems and CI/CD systems
- Focus on writing high-quality code that is easy for others to maintain
- Strong understanding of and experience with agile methodologies
- Experience with Cosmos DB or other NoSQL platforms
- Experience building large data lakes and data warehouses
- Proficiency in modern server-side development using programming languages such as C#
- Experience building cloud applications on Azure
- Strong interest or documented experience in large-scale microservice architectures on Kubernetes
- Proficiency in big data processing with Apache Spark using Python or Scala
- Proficiency in data streaming applications with Event Hubs/Kafka and Spark Streaming
- Proficiency in data pipeline orchestration with Data Factory or similar
- A track record of being a self-starter; individual and team responsibility are the main drivers of our development work