Data Engineer
Synoptek
- Ahmedabad, Gujarat
- Permanent
- Full-time
- Design and implement end-to-end data pipelines utilizing Azure services, including Data Factory, Databricks, Synapse Analytics, and Event Hubs, to ingest, process, and transform data from various sources (e.g., Oracle, MercuryGate, D365, NetSuite, telematics systems, sensors).
- Leverage Kafka and StreamSets to ensure real-time data ingestion and low latency processing for critical operational insights.
- Develop and maintain data models that optimize data storage, retrieval, and performance within Azure Data Lake Storage and Snowflake.
- Write, optimize, and troubleshoot SQL queries within the Snowflake environment.
- Collaborate with business stakeholders to understand data requirements and translate them into actionable data solutions.
- Implement data governance best practices to ensure data quality, consistency, and compliance with industry regulations.
- Monitor and optimize data pipelines for performance, scalability, and cost-effectiveness.
- Develop and maintain data quality checks and data cleansing routines to ensure data integrity throughout the data lifecycle.
- Develop reports and visualizations using Tableau to communicate data insights effectively to technical and non-technical audiences.
- Stay current with the latest advancements in Azure data services, data management practices, and transportation & logistics data solutions.
- 4+ years of experience as a Data Engineer with a strong focus on Microsoft Azure data technologies.
- Proven experience with data pipelines, data ingestion, transformation, and loading (ETL/ELT) processes.
- In-depth knowledge of data architecture, data modeling, and data governance principles.
- Proficiency in SQL Server, MySQL, PostgreSQL, and NoSQL databases (DynamoDB, MongoDB).
- 2+ years of experience with the Snowflake data warehouse.
- Experience with Python for data manipulation and automation.
- Expertise in cloud-based data warehousing solutions like Snowflake.
- Working knowledge of data streaming technologies like Kafka and StreamSets.
- Familiarity with data visualization tools (Tableau, Power BI, Qlik Sense) a plus.
- Experience working in the transportation and logistics domain is highly preferred.
- Excellent communication and collaboration skills to work effectively with cross-functional teams.
- Strong analytical and problem-solving skills with a passion for data-driven solutions.
- Cloud Platforms: Azure, AWS
- Data warehouse: Snowflake
- Data Engineering Tools: Kafka, StreamSets, Databricks, Spark
- Data Management: ETL/ELT, Data Architecture, Data Modeling, Data Governance
- Programming languages: Python, Java, or Scala
- Databases: SQL Server, MySQL, PostgreSQL, DynamoDB, MongoDB, etc.
- Bachelor’s degree in Computer Science, Information Technology, or a related field from an accredited college or university preferred
- In lieu of an undergraduate degree, experience substitutes at a 1:1 ratio: one year of college equals one year of work experience, and vice versa
- Customarily has at least one year of job-related experience
- Familiarity with acquiring and managing data from various sources, including primary and secondary data sources.
- Synoptek core DNA behaviors:
- Clarity: Possesses excellent communication skills and makes a concerted effort to speak the customer’s language. Ability to field questions with concise, well-constructed responses
- OwnIT: Shows integrity, innovation, and accountability in completing daily assignments
- Results: Solutions focused and driven to resolve conflict quickly and precisely. Proactively looks for opportunities to contribute to the company’s business goals
- Growth: Willing to learn and ask questions. Constantly looking for new ways to improve yourself. Ability to adapt and grow in a fast-paced environment
- Team: Embraces both customers and colleagues as team members. Ability to be flexible, respectful, engaged and collaborative