
Data Engineer
- Gurgaon, Haryana
- Permanent
- Full-time
- Programming: Strong proficiency in Python and SQL, with additional experience in languages such as Java or Scala.
- Big Data Technologies: Hands-on experience with frameworks like Spark (PySpark), Kafka, Apache Hudi, Iceberg, Apache Flink, or similar tools for distributed data processing and real-time streaming.
- Cloud Platforms: Familiarity with cloud platforms like AWS, Google Cloud Platform (GCP), or Microsoft Azure for building and managing data infrastructure.
- Data Warehousing & Modeling: Strong understanding of data warehousing concepts and data modeling principles.
- ETL Frameworks: Experience with ETL tools such as Apache Airflow or comparable data transformation frameworks.
- Data Lakes & Storage: Proficiency in working with data lakes and cloud-based storage solutions like Amazon S3, Google Cloud Storage, or Azure Blob Storage.
- Version Control: Expertise in Git for version control and collaborative coding.
- Performance Optimization: Expertise in performance tuning for large-scale data processing, including partitioning, indexing, and query optimization.
- Bachelor's Degree: Bachelor's degree in Computer Science, Information Technology, or equivalent experience.
- Professional Experience: 1-5 years of experience in data engineering, ETL development, or database management.
- Cloud-Based Environments: Prior experience in cloud-based environments (e.g., AWS, GCP, Azure) is highly desirable.
- Large-Scale Datasets: Proven experience working with large-scale datasets in production environments, with a focus on performance tuning and optimization.