Senior Data Science Engineer
Adobe
- Noida, Uttar Pradesh
- Permanent
- Full-time
- Design, build, and maintain large-scale, production-grade data pipelines on Azure Databricks using Apache Spark and Python
- Write and optimize complex, high-performance SQL for data transformation, aggregation, and analytics workloads at scale
- Develop and maintain analytics-ready data models (fact tables, dimensions, rollups, metric layers, and gold layer tables) used for dashboards and reporting
- Optimize Databricks workloads for performance, reliability, and cost efficiency, including tuning Spark jobs and Delta Lake tables
- Establish and apply Delta Lake best practices, including incremental and idempotent processing, MERGE patterns, partitioning, and table optimization
- Partner closely with product analysts, business analysts, data scientists, and product managers to enable reliable, self-serve analytics, supporting product-led growth use cases
- Implement data quality checks, validation frameworks, and monitoring to ensure accurate and trusted analytics metrics
- Apply strong data engineering best practices, including version control and documentation
- Contribute to solutions that integrate structured and unstructured data, including selective use of GenAI / LLM-based capabilities where relevant
- Bachelor’s degree or higher in Computer Science, Engineering, or a related field
- 6–12 years of professional experience in Data Engineering
- Proven experience building and supporting production-grade analytical data pipelines at scale
- Expert-level SQL skills, including complex joins, window functions, performance tuning, and large-scale aggregations
- Advanced Python proficiency for data processing, pipeline development, and automation
- Deep hands-on experience with Azure Databricks and Apache Spark
- Strong understanding of Delta Lake and optimization techniques (partitioning, Z-ordering, compaction)
- Experience designing data models optimized for analytics and BI consumption
- Strong experience with Microsoft Azure, including data lake storage and access controls
- Familiarity with lakehouse architectures and enterprise data governance concepts
- Experience with streaming or near-real-time data pipelines on Databricks
- Prior experience supporting product analytics, feature adoption, or MAU-based metrics
- Exposure to MLOps, LLM deployment, or GenAI-enabled data applications
- Familiarity with BI tools such as Tableau or Power BI and their performance considerations
- Experience mentoring analysts or junior data engineers
Our interviews are designed to reflect your own skills and thinking. The use of AI or recording tools during live interviews is not permitted unless explicitly invited by the interviewer or approved in advance as part of a reasonable accommodation. If these tools are used inappropriately or in a way that misrepresents your work, your application may not move forward in the process.

At Adobe, we empower employees to innovate with AI, and we look for candidates eager to do the same. As part of the hiring experience, we provide clear guidance on where AI is encouraged during the process and where it is restricted during live interviews.