Responsibilities
- Develop modern data solutions and architecture for cloud-native data platforms.
- Build cost-effective infrastructure in Databricks and orchestrate workflows using Databricks/ADF.
- Lead data strategy sessions focused on scalability, performance, and flexibility.
- Collaborate with customers to implement solutions for data modernization.
- Create training plans and learning materials to upskill VM associates.
- Build a smart operations framework for DataOps and MLOps.
Requirements
- 14+ years of experience, with the last 4 years spent implementing cloud-native, end-to-end data solutions in Databricks, from ingestion to consumption, supporting a variety of needs such as a modern data warehouse, BI, insights, and analytics.
- Experience architecting and implementing end-to-end modern data solutions on Azure using advanced data-processing frameworks such as Databricks.
- Experience with Databricks, PySpark, and modern data platforms.
- Proficiency in cloud-native architecture and data governance.
- Strong experience migrating on-premises solutions (e.g., Spark, Hadoop) to the cloud (Databricks).
- Understanding of Agile/Scrum methodologies.
- Demonstrated knowledge of data warehouse concepts, with a strong understanding of cloud-native databases and columnar database architectures.
- Ability to work with Data Engineering, Data Management, BI, and Analytics teams in a complex IT development environment.
- Good appreciation of, and at least one hands-on implementation with, data-engineering processing substrates such as ETL tools, Kafka, and ELT techniques.
- Knowledge of designing and implementing Data Mesh and Data Products is an added advantage.
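The ingestion-to-consumption flow this role calls for is often organized as bronze/silver/gold (medallion) layering on Databricks. Below is a minimal, library-free Python sketch of that pattern; the record fields, cleansing rules, and feed name are illustrative assumptions, not details from this posting. On Databricks each stage would typically be a PySpark job writing a Delta table, orchestrated as tasks in a Databricks Workflow or an ADF pipeline.

```python
# Toy sketch of medallion-style layering (bronze -> silver -> gold).
# Plain Python dicts stand in for PySpark DataFrames so the idea
# is runnable anywhere; field names are hypothetical.

def bronze_ingest(raw_rows):
    """Bronze: land raw records as-is, tagging each with a source marker."""
    return [{**row, "_source": "orders_feed"} for row in raw_rows]

def silver_cleanse(bronze_rows):
    """Silver: drop malformed records and normalize types."""
    clean = []
    for row in bronze_rows:
        if row.get("order_id") is None or row.get("amount") is None:
            continue  # skip/quarantine malformed records
        clean.append({
            "order_id": row["order_id"],
            "region": str(row.get("region", "unknown")).lower(),
            "amount": float(row["amount"]),
        })
    return clean

def gold_aggregate(silver_rows):
    """Gold: consumption-ready aggregate (revenue per region)."""
    totals = {}
    for row in silver_rows:
        totals[row["region"]] = totals.get(row["region"], 0.0) + row["amount"]
    return totals

raw = [
    {"order_id": 1, "region": "EU", "amount": "10.5"},
    {"order_id": 2, "region": "US", "amount": 4},
    {"order_id": None, "region": "EU", "amount": 99},  # malformed row
]
print(gold_aggregate(silver_cleanse(bronze_ingest(raw))))
# → {'eu': 10.5, 'us': 4.0}
```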
