Responsibilities
Design, develop, and maintain scalable data solutions in the cloud using Apache Spark and Databricks. Gather data requirements, build data pipelines, and ensure data quality and security. Collaborate with cross-functional teams and provide technical guidance to junior engineers.
Requirements
- 8+ years of experience as a Data Engineer, Software Engineer, or similar role
- Strong knowledge of and hands-on experience with the Azure cloud platform, including Databricks, Event Hubs, data architecture, Spark, Kafka, ETL pipelines, Python/PySpark, and SQL
- Proficiency in Apache Spark and Databricks for large-scale data processing and analytics
- Experience in designing and implementing data processing pipelines using Spark and Databricks
- Strong knowledge of SQL and experience with relational and NoSQL databases
- Experience with data integration and ETL processes using tools like Apache Airflow or cloud-native orchestration services
- Good understanding of data modelling and schema design principles
- Experience with data governance and compliance frameworks
- Excellent problem-solving and troubleshooting skills
- Strong communication and collaboration skills to work effectively in a cross-functional team
- Bachelor’s or master’s degree in computer science, engineering, or a related field