We are recruiting a Data Engineer to join our Technologies Department, responsible for developing, optimizing, and managing data flows for internal clients. The role involves building data ingestion pipelines, designing data transformation pipelines, and applying sound software engineering practices.
Requirements
- Master's-level degree (Bac+5) in computer science, engineering, or a related field.
- At least 3 years of experience in a similar position.
- Proficiency in one or more of the following languages: Python, PySpark, Scala, SQL
- Confirmed experience in developing data pipelines on Databricks
- Good knowledge of the Spark UI, with the ability to diagnose and optimize jobs
- Good knowledge of the Azure environment and its cloud data services
- Familiarity with good development practices: clean code, versioning, testing, and BI modeling principles
- Experience with the dbt framework for data transformation in SQL
- Experience working in an Agile methodology, in close collaboration with Data Product Owners and the project team
Benefits
- Flexible working environment
- Attractive benefits
