This is a remote position.
Main Responsibilities
- Design, build, and monitor ETL pipelines using Azure and Spark technologies
- Implement scalable, cloud-native data processing workflows using PySpark
- Configure and operate core Azure services: Databricks, Azure Data Factory, Azure Data Lake Storage, and Azure Functions
- Collaborate closely with analysts, data scientists, and software engineers to deliver robust data solutions
- Translate business needs into reliable and secure data products
- Ensure data quality, governance, and performance best practices across solutions
Requirements
- At least 4 years of professional experience in data engineering
- Proven experience with large-scale data processing and transformation pipelines
- Hands-on knowledge of Azure or other major cloud platforms
- Solid coding skills in Python (especially pandas/NumPy) and SQL
- Familiarity with Git workflows in a collaborative development setting
- Fluency in Polish and English (C1 minimum level in English required)
Nice to Have
- Experience with Linux/Bash scripting
- Familiarity with Docker or Kubernetes
- Domain experience in retail, financial services, energy, or the public sector
Benefits
- Solid, competitive salary
- Work in a multilingual, multinational, and multicultural environment on international projects
- Medical care
