Role: Data Engineer (Transport Focus)
Seniority: Regular, Regular+, or Senior
Client:
A technology company building and optimising a highly scalable Data Mesh that powers cross-product use cases across their platform.
Client Location: Ireland
Work Location: Poland
Project Description:
Development and optimisation of a highly scalable and reliable Data Mesh that powers cross-product use cases across the client's platform. This is a critical, hands-on engineering role: the engineer will build, support, and optimise large-scale data processing infrastructure and data transformation logic.
Tech Stack:
Core Technologies:
- Apache Spark – deep, proven experience required, including:
  - Deployment, management, and cost optimisation of Spark clusters
  - Writing high-performance Spark ETL jobs
  - Ideally, experience writing to large analytical tables (e.g., Apache Iceberg)
- Infrastructure as Code (IaC): Terraform; experience with AWS
- CI/CD: Jenkins
- Testing: Unit, Integration, and End-to-End testing
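To illustrate the kind of unit-testable transformation logic this stack implies, a minimal sketch in plain Python (all function, field, and value names are hypothetical, not from the client's codebase):

```python
# Minimal sketch: transformation logic written as a pure function so it can be
# unit-tested without a Spark cluster. All names here are illustrative.

def deduplicate_latest(records):
    """Keep only the newest record per id, by event_time (ISO-8601 strings sort correctly)."""
    latest = {}
    for rec in records:
        rec_id = rec["id"]
        if rec_id not in latest or rec["event_time"] > latest[rec_id]["event_time"]:
            latest[rec_id] = rec
    return sorted(latest.values(), key=lambda r: r["id"])


# Unit test: duplicate ids collapse to the most recent record.
rows = [
    {"id": 1, "event_time": "2024-01-01T10:00:00", "status": "pending"},
    {"id": 1, "event_time": "2024-01-01T11:00:00", "status": "done"},
    {"id": 2, "event_time": "2024-01-01T09:30:00", "status": "done"},
]
result = deduplicate_latest(rows)
assert [r["status"] for r in result] == ["done", "done"]
```

In Spark the same deduplication would typically be expressed as a window over `id` ordered by `event_time`; keeping the core logic pure and small makes unit tests cheap before running integration and end-to-end tests against a cluster.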
Team Structure:
The overall project is quite extensive; however, this particular team consists of 6 FTEs:
• 3x Data Engineers
• 1x Data Operator
• 2x Python Developers
Working Hours: EU timezone
Language Level:
Excellent verbal and written communication skills required – ability to clearly communicate technical decisions to Project Managers and Operational Teams.
Additional Information:
• Background check required
Key candidate traits:
- Exceptional analytical and problem-solving skills
- Ability to debug complex cross-service issues
- Strong adherence to Clean Code principles
