We're looking for a seasoned technical leader and Principal Scala Engineer to drive the modernization and evolution of our core data processing platforms. This role is central to re-architecting high-traffic, mission-critical data pipelines, moving them to a more robust, scalable, and efficient future on the Google Cloud Platform (GCP).
You will be instrumental in defining the technical strategy for our Scala-based Dataflow processing, ensuring reliability and performance at scale. This is a hands-on leadership role for someone who thrives on solving complex distributed systems problems and on confidently leading and mentoring a team of engineers to deliver excellence.
- Lead the architectural design and technical roadmap for modernizing and scaling our data ingestion and processing pipelines, primarily using Scala and GCP Dataflow (Apache Beam).
- Drive performance and throughput enhancements, such as migrating services to use Cloud Functions for exporting data from BigQuery to Pub/Sub more efficiently than existing Dataflow jobs.
- Partner with Staff Engineers to expand the functionality of our data export systems for external partners. This involves pulling data from BigQuery, creating formatted feed files, and exporting them to various external stores (e.g., FTP, GCS, S3).
- Oversee the support and expansion of existing data integrations, including adding new fields to schemas and ensuring that data is propagated correctly through the entire pipeline.
- Drive the refactoring and modernization of legacy Scala codebases, introducing best practices in functional programming, testing, and observability.
- Collaborate closely with teams using Node.js APIs (for data ingestion) and Python (for data transformations), defining clear data contracts and robust integration patterns.
- Mentor and guide a team of engineers, championing engineering best practices, conducting code reviews, and fostering a culture of high-quality, collaborative development.
- Act as a key technical communicator, defining approaches, breaking down complex problems, and engaging in rapid, iterative development cycles, asking questions early rather than working in isolation.
This role is designed for impact, and we believe our best work happens when we connect. While we operate a flexible model, we expect you to spend time on site (at our offices or a client location) for collaboration sessions, customer meetings, and internal workshops.
Requirements
- Degree in Computer Science or a related technical discipline.
- 8+ years of software engineering experience, with a proven track record of designing, building, and operating large-scale, high-traffic distributed systems in production.
- Deep expertise in Scala and its functional programming paradigms.
- Demonstrable, hands-on experience with GCP Dataflow (Apache Beam). Experience with other major streaming/batch frameworks (e.g., Apache Spark, Akka Streams) is also highly valuable.
- Strong proficiency in the GCP ecosystem, including critical services like BigQuery, Pub/Sub, and Cloud Functions.
- Solid understanding of data engineering principles, including data modeling, schema design, and data lifecycle management.
- Experience building and maintaining data export feeds to external systems (e.g., SFTP, GCS, S3).
- Proven ability to lead technical projects, mentor other engineers, and influence architectural decisions across multiple teams.
- Excellent communication skills, with experience working in polyglot environments (interfacing with Node.js, Python, etc.) and a proactive, inquisitive approach to problem-solving.
- Ability to be available for key synchronous meetings and standups between 4:00 and 6:00 PM UK time.
Nice to Have
- Familiarity with CI/CD best practices and infrastructure-as-code tools (e.g., Terraform, Docker, Kubernetes).
- Working knowledge of Node.js or Python for data-related tasks.
- Experience with other data stores (e.g., NoSQL, time-series databases) or data orchestration tools (e.g., Airflow).
Benefits
Compensation and Financial Well-being
- Annual salary reviews.
- We follow LAS (the Swedish Employment Protection Act) regarding termination of employment, insurance requirements, and related matters.
- Occupational pension plan for all employees with a significantly higher contribution than standard practice and no age limit.
- All employees are provided with a work setup that includes a computer and a mobile phone.
Health and Wellness
- We provide a generous wellness allowance or a membership at TIQQE Crossfit.
- We include premium waiver insurance, health insurance, group life insurance (TGL), work injury insurance (TFA), accident insurance, healthcare insurance, rehabilitation insurance, business travel insurance, and liability insurance.
- We offer advice on employment benefits and insurance coverage.
- We offer visual display terminal glasses if needed.
Culture and Environment
- Central office in Örebro, but we also offer distributed (remote) work.
- Regular employee surveys.
Time Off and Well-being
- 5 extra vacation days per year in addition to the statutory 25 days, giving all employees 30 vacation days per year.
- We do not work on bridging days (days between a public holiday and a weekend).
- Flexible working hours.
Growth and Development
- Continuous development talks.
- Certification bonus.
- Leadership programs and opportunities for further personal development.
Diversity and Inclusion
At Qodea, we champion diversity and inclusion. We believe that a career in IT should be open to everyone, regardless of race, ethnicity, gender, age, sexual orientation, disability, or neurotype. We value the unique talents and perspectives that each individual brings to our team, and we strive to create a fair and accessible hiring process for all.
