We are looking for a Senior Java Developer to join our growing team and contribute to building a new product focused on Container Risk Analysis.
As an experienced backend engineer, you will play a crucial role in the design, development, and maintenance of our data platform. This platform supports critical data-driven decision-making throughout the organization.
You will ensure smooth data flow and optimize our services by working closely with cross-functional teams, including data scientists, analysts, and other software engineers.
Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field;
- A minimum of 5 years of professional experience as a Backend Engineer;
- Full proficiency in Java (specifically Spring Boot) and Kafka, with a strong background in building microservices architectures and streaming solutions;
- Proficiency with Generative AI (GenAI) and Large Language Model (LLM) tools (e.g., Claude), with a track record of applying them consistently throughout the development lifecycle;
- Expertise in AWS services for data storage, processing, and analytics;
- Demonstrated experience in designing and implementing robust ETL (Extract, Transform, Load) processes for data integration;
- Good working knowledge of Python;
- Experience with MySQL;
- Exceptional problem-solving skills and meticulous attention to detail;
- Ability to work independently and take ownership of tasks;
- Good command of English, both written and spoken.
Nice to Have:
- Familiarity with data processing technologies like Spark;
- Experience with SingleStore.
Responsibilities:
- Use Java and Spring Boot to build robust, high-performance data processing services within our data platform;
- Implement real-time data streaming solutions using Kafka, ensuring timely data ingestion and availability;
- Collaborate closely with cross-functional teams to understand data requirements, identify opportunities for data optimization, and support data-driven initiatives;
- Lead the design, development, and maintenance of efficient and scalable data pipelines, facilitating data collection, processing, and transformation from diverse sources;
- Leverage AWS services for data storage, processing, and analytics, adhering to security and performance best practices;
- Monitor and troubleshoot service performance, proactively identifying bottlenecks and implementing optimizations;
- Uphold data integrity, reliability, and availability by implementing effective ETL processes and conducting data quality checks.
What we offer:
- Competitive compensation that reflects your skills and experience;
- Enjoy 15 days of annual leave to recharge and maintain a healthy work-life balance;
- Fully remote work with a flexible setup;
- Our supportive and approachable HR team is always here to help you thrive.
