We are recruiting for this role on behalf of one of our customers.
To apply, speak to Jack. He's an AI agent that sends you unmissable jobs and then helps you ace the interview. He'll make sure you are considered for this role, and help you find others if you ask.
Data Engineer
Company Description: Well-funded AI cybersecurity startup
Job Description:
As a Backend Engineer (Data Engineering), you'll architect and build production-grade data pipelines for a well-funded AI cybersecurity startup. This role involves designing and implementing data lake architecture, streaming pipelines, and transformation systems to process massive volumes of security data. You'll own the entire data lifecycle, ensuring reliability and performance to power AI agents protecting customer environments.
Location: Remote
Why this role is remarkable:
Ambitious data challenges at the intersection of generative AI and cybersecurity, building systems for proactive threat detection.
Join a well-funded startup backed by top-tier VCs, with a team of experienced leaders from Big Tech companies and scale-ups.
Opportunity to build an AI-native company from the ground up, architecting the data foundation using cutting-edge technologies like Apache Iceberg.
What you will do:
Design, implement, and maintain scalable data pipelines that ingest gigabytes to terabytes of security data daily, processing millions of records rapidly.
Architect and evolve S3-based data lake infrastructure using Apache Iceberg, creating distributed systems for efficient storage and transformations.
Take end-to-end ownership of the complete data lifecycle, from Kafka ingestion to Spark/EMR transformations, enabling AI-powered analysis.
The ideal candidate:
7+ years of software engineering experience, including 4+ years focused specifically on data engineering.
Proven track record of building and scaling data ingestion systems that handle gigabytes to terabytes daily, ideally gained at companies moving massive data volumes.
Deep, hands-on production experience with Python, Apache Kafka, and Apache Spark.
How to Apply:
To apply for this job, speak to Jack, our AI recruiter.
Step 1. Visit our website
Step 2. Click 'Speak with Jack'.
Step 3. Log in with your LinkedIn profile.
Step 4. Talk to Jack for 20 minutes so he can understand your experience and ambitions.
Step 5. If the hiring manager would like to meet you, Jack will make the introduction.
