We are hiring a Staff Attack Engineer specializing in AI/LLM security to join our team. You will break AI and agentic systems and turn that research into automated attacks inside NodeZero, our autonomous pentesting platform.
Requirements
- Attacking AI/LLM Systems
  - Design and execute prompt injection and defense-evasion attacks, focusing on generalized, reusable patterns.
  - Conduct tool-use exploitation, abusing LLM agents’ access to code, file systems, APIs, and databases to achieve attacker-realistic outcomes (e.g., context poisoning, RCE, data exfiltration, privilege escalation).
  - Target AI infrastructure (model serving, training pipelines, vector databases, GPU/MLOps tooling) with an understanding of real-world enterprise deployments and misconfigurations.
  - Research and apply model and supply-chain attacks (poisoning, training data extraction, adversarial inputs, deployment pipeline abuse).
  - Perform threat modeling for agentic systems, mapping trust boundaries and attack surfaces, then turning them into concrete attack paths.
  - Apply a strong productization mindset, turning manual techniques into safe, reliable, and scalable automated tooling.
- Building with LLMs
  - Build and extend LLM-powered applications (prompting, structured output, agentic workflows).
  - Design with production concerns in mind: cost, safety and hallucination guardrails, reliability, and observability.
  - Design and extend microservices that orchestrate LLM tasks and integrate with NodeZero and related offensive workflows.
Benefits
- Inclusive Team
- Growth Opportunities
- Innovative Culture
- Remote Work
- Competitive Compensation
