The AI Security Expert bridges the gap between cybersecurity and machine learning, protecting AI systems from emerging threats and ensuring models are secure, compliant, and resilient. In this role, you will proactively manage risks unique to AI environments, such as adversarial attacks and data poisoning, to build trustworthy, production-ready systems that withstand an evolving threat landscape.
Requirements
- AI/ML Proficiency: Strong understanding of machine learning frameworks (e.g., PyTorch, TensorFlow) and the underlying mathematics of model architectures.
- Adversarial AI Knowledge: Proven experience with adversarial machine learning techniques, such as gradient-based evasion attacks, data poisoning, and model extraction.
- Secure Software Development: Expertise in securing CI/CD pipelines and containerized environments (Docker, Kubernetes) specifically for ML workloads.
- Data Protection: Proficiency in privacy-preserving technologies such as differential privacy, homomorphic encryption, or federated learning.
- Cloud Security: Deep experience with security configurations in AWS, Azure, or GCP, specifically regarding managed AI services (e.g., SageMaker, Vertex AI).
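To give a concrete sense of the adversarial techniques listed above, here is a minimal, framework-free sketch of the Fast Gradient Sign Method (FGSM), a classic gradient-based evasion attack, run against a toy logistic-regression model. The weights, input, and epsilon are illustrative assumptions chosen for this sketch, not a production attack or defense.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm_attack(x, y, w, b, eps):
    """One FGSM step against a logistic-regression model.

    The gradient of binary cross-entropy w.r.t. the input is
    (sigmoid(w.x + b) - y) * w; FGSM perturbs each feature by
    eps in the direction of that gradient's sign.
    """
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    grad = [(p - y) * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

# Toy model and input (illustrative values only).
w, b = [2.0, 0.0], 0.0
x, y = [0.5, 0.0], 1.0          # correctly classified as class 1

x_adv = fgsm_attack(x, y, w, b, eps=0.6)
p_clean = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
p_adv = sigmoid(sum(wi * xi for wi, xi in zip(w, x_adv)) + b)
# A small signed perturbation flips the model's prediction.
```

In a real engagement the same idea is applied to deep models via autograd (e.g., `torch.autograd` in PyTorch), with the gradient taken with respect to the input rather than the weights.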
Benefits
- 100% Remote work
- Flexible working hours (AU/NZ business hours)
- New Zealand Holidays
