Binance is seeking an Algorithm Engineer specializing in Large Language Models (LLMs) to build robust AI guardrails and safety frameworks for its AI-powered products. The role focuses on ensuring trust, reliability, and regulatory compliance in AI-driven solutions such as customer-support chatbots and compliance systems. The engineer will design, build, and maintain AI safety tooling to mitigate prompt injection, hallucinations, and other safety risks.
Responsibilities
- Design and build an AI Guardrails framework as a safety layer for LLMs and agent workflows.
- Define and enforce safety, security, and compliance policies across applications.
- Detect and mitigate prompt injection, jailbreaks, hallucinations, and unsafe outputs.
- Implement privacy and PII protection.
- Build red-teaming pipelines, automated safety tests, and risk monitoring tools.
- Continuously improve guardrails to address new attack vectors, policies, and regulations.
- Fine-tune or optimise LLMs for trading, compliance, and Web3 tasks.
- Collaborate with Product, Compliance, Security, Data, and Support.
Requirements
- Strong coding skills in Python or Java.
- Understanding of privacy, PII handling, and data governance.
- Solid communication and collaboration skills.
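To illustrate the kind of safety layer described above, here is a minimal, hypothetical Python sketch of an input guardrail that flags likely prompt-injection phrases and redacts basic PII before a message reaches an LLM. All names (`check_prompt`, `redact_pii`, the pattern lists) are illustrative assumptions, not Binance's implementation; production guardrails would combine ML classifiers, policy engines, and monitoring rather than simple pattern matching.

```python
import re

# Hypothetical PII patterns; real systems use dedicated detectors,
# not regexes alone.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

# Illustrative injection markers; a real deny-list would be far broader
# and paired with a trained classifier.
INJECTION_MARKERS = (
    "ignore previous instructions",
    "disregard the system prompt",
)

def redact_pii(text: str) -> str:
    """Replace detected PII spans with typed placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

def check_prompt(text: str) -> dict:
    """Run the guardrail over one user message and return a verdict."""
    lowered = text.lower()
    flagged = [marker for marker in INJECTION_MARKERS if marker in lowered]
    return {
        "blocked": bool(flagged),   # drop or escalate the message
        "reasons": flagged,         # which policies fired
        "sanitized": redact_pii(text),  # PII-safe text for logging
    }
```

In this sketch the caller would block or escalate when `blocked` is true and log only the `sanitized` text, keeping raw PII out of downstream systems.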
Benefits
- Competitive salary
- Company benefits