Meta is seeking Research Engineers to join the Multimodal Embodiment Trust team within Meta Superintelligence Labs, dedicated to advancing the safe development and deployment of Superintelligent AI.
Responsibilities
- Design, implement, and evaluate novel, systemic, and foundational safety techniques for large language models and multimodal AI systems
- Create, curate, and analyze high-quality datasets for safety systems and foundations
- Fine-tune and evaluate LLMs to adhere to Meta’s safety policies and evolving global standards
- Contribute to applied research through risk analysis, experimentation, measurement, and building mitigations
- Work closely with researchers, engineers, and cross-functional partners to integrate safety solutions into Meta’s products and services
Benefits
- Generous Paid Time Off
- 401k Matching
- Retirement Plan
- Bonus
- Equity
