ML Engineer, Large Language Models (LLM Training & Inference Optimization)

Nebius is a cutting-edge AI cloud platform that offers scalable infrastructure for developing and deploying AI solutions.

Employee count: 201-500

Netherlands and United Kingdom only

Why work at Nebius

Nebius is leading a new era in cloud computing to serve the global AI economy. We create the tools and resources our customers need to solve real-world challenges and transform industries, without massive infrastructure costs or the need to build large in-house AI/ML teams. Our employees work at the cutting edge of AI cloud infrastructure alongside some of the most experienced and innovative leaders and engineers in the field.

Where we work

Headquartered in Amsterdam and listed on Nasdaq, Nebius has a global footprint with R&D hubs across Europe, North America, and Israel. The team of over 800 employees includes more than 400 highly skilled engineers with deep expertise across hardware and software engineering, as well as an in-house AI R&D team.

The role

This role is for Nebius AI R&D, a team focused on applied research and the development of AI-heavy products. Examples of applied research that we have recently published include:

  • investigating how test-time guided search can be used to build more powerful agents;
  • dramatically scaling task data collection to power reinforcement learning for SWE agents;
  • maximizing efficiency of LLM training on agentic trajectories.

One example of an AI product that we are deeply involved in is Nebius AI Studio — an inference and fine-tuning platform for AI models.

We are currently looking for senior- and staff-level ML engineers to work on optimizing training and inference performance in large-scale, multi-GPU, multi-node setups.

This role will require expertise in distributed systems and high-performance computing to build, optimize, and maintain robust pipelines for training and inference.

Your responsibilities will include:

  • Architecting and implementing distributed training and inference pipelines that leverage techniques such as data, tensor, context, expert (MoE), and pipeline parallelism.
  • Implementing various inference optimization techniques, e.g. speculative decoding and its extensions (Medusa, EAGLE, etc.), CUDA graphs, and compile-based optimization.
  • Implementing custom CUDA/Triton kernels for performance-critical neural network layers.
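
As a rough illustration of the custom-kernel work in the last bullet, here is a minimal Triton sketch: a toy fused elementwise add in the style of the standard Triton tutorials. The names add_kernel and fused_add are illustrative only (not from any Nebius codebase), and running it assumes an NVIDIA GPU with Triton and PyTorch installed.

    import torch
    import triton
    import triton.language as tl

    @triton.jit
    def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
        # Each program instance handles one BLOCK_SIZE-wide slice of the tensors.
        pid = tl.program_id(axis=0)
        offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
        mask = offsets < n_elements
        x = tl.load(x_ptr + offsets, mask=mask)
        y = tl.load(y_ptr + offsets, mask=mask)
        tl.store(out_ptr + offsets, x + y, mask=mask)

    def fused_add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        out = torch.empty_like(x)
        n_elements = out.numel()
        # One program per BLOCK_SIZE chunk of the flattened tensor.
        grid = lambda meta: (triton.cdiv(n_elements, meta["BLOCK_SIZE"]),)
        add_kernel[grid](x, y, out, n_elements, BLOCK_SIZE=1024)
        return out

    if __name__ == "__main__":
        a = torch.randn(4096, device="cuda")
        b = torch.randn(4096, device="cuda")
        assert torch.allclose(fused_add(a, b), a + b)

In practice, kernels like this matter when several memory-bound elementwise operations can be fused into a single pass over the data, which is the kind of performance-critical layer work the bullet refers to.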

We expect you to have:

  • A profound understanding of the theoretical foundations of machine learning
  • Deep understanding of the performance aspects of large neural network training and inference (data/tensor/context/expert parallelism, offloading, custom kernels, hardware features, attention optimizations, dynamic batching, etc.)
  • Expertise in at least one of these areas:
    • Implementing custom efficient GPU kernels in CUDA and/or Triton
    • Training large models on multiple nodes and implementing various parallelism techniques
    • Inference optimization techniques: disaggregated prefill/decode, paged attention, continuous batching, speculative decoding, etc. (a toy sketch of continuous batching follows after this list)
  • Strong software engineering skills (we mostly use Python)
  • Deep experience with modern deep learning frameworks (we use JAX and PyTorch)
  • Proficiency in contemporary software engineering approaches, including CI/CD, version control and unit testing
  • Strong communication skills and the ability to work independently
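
The inference-optimization area above mentions continuous batching; the toy scheduler below is a framework-free sketch of the scheduling idea only, with a dummy decode step standing in for a real model forward pass. All names are illustrative and not any particular serving framework's API.

    from collections import deque
    from dataclasses import dataclass, field

    @dataclass
    class Request:
        prompt: str
        max_new_tokens: int
        generated: list = field(default_factory=list)

    def decode_step(batch):
        # Stand-in for a real forward pass: emit one dummy token per running sequence.
        for req in batch:
            req.generated.append("<tok>")

    def continuous_batching(requests, max_batch_size=4):
        waiting = deque(requests)
        running, finished = [], []
        while waiting or running:
            # Admit new requests whenever slots free up (the "continuous" part),
            # instead of waiting for the whole batch to drain.
            while waiting and len(running) < max_batch_size:
                running.append(waiting.popleft())
            decode_step(running)
            still_running = []
            for req in running:
                if len(req.generated) >= req.max_new_tokens:
                    finished.append(req)  # slot is released immediately
                else:
                    still_running.append(req)
            running = still_running
        return finished

    if __name__ == "__main__":
        reqs = [Request(f"prompt {i}", max_new_tokens=2 + i) for i in range(6)]
        done = continuous_batching(reqs)
        print([len(r.generated) for r in done])

Real serving stacks such as vLLM or SGLang combine this admission/eviction loop with paged KV-cache management and batched kernels, but the loop above is the core of what "continuous batching" means.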

Nice to have:

  • Familiarity with modern LLM inference frameworks (vLLM, SGLang, TensorRT-LLM, Dynamo)
  • Familiarity with important ideas in LLM space, such as MHA, RoPE, ZeRO/FSDP, Flash Attention, quantization
  • Bachelor’s degree in Computer Science, Artificial Intelligence, Data Science, or a related field; a Master’s or PhD is preferred
  • Track record of building and delivering products (not necessarily ML-related) in a dynamic startup-like environment
  • Experience in engineering complex systems, such as large distributed data processing systems or high-load web services
  • Open-source projects that showcase your engineering prowess
  • Excellent command of the English language, alongside superior writing, articulation, and communication skills
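
For the RoPE item in the list above, here is a minimal, self-contained sketch of rotary position embeddings in PyTorch (interleaved-pair formulation). It is illustrative only and not tied to any specific model's implementation; the function name rope is hypothetical.

    import torch

    def rope(x, base=10000.0):
        # x: (batch, seq_len, n_heads, head_dim), with head_dim even.
        b, s, h, d = x.shape
        # One rotation frequency per pair of head dimensions.
        inv_freq = 1.0 / (base ** (torch.arange(0, d, 2, dtype=torch.float32) / d))
        angles = torch.einsum("s,f->sf", torch.arange(s, dtype=torch.float32), inv_freq)
        cos = angles.cos()[None, :, None, :]  # broadcast over batch and heads
        sin = angles.sin()[None, :, None, :]
        x1, x2 = x[..., 0::2], x[..., 1::2]   # even/odd halves of each pair
        out = torch.empty_like(x)
        out[..., 0::2] = x1 * cos - x2 * sin
        out[..., 1::2] = x1 * sin + x2 * cos
        return out

    if __name__ == "__main__":
        q = torch.randn(2, 16, 8, 64)
        print(rope(q).shape)  # torch.Size([2, 16, 8, 64])

Because the rotation angle grows linearly with position, the dot product between a rotated query and key depends only on their relative distance, which is the property that makes RoPE attractive for long-context attention.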

What we offer

  • Competitive salary and comprehensive benefits package.
  • Opportunities for professional growth within Nebius.
  • Hybrid working arrangements.
  • A dynamic and collaborative work environment that values initiative and innovation.

We’re growing and expanding our products every day. If you’re up to the challenge and are excited about AI and ML as much as we are, join us!

About the job

Job type

Full Time

Experience level

Mid-level
Senior

Location requirements

Netherlands and United Kingdom

Hiring timezones

Netherlands +/- 0 hours, and 1 other timezone

About Nebius

At Nebius, we offer an advanced AI cloud platform designed for those who wish to develop, tune, and deploy their AI models with the most efficient infrastructure available. Our platform utilizes cutting-edge NVIDIA GPU clusters, including the H100 and H200, optimized for maximum performance with InfiniBand. One of the standout features of Nebius is our comprehensive fine-tuning ecosystem that includes on-demand GPUs and tools necessary for robust dataset processing, ensuring that AI teams can efficiently manage their computational resources according to demand.

We recognize the importance of AI inference in deploying real-world applications. Hence, we provide a resilient and cost-effective infrastructure that has been optimized for rapid deployment of Generative AI applications. Our services span the entire lifecycle of AI solutions, from model training to inference, making Nebius not just a GPU cloud but a full-stack AI platform. Additionally, we pride ourselves on supporting our clients with 24/7 expert guidance, offering resources to help architects and engineers harness our AI-optimized data centers to build scalable solutions.
