
Technical Product Manager (Cluster Experience)

Nebius is a cutting-edge AI cloud platform that offers scalable infrastructure for developing and deploying AI solutions.

Nebius

Employee count: 201-500

CZ, FR + 2 more


Why work at Nebius

Nebius is leading a new era in cloud computing to serve the global AI economy. We create the tools and resources our customers need to solve real-world challenges and transform industries, without massive infrastructure costs or the need to build large in-house AI/ML teams. Our employees work at the cutting edge of AI cloud infrastructure alongside some of the most experienced and innovative leaders and engineers in the field.

Where we work

Headquartered in Amsterdam and listed on Nasdaq, Nebius has a global footprint with R&D hubs across Europe, North America, and Israel. The team of over 1,400 employees includes more than 400 highly skilled engineers with deep expertise across hardware and software engineering, as well as an in-house AI R&D team.

The role

At Nebius, we’re building a next-generation AI compute platform for large-scale ML training and inference — from a few nodes to thousands of GPUs.
We’re looking for a Technical Product Manager to own product direction for Soperator — our Slurm-on-Kubernetes control plane for GPU clusters.
In this role, you will shape how ML engineers and research teams run, scale, and optimize distributed workloads in production.
If you care about systems that combine performance, reliability, and developer experience at the frontier of AI infrastructure, this role is for you.

Your responsibilities will include:

• Own the full user journey across Soperator clusters: Slurm workflows, dashboards, alerts/notifications, node lifecycle, and training/inference capacity management.
• Define product direction end-to-end: problem discovery → solution design → delivery → adoption.
• Lead deep customer discovery through interviews, usage analytics, and workload analysis to uncover high-impact opportunities.
• Drive execution across platform teams: compute, networking, storage, observability, IAM, and more.
• Translate frontier ML and infrastructure ideas into practical product capabilities for real-world GPU clusters.
• Define success metrics, prioritize roadmap decisions with data, and ensure measurable customer/business impact.
• Lead the open-source strategy and execution for Soperator: shape public roadmap themes, prioritize OSS-facing capabilities, and ensure strong adoption in the community.

We expect you to have:

• 3–5+ years in Product Management, ML infrastructure/MLOps, distributed systems, or cloud platform engineering.
• Strong technical depth in distributed systems, cloud infrastructure, or ML platforms.
• Hands-on familiarity with large-scale ML training and orchestration tools (e.g., Slurm, Kubernetes, Ray).
• Track record of shipping technically complex products with multiple engineering teams.
• Strong communication and stakeholder management across engineering, research, and customers.
• Experience with product analytics, data-informed prioritization, and experimentation.
• High ownership, high learning velocity, and comfort operating in fast-moving AI infrastructure environments.

It will be an added bonus if you have:

• Experience with GPU platforms and HPC primitives: InfiniBand/RDMA, topology-aware scheduling, high-throughput storage.
• Practical understanding of modern ML training stacks: PyTorch, DeepSpeed, FSDP/ZeRO, NCCL.
• Familiarity with efficiency and reliability metrics: Goodput, MFU, failure modes, preemption handling, health checks.
• Exposure to large-scale LLM training/inference systems.
• Experience in observability, performance tuning, or SRE/reliability engineering.
• Customer-facing technical experience (solutioning, support, architecture advisory).

About Nebius

Nebius AI is an AI cloud platform with one of the largest GPU capacities in Europe. Launched in November 2023, the Nebius AI platform provides high-end, training-optimized infrastructure for AI practitioners. As an NVIDIA preferred cloud service provider, Nebius AI offers a variety of NVIDIA GPUs for training and inference, as well as a set of tools for efficient multi-node training.

Nebius AI owns a data center in Finland, built from the ground up by the company’s R&D team and showcasing our commitment to sustainability. The data center is home to ISEG, the most powerful commercially available supercomputer in Europe and the 16th most powerful globally (Top 500 list, November 2023).

Nebius’s headquarters are in Amsterdam, Netherlands, with teams working out of R&D hubs across Europe and the Middle East.

Nebius AI is built with the talent of more than 500 highly skilled engineers with a proven track record in developing sophisticated cloud and ML solutions and designing cutting-edge hardware. This allows all the layers of the Nebius AI cloud – from hardware to UI – to be built in-house, distinctly differentiating Nebius AI from the majority of specialized clouds: Nebius customers get a true hyperscaler-cloud experience tailored for AI practitioners. We’re growing and expanding our products every day.

What we offer

  • Competitive salary and comprehensive benefits package.
  • Opportunities for professional growth within Nebius.
  • Flexible working arrangements.
  • A dynamic and collaborative work environment that values initiative and innovation.

If you’re up to the challenge and are as excited about AI and ML as we are, join us!

About the job

Job type: Full Time
Experience: 3 years minimum
Hiring timezones: Netherlands +/- 0 hours, and 3 other timezones

About Nebius


At Nebius, we offer an advanced AI cloud platform designed for those who wish to develop, tune, and deploy their AI models with the most efficient infrastructure available. Our platform utilizes cutting-edge NVIDIA GPU clusters, including the H100 and H200, optimized for maximum performance with InfiniBand. One of the standout features of Nebius is our comprehensive fine-tuning ecosystem that includes on-demand GPUs and tools necessary for robust dataset processing, ensuring that AI teams can efficiently manage their computational resources according to demand.

We recognize the importance of AI inference in deploying real-world applications. Hence, we provide a resilient and cost-effective infrastructure that has been optimized for rapid deployment of Generative AI applications. Our services span the entire lifecycle of AI solutions, from model training to inference, making Nebius not just a GPU cloud but a full-stack AI platform. Additionally, we pride ourselves on supporting our clients with 24/7 expert guidance, offering resources to help architects and engineers harness our AI-optimized data centers to build scalable solutions.
