Keynote Talk: Introduction to Distributed ML Workloads with Ray on Kubernetes

Abstract:
The rapidly evolving landscape of Machine Learning and Large Language Models demands efficient, scalable ways to run distributed workloads for training, fine-tuning, and serving models. Ray is an open-source framework that simplifies distributed machine learning, and Kubernetes streamlines deployment.
In this introductory talk, we’ll uncover how to combine Ray and Kubernetes for your ML projects.
You will learn about:

  • Basic Ray concepts (actors, tasks) and their relevance to ML
  • Setting up a simple Ray cluster within Kubernetes
  • Running your first distributed ML training job
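To give a flavor of the second step, a Ray cluster on Kubernetes is typically declared through the KubeRay operator's RayCluster custom resource. The manifest below is a minimal sketch, assuming the KubeRay operator is already installed in the cluster; the resource name, image tag, and CPU/memory sizes are illustrative choices, not recommendations.

```yaml
# Minimal RayCluster sketch for the KubeRay operator.
# Assumes KubeRay is installed; values below are illustrative.
apiVersion: ray.io/v1
kind: RayCluster
metadata:
  name: demo-raycluster
spec:
  headGroupSpec:
    rayStartParams:
      dashboard-host: "0.0.0.0"   # expose the Ray dashboard on the head pod
    template:
      spec:
        containers:
          - name: ray-head
            image: rayproject/ray:2.9.0
            resources:
              limits:
                cpu: "2"
                memory: 4Gi
  workerGroupSpecs:
    - groupName: workers
      replicas: 2        # start with two worker pods
      minReplicas: 1
      maxReplicas: 4
      rayStartParams: {}
      template:
        spec:
          containers:
            - name: ray-worker
              image: rayproject/ray:2.9.0
              resources:
                limits:
                  cpu: "1"
                  memory: 2Gi
```

Applying this with `kubectl apply -f raycluster.yaml` creates one head pod and two worker pods that together form a Ray cluster ready to accept distributed training jobs.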
Dates

  • March 11, 2026 – Abstract submission deadline
  • March 18, 2026 – Paper submission deadline
  • April 22, 2026 – Accept/Reject notification
  • June 10–12, 2026 – Netys Conference


Partners & Sponsors (TBA)