Keynote Talk: Introduction to Distributed ML Workloads with Ray on Kubernetes
Abstract:
The rapidly evolving landscape of Machine Learning and Large Language Models demands efficient, scalable ways to run distributed workloads to train, fine-tune, and serve models. Ray is an open-source framework that simplifies distributed machine learning, and Kubernetes streamlines deployment.
In this introductory talk, we’ll uncover how to combine Ray and Kubernetes for your ML projects.
You will learn about:
- Basic Ray concepts (actors, tasks) and their relevance to ML (see the sketch after this list)
- Setting up a simple Ray cluster within Kubernetes
- Running your first distributed ML training job
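To give a concrete feel for the first bullet, here is a minimal sketch of Ray tasks and actors. It is not material from the talk itself: it assumes only a local Ray installation (`pip install ray`), and the `square` function and `Counter` actor are illustrative names.

```python
# Minimal sketch of Ray tasks and actors, assuming a local `pip install ray`.
import ray

ray.init()  # starts a local Ray runtime; on Kubernetes this would connect to the cluster instead

# A task: a stateless Python function that Ray schedules on any available worker.
@ray.remote
def square(x):
    return x * x

# An actor: a stateful worker process; method calls run remotely against its state.
@ray.remote
class Counter:
    def __init__(self):
        self.total = 0

    def add(self, value):
        self.total += value

    def get_total(self):
        return self.total

# Launch four tasks in parallel and collect their results.
futures = [square.remote(i) for i in range(4)]
print(ray.get(futures))  # [0, 1, 4, 9]

# Accumulate the results in an actor to show stateful, remote method calls.
counter = Counter.remote()
for value in ray.get(futures):
    counter.add.remote(value)
print(ray.get(counter.get_total.remote()))  # 14
```

In an ML setting, the same pattern can run data-parallel work as tasks and keep model or optimizer state in actors; on Kubernetes, the KubeRay operator is commonly used to provision the cluster that `ray.init()` attaches to.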
Dates
- Abstract submission deadline: February 29, 2024 (extended to March 11, 2024)
- Paper submission deadline: March 7, 2024 (extended to March 18, 2024)
- Accept/Reject notification: April 22, 2024
- Camera-ready copy due: May 12, 2024
- Metis Spring School: May 27-28, 2024
- Netys Conference: May 29-31, 2024