Title: On Derivative-Free Optimization for Machine Learning
Abstract: Traditional machine learning training relies on gradient-based optimization techniques such as stochastic gradient descent (SGD). However, in many practical settings, such as noisy objectives or black-box models, gradient information may be unavailable or unreliable. This talk will explore Derivative-Free Optimization (DFO) methods, which optimize models without requiring gradient computations. I will discuss two main DFO techniques, namely direct search methods and the Stochastic Three Points (STP) method, highlighting their advantages and limitations. Through theoretical insights and practical case studies, I will showcase how DFO can be effectively applied to train machine learning models in challenging settings.
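To give a flavor of the kind of method the talk covers, here is a minimal Python sketch of one common variant of the Stochastic Three Points idea: at each iteration, sample a random direction, evaluate the objective at the current point and at one step forward and backward along that direction, and keep the best of the three. The function name `stp`, the decaying step-size schedule, and the quadratic test problem are illustrative assumptions, not material from the talk itself.

```python
import numpy as np

def stp(f, x0, iters=1000, alpha0=0.5, seed=0):
    """Illustrative sketch of a Stochastic Three Points (STP) style method.

    Uses only function evaluations -- no gradients are ever computed.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for k in range(1, iters + 1):
        alpha = alpha0 / np.sqrt(k)        # decaying step size (one common choice)
        s = rng.standard_normal(x.shape)
        s /= np.linalg.norm(s)             # random direction on the unit sphere
        # Compare the three points x, x + alpha*s, x - alpha*s and keep the best.
        candidates = [(fx, x),
                      (f(x + alpha * s), x + alpha * s),
                      (f(x - alpha * s), x - alpha * s)]
        fx, x = min(candidates, key=lambda c: c[0])
    return x, fx

# Example: minimize a simple quadratic without any gradient information.
x_star, f_star = stp(lambda x: np.sum(x ** 2), np.ones(10))
```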
Dates
March 1st, 2025 → March 15th, 2025
Abstract submission deadline
March 8th, 2025 → March 15th, 2025
Paper submission deadline
April 14th, 2025
Accept/Reject notification
May 21-23, 2025
NETYS Conference