Title: On Derivative-Free Optimization for Machine Learning
Abstract: Traditional machine learning training methods rely on gradient-based optimization techniques such as stochastic gradient descent (SGD). In many practical scenarios, however, such as noisy objectives or black-box models, gradient information may be unavailable or unreliable. This talk will explore Derivative-Free Optimization (DFO) methods, which optimize models without requiring gradient computations. I will discuss two main DFO techniques, namely direct search methods and the stochastic three-points method, highlighting their advantages and limitations. Through theoretical insights and practical case studies, I will showcase how DFO can be effectively applied to train machine learning models in challenging settings.
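To give a flavor of the methods discussed, here is a minimal sketch of the stochastic three-points (STP) idea: at each iteration, sample a random direction, evaluate the objective at the current point and at a step forward and backward along that direction, and keep the best of the three. The step-size schedule, function names, and test objective below are illustrative assumptions, not the speaker's exact algorithm.

```python
import numpy as np

def stp(f, x0, alpha0=1.0, iters=500, seed=0):
    """Sketch of stochastic three-points (STP): compare f at x, x + alpha*s,
    and x - alpha*s for a random unit direction s, and keep the best point.
    The decaying step size alpha0/sqrt(k) is a common illustrative choice."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for k in range(1, iters + 1):
        alpha = alpha0 / np.sqrt(k)       # decaying step size (assumption)
        s = rng.standard_normal(x.shape)
        s /= np.linalg.norm(s)            # random unit direction
        for cand in (x + alpha * s, x - alpha * s):
            fc = f(cand)
            if fc < fx:                   # greedily keep the best of the three
                x, fx = cand, fc
    return x, fx

# Usage: minimize a simple quadratic without any gradient information.
x_opt, f_opt = stp(lambda z: np.sum((z - 3.0) ** 2), np.zeros(5))
```

Note that only function evaluations are used, which is exactly what makes such methods applicable to the noisy and black-box settings the talk targets.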
Dates
March 11, 2026: Abstract submission deadline
March 18, 2026: Paper submission deadline
April 22, 2026: Author notification
June 10-12, 2026: NETYS Conference