GradientLess Descent
Introduction
Motivation and Set-up
A general optimisation problem asks us to minimise an objective function [math]\displaystyle{ f : \mathbb{R}^n \to \mathbb{R} }[/math], that is, to find: \begin{align*} x^* = \mathrm{argmin}_{x \in \mathbb{R}^n} f(x) \end{align*}
Depending on the nature of [math]\displaystyle{ f }[/math], different settings may be considered:
- Convex vs non-convex objective functions;
- Differentiable vs non-differentiable objective functions;
- Allowed function or gradient computations;
- Noisy/Stochastic oracle access.
For the purpose of this paper, we consider smooth, convex, noiseless objective functions, for which we have access to function evaluations but not gradient evaluations. This class of problems is common in practice; for instance, it appears frequently in the reinforcement learning literature.
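To make this access model concrete, the following is a minimal sketch of zeroth-order minimisation in Python. It is not the paper's GradientLess Descent algorithm; the routine name <code>zeroth_order_minimise</code>, the shrinking-radius schedule, and the accept-if-improved rule are illustrative assumptions. The only information it ever uses about the objective is its value [math]\displaystyle{ f(x) }[/math] at queried points.

<syntaxhighlight lang="python">
import numpy as np

def zeroth_order_minimise(f, x0, n_iters=500, radius=1.0, seed=0):
    """Illustrative zeroth-order routine: at each step, try a random
    perturbation of the current point and keep it only if it lowers f.
    Only function evaluations f(x) are used; no gradients."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for t in range(n_iters):
        # Shrink the search radius over time (an assumed, illustrative schedule).
        r = radius / np.sqrt(t + 1)
        direction = rng.standard_normal(x.shape)
        direction /= np.linalg.norm(direction)
        candidate = x + r * direction
        fc = f(candidate)
        if fc < fx:  # accept only moves that decrease the objective
            x, fx = candidate, fc
    return x, fx

# Example: a smooth convex quadratic, queried only through its values.
f = lambda x: np.sum((x - 3.0) ** 2)
x_star, f_star = zeroth_order_minimise(f, x0=np.zeros(5))
print(x_star, f_star)
</syntaxhighlight>

On the quadratic example above, the iterates drift toward the minimiser at [math]\displaystyle{ (3, \dots, 3) }[/math] even though the routine never sees a gradient, which is exactly the setting studied in the rest of this summary.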