Autoware.Auto
More-Thuente Line Search

We implement the More-Thuente Line Search as one of the available line search methods. It implements the CRTP interface of the LineSearch class from line_search.hpp.
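For illustration, below is a minimal sketch of what such a CRTP interface could look like. All names here (LineSearch, compute_step_length, compute_step_length_impl, MoreThuenteLineSearch) are simplified placeholders and may not match the actual declarations in line_search.hpp.

```cpp
// Hypothetical sketch of a CRTP line search interface (all names are placeholders).
template <typename Derived>
class LineSearch
{
public:
  // Dispatch to the concrete line search at compile time, without virtual calls.
  template <typename ObjectiveFunction>
  double compute_step_length(const ObjectiveFunction & phi)
  {
    return static_cast<Derived *>(this)->compute_step_length_impl(phi);
  }
};

class MoreThuenteLineSearch : public LineSearch<MoreThuenteLineSearch>
{
public:
  template <typename ObjectiveFunction>
  double compute_step_length_impl(const ObjectiveFunction & phi)
  {
    (void)phi;  // The real implementation would run the iterative search described below.
    return 0.0;
  }
};
```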

The implementation closely follows the paper "Line Search Algorithms with Guaranteed Sufficient Decrease" by Jorge J. Moré and David J. Thuente. A short summary of that paper is given in one of the sections below.

Target use cases

Line search algorithms are commonly used to aid optimization methods such as Newton's method.

The More-Thuente line search is more robust than a line search with a fixed step. It guarantees picking a step that yields a sufficient decrease in the objective function, while at the same time aiming to keep the resulting value of the objective function close to the optimum.

We aim to use this method as an aid to Newton's method used in the NDT implementation for localization.

Assumptions

We assume here that one of the following two cases holds:

  • We search for a minimum of the function \(f\) starting at some point \(x_0\). Then \(\phi^\prime(0) = f^\prime(x_0) < 0\).
  • We search for a maximum of the function \(f\) starting at some point \(x_0\). Then \(\phi^\prime(0) = f^\prime(x_0) > 0\).

The implementation will implicitly assume the type of the problem (minimization vs maximization) based on the sign of the derivative \(f^\prime(x_0)\).

Additionally, the step \(\alpha\) as well as its bounds must be non-negative.
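As an illustration of this convention, here is a small sketch of how the sign of the initial derivative could be used to decide between the two cases. The names below are only for illustration and do not mirror the actual implementation.

```cpp
// Hypothetical sketch: decide minimization vs. maximization from the sign of f'(x_0).
// derivative_at_start stands for f'(x_0); names are illustrative only.
enum class OptimizationKind { kMinimization, kMaximization };

inline OptimizationKind detect_optimization_kind(double derivative_at_start)
{
  // f'(x_0) < 0: moving along the search direction decreases f, so we minimize.
  // f'(x_0) > 0: moving along the search direction increases f, so we maximize.
  return derivative_at_start < 0.0 ? OptimizationKind::kMinimization
                                   : OptimizationKind::kMaximization;
}
```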

Short paper recap

Definitions

It is assumed that the objective function \(\phi: \mathbb{R} \rightarrow \mathbb{R}\), defined on \([0, \infty)\), is smooth and continuously differentiable with \(\phi^\prime(0) < 0\). We search for a step \(\alpha > 0\) such that the so-called Strong Wolfe Conditions hold (Equations 1.1 and 1.2 in the paper):

\[ \phi(\alpha) \le \phi(0) + \mu \phi^\prime(0) \alpha \\ |\phi^\prime(\alpha)| \le \eta |\phi^\prime(0)| \]

Usually, \(\mu\) is a small value below \(1/2\) and \(\eta\) is a value close to \(1\). Note that \(\mu \le \eta\).
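To make these conditions concrete, here is a small sketch of how they could be checked for a candidate step. Here phi_0, dphi_0, phi_t and dphi_t stand for \(\phi(0)\), \(\phi^\prime(0)\), \(\phi(\alpha_t)\) and \(\phi^\prime(\alpha_t)\); the default values chosen for mu and eta are only examples, not the ones used in the implementation.

```cpp
#include <cmath>

// Hypothetical sketch: check the Strong Wolfe Conditions for a candidate step alpha_t.
// phi_0 = phi(0), dphi_0 = phi'(0), phi_t = phi(alpha_t), dphi_t = phi'(alpha_t).
inline bool strong_wolfe_conditions_hold(
  double alpha_t, double phi_0, double dphi_0, double phi_t, double dphi_t,
  double mu = 1.0e-4, double eta = 0.9)
{
  // Sufficient decrease condition (Equation 1.1 in the paper).
  const bool sufficient_decrease = phi_t <= phi_0 + mu * dphi_0 * alpha_t;
  // Curvature condition (Equation 1.2 in the paper).
  const bool curvature = std::fabs(dphi_t) <= eta * std::fabs(dphi_0);
  return sufficient_decrease && curvature;
}
```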

In our case, with a function \(f(x)\) being optimized, a starting point \(x_0\), an optimization direction \(p\), and a step length \(\alpha\), we define the function \(\phi\) as follows (Equation 1.3 in the paper):

\[ \phi(\alpha) \equiv f(x_0 + \alpha p), \hspace{5mm} \alpha \geq 0 \]

During the procedure we make use of an auxiliary function \(\psi(\alpha)\) defined as follows (just before Equation 2.1 in the paper):

\[ \psi(\alpha) \equiv \phi(\alpha) - \mu \phi^\prime(0) \alpha \]
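As a sketch of how these two functions could be expressed in code, assuming the objective \(f\) and its derivative are available as callables (all names below are illustrative and restricted to a one-dimensional problem for brevity):

```cpp
#include <functional>

// Hypothetical sketch: phi and psi built from f and f', here 1D callables for brevity.
// x_0 is the starting point, p the search direction, mu the sufficient decrease parameter.
struct LineSearchFunctions
{
  std::function<double(double)> f;   // objective f(x)
  std::function<double(double)> df;  // derivative f'(x)
  double x_0;
  double p;
  double mu;

  // phi(alpha) = f(x_0 + alpha * p)
  double phi(double alpha) const { return f(x_0 + alpha * p); }
  // phi'(alpha) = f'(x_0 + alpha * p) * p  (chain rule)
  double dphi(double alpha) const { return df(x_0 + alpha * p) * p; }
  // psi(alpha) = phi(alpha) - mu * phi'(0) * alpha
  double psi(double alpha) const { return phi(alpha) - mu * dphi(0.0) * alpha; }
};
```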

Iterative search for step length

The algorithm can be summarized as follows (it mirrors the Search Algorithm in Section 2 of the paper); a simplified code sketch of the loop is given after the list below.

Note
This algorithm uses the function \(\psi\) in steps 2 and 3 until the following holds:

\[ \psi(\alpha_t) \le 0, \hspace{10mm} \phi^\prime(\alpha_t) > 0 \]

Once this condition becomes true, the algorithm switches to using the function \(\phi\) in steps 2 and 3 instead.

For a given step \(\alpha_t\) and an interval of values \([\alpha_l, \alpha_u]\):

  1. Check if the Strong Wolfe Conditions hold for \(\alpha_t\). If they do, terminate the procedure with \(\alpha_t\) as the result.
  2. Generate the next step length from the interval \([\alpha_l, \alpha_u]\) and the current step using either the function \(\psi\) or \(\phi\). This part is described in Section 4, "Trial value selection", of the paper.
  3. Update the interval \([\alpha_l, \alpha_u]\) using either the function \(\psi\) or \(\phi\) and the current step \(\alpha_t\). This part is covered by two algorithms: the Updating Algorithm (right after Theorem 2.1 in the paper), used while the function \(\psi\) is still in use, and the Modified Updating Algorithm (shown after Theorem 3.2 in the paper), used after the switch to the function \(\phi\). These algorithms differ solely in the function used within them.
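Putting the steps together, a heavily simplified sketch of the overall loop could look as follows. The functions object is assumed to expose phi, dphi and psi as in the earlier sketch; select_trial_value and update_interval stand in for the paper's trial value selection and (modified) updating algorithms and are passed in as callables only to keep the sketch self-contained. None of these names belong to the actual implementation.

```cpp
#include <cmath>
#include <cstddef>

// Hypothetical sketch of the More-Thuente search loop (all names are placeholders).
template <typename Functions, typename TrialValueSelector, typename IntervalUpdater>
double more_thuente_search(
  const Functions & fn, const TrialValueSelector & select_trial_value,
  const IntervalUpdater & update_interval, double alpha_t, double alpha_l, double alpha_u,
  double mu, double eta, std::size_t max_iterations)
{
  const double phi_0 = fn.phi(0.0);
  const double dphi_0 = fn.dphi(0.0);
  bool use_auxiliary_psi = true;  // Start with psi; switch to phi later (see the note above).

  for (std::size_t i = 0U; i < max_iterations; ++i) {
    const double phi_t = fn.phi(alpha_t);
    const double dphi_t = fn.dphi(alpha_t);

    // Step 1: terminate as soon as the Strong Wolfe Conditions hold for alpha_t.
    const bool sufficient_decrease = phi_t <= phi_0 + mu * dphi_0 * alpha_t;
    const bool curvature = std::fabs(dphi_t) <= eta * std::fabs(dphi_0);
    if (sufficient_decrease && curvature) {
      return alpha_t;
    }

    // Switch from psi to phi once psi(alpha_t) <= 0 and phi'(alpha_t) > 0 (see the note above).
    if (use_auxiliary_psi && (fn.psi(alpha_t) <= 0.0) && (dphi_t > 0.0)) {
      use_auxiliary_psi = false;
    }

    // Step 2: pick the next trial step from [alpha_l, alpha_u] and alpha_t.
    const double alpha_next = select_trial_value(fn, alpha_l, alpha_u, alpha_t, use_auxiliary_psi);
    // Step 3: update the interval using the (modified) updating algorithm.
    update_interval(fn, alpha_l, alpha_u, alpha_t, use_auxiliary_psi);

    alpha_t = alpha_next;
  }
  return alpha_t;
}
```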