RGA-algorithm for black-box global optimization.

RGA is an acronym which stands for Regularized Global Approximation.

The RGA-algorithm for black-box global optimization is a tool for finding the global optimum/optima of a cost (objective) function (OF) of n variables, subject to general constraints; the goal is to get the result with as few OF-calls as possible.

The OF is not assumed to be differentiable or continuous *, and the values returned by the OF in response to the relevant OF-calls can be corrupted with noise.

The algorithm is in fact a smooth nonlinear estimator with unsupervised learning.

It is a powerful tool for optimization problems governed by OFs that are expensive and/or defined in multi-dimensional spaces, and for the estimation of (vector-valued) functions or functionals defined on multi-parameter families of functions.

The RGA-algorithm has no external adjustable parameters; all parameters are self-adapted.

- The algorithm allows one:

1. to get an approximation f of a function F of n variables, i.e. to estimate the OF at any point of a given domain;
2. to find the extreme points (minima/maxima/global extremum) of the function F in the given domain.

- The algorithm requires:

1. to indicate a finite domain of search/constraints;
2. to evaluate the OF on the set of n-D points which are implied by the algorithm.

- The algorithm adjusts:

all of its intrinsic points, which are representative for the OF.

- The algorithm is based upon:

Regularized Global Approximation of the OF (the RGA-algorithm).

Given the values of the OF on an (initial) set of n-D starting points, a differentiable (regularized) approximation f of the function F is constructed over the entire domain of search (global approximation).

The next step consists in expanding the initial set of points by all the extreme points (minima and maxima) of the (differentiable) approximation f. The OF should be evaluated at these points. The expanded set of OF-values then constitutes a basis for constructing a new approximation.

So, at every step ('try') the algorithm "enriches" the previous set of points and constructs a new approximation f after the OF returns its values on the subset of new (intrinsic) points.
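The enrichment loop just described can be illustrated with a toy surrogate-based sketch. Everything here is an assumption for illustration only: the fixed-width Gaussian-kernel interpolant standing in for f, the crude multistart gradient descent used to locate its minima, and Rosenbrock's function as the black-box OF. The actual RGA self-adapts all such parameters.

```python
import numpy as np

def rosenbrock(x):
    """Stand-in black-box objective (OF) with a long curved valley."""
    return (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2

def fit_surrogate(X, F, t=0.1, ridge=1e-8):
    """Fit a smooth global approximation f (Gaussian-kernel interpolant)
    to the OF values F sampled at the points X; return its gradient too."""
    K = np.exp(-((X[:, None] - X[None, :]) ** 2).sum(-1) / (4 * t))
    c = np.linalg.solve(K + ridge * np.eye(len(X)), F)
    def f(x):
        return np.exp(-((x - X) ** 2).sum(-1) / (4 * t)) @ c
    def grad(x):
        w = np.exp(-((x - X) ** 2).sum(-1) / (4 * t)) * c
        return (w[:, None] * (X - x)).sum(0) / (2 * t)
    return f, grad

def descend(x, grad, lr=1e-3, steps=200, lo=-2.0, hi=2.0):
    """Crude projected gradient descent toward a local minimum of f."""
    for _ in range(steps):
        x = np.clip(x - lr * grad(x), lo, hi)
    return x

rng = np.random.default_rng(1)
X = rng.uniform(-2.0, 2.0, size=(12, 2))            # Starter: random injection
F = np.array([rosenbrock(x) for x in X])
for _try in range(5):                               # enrichment 'tries'
    f, grad = fit_surrogate(X, F)
    starts = X[np.argsort(F)[:4]]                   # descend from the best points
    new = np.array([descend(s, grad) for s in starts])
    F_new = np.array([rosenbrock(x) for x in new])  # OF evaluated at new points
    X, F = np.vstack([X, new]), np.concatenate([F, F_new])
x_star = X[np.argmin(F)]                            # predicted global minimum
```

Each 'try' enriches the point set with the surrogate's candidate minima and refits, in the spirit of the loop above, though without any of the RGA's self-adaptation.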

The approximate function f can be interpreted as a prediction of the true value at any point of the domain of search.

The algorithm terminates when the minimum of the OF-values calculated so far is attained precisely at the point that was predicted to be the point of the global minimum, provided the value itself is well predicted.
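A minimal reading of this stopping rule, with hypothetical names and hand-picked tolerances (the text does not specify the reliability check in detail): stop when the best point evaluated so far coincides with the predicted global minimizer and the predicted value agrees with the observed one.

```python
import numpy as np

def should_stop(x_best, F_best, x_pred, f_pred, tol_x=1e-3, tol_f=1e-2):
    """x_best/F_best: best evaluated point and its OF value so far;
    x_pred/f_pred: point and value predicted by the approximation f.
    Tolerances are hypothetical; a noisy OF would need moment-based ones."""
    same_point = np.linalg.norm(np.asarray(x_best) - np.asarray(x_pred)) < tol_x
    well_predicted = abs(F_best - f_pred) < tol_f * (1.0 + abs(F_best))
    return bool(same_point and well_predicted)
```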

Block-scheme of the RGA-algorithm **.

Starter: "injection" of a set of random points {x_k | k = 1, 2, ..., K}. A black box (an external routine / controlled process) returns the values {F(x_k)} of the objective function (when noisy, the first two moments of the noise are assumed to be given).

Predictor: approximation *** of F by f:

[ S_k (f(x_k) - F(x_k))^2 + ((f), e^{-tL}(f)) ] -> min

(L is a Laplacian, t is a parameter of regularization; we called the functional "diffusion regularization" because of {d_t - L}, and the relevant space a "Sobolev space of infinite order" because of the expansion of the exponential operator; the first term is a conventional misfit function in a Euclidean K-dimensional space, K being the total number of points {x_k}; the second term is an inner product in an infinite-dimensional Hilbert space with the weight e^{-tL}).

Self-learning circuit: the last modification of the RGA concerned the parameter of regularization: instead of a scalar parameter of regularization t we introduced a tensor ("anisotropic smoothness"). Tests of the RGA with Rosenbrock's function, which has a long curved valley, showed that the relevant estimate of the tensor is fairly reliable. See also the Lennard-Jones cluster folding.

Corrector, and so forth: since we get a differentiable linear estimate f of the function F, and the Green's function of e^{-tL} is so "pretty" (= e^{tL}), it is easy to estimate all the necessary gradients/Hessians/etc. at any point of interest and thus, starting from the given points {x_k}, to find a set of local extrema of f. Then x* = arg min_{x_k} {F(x_k)} is the predicted point of the global minimum. Afterwards it is only necessary to check the reliability of the prediction ("stopping rule").
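The Predictor step can be made concrete with a standard kernel identity: the Green's function of the diffusion (heat) operator is a Gaussian, so minimizing the misfit plus the e^{-tL}-weighted norm reduces, in the simplest scalar-t case, to a linear solve with a Gaussian Gram matrix. The sketch below is an illustration under that assumption, with t fixed by hand rather than self-adapted as in the RGA.

```python
import numpy as np

def heat_kernel(X, Y, t):
    """Gaussian (heat) kernel exp(-|x - y|^2 / (4 t)): the Green's function
    of the diffusion operator, standing in for the weight e^{-tL}."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (4.0 * t))

def fit_regularized_approximation(X, F, t, ridge=1e-10):
    """Minimize S_k (f(x_k) - F_k)^2 + regularizing norm: in the kernel
    picture this is the linear solve (K + ridge I) c = F, with
    f(x) = sum_k c_k k(x, x_k)."""
    K = heat_kernel(X, X, t)
    coef = np.linalg.solve(K + ridge * np.eye(len(X)), F)
    return lambda Xq: heat_kernel(np.atleast_2d(Xq), X, t) @ coef

# Toy check on a regular grid: the smooth global approximation f nearly
# interpolates the sampled objective values.
grid = np.array([[i, j] for i in np.linspace(-1, 1, 5)
                 for j in np.linspace(-1, 1, 5)])
F_vals = np.sin(3 * grid[:, 0]) + grid[:, 1] ** 2   # stand-in OF values
f = fit_regularized_approximation(grid, F_vals, t=0.05)
err = np.max(np.abs(f(grid) - F_vals))
```

Gradients and Hessians of f are then available in closed form from the same Gaussian kernel, which is what the Corrector step exploits.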

A few illustrations of RGA-tests/applications

1. Gennady A. Ryzhikov and Marina S. Biryulina, 1995, 3D nonlinear inversion by Entropy of Image Contrast (EnIC) optimization.
2. Ryzhikov, G.A., Biryulina, M.S. and Keydar, S., 1995, Analysis of 1D seismic waveform inversion by Regularized Global Approximation.

* Of course, the OF is assumed to be continuous/smooth in a small vicinity of a global minimum.
** Calligraphic F stands here for the approximate function f.
*** By no means does our approximation lead to a loss of accuracy: see, e.g., protein folding. Due to the use of the so-called "partly unbiased estimation" we are in fact dealing with interpolation/extrapolation of the function F.

Gennady Ryzhikov