DOI 10.33111/nfmte.2018.003

CALIBRATION OF DUPIRE LOCAL VOLATILITY MODEL USING GENETIC ALGORITHM OF OPTIMIZATION

Maksym Bondarenko  

Master’s Degree in International Economics, 

Master’s Degree in Quantitative Finance and Risk Management,  Consultant Developer C++ 

CapFi Groupe 

25–27 Place de la Madeleine, Paris, 75008, France   [email protected] 

Victor Bondarenko  

PhD (Technical Sciences), Docent,  Associate Professor of Department of Design  

of Electronic Computational Equipment 

National Technical University of Ukraine «Igor Sikorsky Kyiv Polytechnic Institute» 

37 Peremohy Avenue, Kyiv, 03056, Ukraine  [email protected] 

The problem of calibrating the Dupire local volatility model has been formalized. A genetic algorithm is used as an alternative to the regularization approach with subsequent application of a gradient descent algorithm.

Components that solve Dupire's partial differential equation, which describes the dynamics of the underlying asset's price within the Dupire model, have been built. This price depends in particular on the values of the volatility parameters. Local volatility is parametrized in two dimensions (following the Dupire model): time to maturity of the option and strike price (exercise price). Linear interpolation is used on the maturity axis, while B-splines are used on the strike axis. Genetic operators of mutation and selection are then applied to the parameters of the B-splines.

The resulting parameters allow us to obtain the values of local volatility both at knot points and at intermediate points using interpolation techniques. We then solve the Dupire equation and calculate the model values of option prices.

To calculate the cost function, we simulate market values of option prices using the classic Black-Scholes model.

An experimental study comparing the simulated market volatility with the volatility obtained by calibrating the Dupire model has been conducted. Its goal is to estimate the precision of the approach and its usability in practice.

To estimate the precision of the obtained results, we use a measure based on the average deviation of the modeled local volatility from the values used to simulate the market prices of the options.

The research has shown that the calibration approach using a genetic optimization algorithm requires some additional manipulations to achieve convergence. In particular, it requires non-uniform discretization of the space of model parameters as well as the use of de Boor interpolation. The value 0.07 turns out to be the most efficient mutation parameter; using it leads to the quickest convergence. It has been shown that the algorithm allows precise calibration of the local volatility surface from option prices.

Keywords: genetic algorithm, stochastic optimization, local volatility, implied volatility, calibration, partial differential equations, Black-Scholes model, Dupire model.



JEL Classification: C15, C61, G12

1. Introduction

1.1. Motivation

Since the famous Black-Scholes model was introduced in 1973, option markets have evolved to become autonomous, organized markets with a fairly high degree of liquidity, especially for index options and foreign exchange options. In such markets, the market prices of a series of liquid options, which are often call or put options, are readily observed. These market prices are then used as a benchmark to «mark to market» or calibrate an option pricing model, which can then be used to compute prices of more complex («exotic») options or compute hedge ratios [1].

Determining the underlying asset's volatility from a set of observed market prices is called the «model calibration» problem. It is the inverse of the option pricing problem.

Since the financial crash of 1987, practitioners have noticed that a constant volatility parameter, calibrated as implied volatility from the standard Black-Scholes model, is no longer adequate. In reality, the volatility varies across strikes and maturity dates of benchmark options. This has led to the development of generalized models, including local and stochastic volatility models.

In the parametric approach the volatility parameter is defined as a finite-dimensional vector (as in local volatility, stochastic volatility or CEV models). The calibration problem consists in finding such parameter values that the set of prices obtained from the model matches the set of observed market prices.

However, in practice the market information is not sufficient to completely identify a pricing model. If the model is sufficiently rich, several sets of model parameters can be compatible with market prices, leading to model ill-posedness and uncertainty [1].

In practice the calibration problem is posed as an optimization problem: the goal is to match the market prices as closely as possible. The common measure of discrepancy between model prices and observed market prices is the sum of squared differences between them.

The weighted cost function is defined as follows:

G(\theta) = \sum_{i=1}^{I} w_i \left[ \mathrm{Call}^{model}\left(t, S_t, T_i, K_i, \theta\right) - \mathrm{Call}^{market}\left(T_i, K_i\right) \right]^2 , (1)

where S – spot price of the underlying asset; K – strike price of the option; T – maturity of the option; w – weight of the given option in cost function; t – time at which we price the option (normally today’s date).
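As a minimal illustration of (1), the weighted sum of squared deviations can be computed as follows (the function and variable names are ours; in this work the model prices would come from the Dupire solver, and the market prices are simulated with Black-Scholes):

```python
import numpy as np

def cost_function(model_prices, market_prices, weights):
    """Weighted sum of squared deviations between model and market call
    prices over the option grid, as in equation (1)."""
    diff = model_prices - market_prices
    return float(np.sum(weights * diff ** 2))

# Toy check: a perfect fit gives zero cost.
market = np.array([10.0, 7.5, 5.2])
weights = np.array([1.0, 2.0, 1.0])   # heavier weight near the money
print(cost_function(market, market, weights))        # 0.0
print(cost_function(market + 0.1, market, weights))  # ≈ 0.04
```

The weights implement the idea, used later in the paper, that near-the-money options should contribute more to the fit than less tradable ones.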

To minimize this cost function, classical gradient methods could be used. However, the cost function is neither convex nor does it have any particular structure enabling the use of gradient-based minimization methods to locate the minima [1]. Moreover, the function is not given explicitly, and solutions are obtained by solving a partial differential equation or executing a Monte-Carlo simulation. Finally, we cannot even be sure that a gradient-based algorithm will not fall into a local minimum instead of the global one.

There are several approaches that regularize the ill-posed problem and introduce convexity penalization criteria to ensure the ability of a gradient-based algorithm to find a well-matching set of parameters. The motivation of this work is to implement an alternative approach, introduced by Ben-Hamida and Cont, that does not suppress the ill-posed character of the problem but rather reflects it. A genetic optimization algorithm is used to overcome the problems of non-convexity and multiplicity of local minima.

1.2. Literature review 

The current research is inspired by the results of the paper by Ben-Hamida and Cont [1]. The authors introduce a probabilistic approach for calibrating the local volatility surface from a set of observed market prices. The calibration uses a genetic algorithm: its steps are applied to the local volatility surface represented as a cubic B-spline on the strike axis with linear interpolation on the maturity axis. The parameters of the B-spline are exposed to cycles of genetic transformations that lead to obtaining the fittest surface.

Ben-Hamida and Cont claim in their article that the Dupire formula for obtaining local volatility from market prices is unstable: Dupire [2] presents a formula for reconstructing local volatility functions from a continuum of call option prices; however, this formula involves taking derivatives of discrete data and is numerically unstable. A discretized version of the Dupire formula is the implied tree method of Derman et al. (1996), which is prone to similar instabilities leading to «negative probabilities» [1]. That is why Ben-Hamida and Cont describe an alternative way to calibrate the local volatility directly.

Raphael Cerf [3] and Del Moral [4] introduce the main steps of the particle genetic algorithm and prove its convergence. Cerf proposes to decrease the temperature parameter over the algorithm's iterations. He suggests that this progressive increase of selection pressure will lead the population to concentrate around the global minimum. An alternative implementation of the natural (binary) genetic algorithm is described by Frank Dieterle [5] and by Noraini Mohd Razali and John Geraghty [6]. Both works focus on the importance of the selection procedure in the binary approach. For this research we have nevertheless chosen the particle algorithm, as it requires less computational cost in the general case.

J. Frederic Bonnans, Jean-Marc Cognet and Sophie Volle [7] use the Newton-Raphson algorithm for calibration of local volatility and regularize the problem using the same regularization term as Lagnado and Osher [8]. As an alternative, they introduce another way to regularize the problem: a multiscale approach. Howison [9] claims that the regularization method of Lagnado and Osher [8] has some drawbacks, among which is its computational cost. Ben-Hamida and Cont, however, reject the use of regularization, as it is not necessary for direct calibration of local volatility. In this research we do not add any regularization terms to the cost function.

Bonnans, Cognet and Volle also introduce the use of an implicit scheme to solve the Dupire PDE, which permits us to obtain model prices. In [7] the authors show that an implicit scheme with uniform discretization gives incoherent results for near-the-strike prices, while non-uniform discretization permits us to obtain more precise prices near the strike. We use the same non-uniform scheme in our research. Howison [9], as well as Ben-Hamida and Cont, uses weighted minimization (the weight of near-the-strike prices in the cost function is larger than the weight of less tradable options). The results of this research are obtained using the weighted approach as well.

Li and Coleman [10] introduce a cubic spline approximation of the local volatility function. In our research, however, we keep the B-spline representation proposed by Ben-Hamida and Cont [1], as it is well adjusted to the calibration approach that they use. In future work we suggest using De Boor's algorithm of spline interpolation, as it better reflects the optimization problem and can possibly improve convergence.

1.3. Research objectives 

The objective of the current research is to present the state of the art in the local volatility calibration problem and to implement an algorithm that solves it. We aim to obtain coherent calibration results while avoiding the high dimensionality of the problem and reducing computational costs by applying interpolation techniques.

We also aim to enhance the existing approach by improving the algorithm's components. Introducing and calculating a measure of quality of the calibration (Section 2.3) is a novelty of the current work.

First, we give a general view of local volatility calibration and introduce a formal representation of the problem, including the cost function. We also give a brief overview of the particle genetic algorithm.

Then we introduce the Dupire framework and the challenges that arise within it. The goal is to provide a methodological guide to implementing the next building block of the algorithm: the Dupire differential equation solver used to calculate the cost function at every iteration of the genetic algorithm.

Next, we introduce the requirements for the input of the optimization problem: the local volatility surface. We also provide a guideline for creating a smooth representation of the local volatility surface using B-spline interpolation with uniform knots.

In the last section we provide numerical results obtained by running the genetic algorithm to calibrate the local volatility surface. We estimate the quality of the approach using an average-based measure.

The global goal of this research is to prepare the basis for subsequent improvements in the analysis of the predictive power of volatility. The set of obtained model parameters can be used to study the predictive ability of local volatility by regressing the local volatility parameter, as a dependent variable, on a time variable. This will be the next step towards a neural network constructed from a set of regression models corresponding to different volatility models (local, stochastic, implied volatility, etc.). Such a network can potentially improve banks' forecasts of realized market volatility and thus improve pricing accuracy. It can also be used for overnight risk reports and pricing valuation adjustments that require pricing options at future times.

2. Theoretical basics 

2.1. Calibration problem setting 

The optimization problem, where the input is a parameterized local volatility surface, is:

G^{*} = \inf_{\theta} G(\theta), \qquad G(\theta) = \sum_{i=1}^{discT} \sum_{j=1}^{discK} w_{i,j} \left[ C_{\theta}\left(t, S_t, T_i, K_j\right) - C\left(T_i, K_j\right) \right]^2 , (2)

where C – the market observed price of a call; C_θ – the theoretical price obtained for a set of market parameters and a local volatility surface corresponding to a set of parameters θ; w_{i,j} – the weights determined by how far the option is in- or out-of-the-money; discT, discK – the number of discretization values on the time and strike axes respectively.

As the function G(·) is neither convex nor continuous, gradient-based approaches will not work on this problem. The function G(·) is not given explicitly either: the solution for C_θ is obtained by solving the Dupire PDE (Section 3.2) for the corresponding set θ. To regularize the input (smooth the volatility surface), we use the B-spline representation of Section 4.1.

Until the stopping criterion is met, we improve the set of values C_θ by adjusting the parameter set θ using the genetic algorithm (Section 2.2), which does not require any gradient calculations and reduces the dimension of the problem. Adjusting the parameter θ yields a new (improved) smooth representation of the function σ(K, T), which is now a local volatility function.

2.2. Particle Genetic Algorithm 

Genetic algorithms are stochastic search methods based on natural evolution processes. They are defined as a system of particles (or individuals) evolving randomly and undergoing adaptation in a not necessarily time-homogeneous environment represented by a collection of fitness functions [4].

The simplest genetic algorithm is a two-stage, time-inhomogeneous Markov chain given for each n ≥ 0 by the pair of transitions

\xi_n \xrightarrow{\text{selection}} \hat{\xi}_n \xrightarrow{\text{mutation}} \xi_{n+1} .

Here we have:

the integer N represents the size of the population;

the initial system consists of N independent random particles;

the Gibbs measure is introduced as \exp(-G(\xi)/T), where G is the cost function and T is a temperature parameter;

in the selection transition the particles are chosen randomly and independently of the previous configuration according to the Gibbs measure, so that the i-th individual is selected with probability proportional to \exp(-G(\xi_n^i)/T) and is replaced by another individual with the complementary probability;

the mutation kernel can vary; we use the simplest representation, in which a particle jumps to \xi + t or \xi - t with probability 1/2 each;

t is a small step, and T decreases with the increasing number of iterations;

we run the algorithm until at least one member of the population \xi_n satisfies the stopping criterion (the cost function is minimized), i.e. the sum of squared differences is sufficiently small.
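A minimal sketch of such a particle scheme follows; the Gaussian mutation kernel, the population size, and the cooling schedule below are our illustrative choices, not the exact settings of [1], [3], [4]:

```python
import numpy as np

def particle_ga(cost, dim, pop_size=50, iters=200, t0=1.0, step=0.07, seed=0):
    """Sketch of a particle genetic algorithm: Gibbs selection followed by
    Gaussian mutation, with a temperature decreasing over the iterations."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-1.0, 1.0, size=(pop_size, dim))        # initial particles
    for n in range(iters):
        temp = t0 / (1.0 + n)                                 # decreasing temperature
        costs = np.array([cost(p) for p in pop])
        weights = np.exp(-(costs - costs.min()) / temp)       # Gibbs weights exp(-G/T)
        weights /= weights.sum()
        idx = rng.choice(pop_size, size=pop_size, p=weights)  # selection
        pop = pop[idx] + step * rng.standard_normal((pop_size, dim))  # mutation
    return min(pop, key=lambda p: cost(p))

# Toy run: minimize a shifted quadratic; the best particle ends up near 0.3.
best = particle_ga(lambda x: float(np.sum((x - 0.3) ** 2)), dim=2)
print(best)
```

In the calibration setting the particles would be the B-spline parameter vectors θ, and the cost would be evaluated by running the Dupire solver.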

The genetic algorithm is used in practice for several reasons:

it does not require differentiability of the cost function;

unlike the Newton-Raphson algorithm, it can reach the global minimum even when the function is not strictly convex;

it converges relatively fast.

The local volatility calibration problem falls into all of these categories.

2.3. Solution quality criterion 

We consider the calibration successful if the average value of the volatilities calibrated for at-the-money options (i.e. those whose strike equals the spot price of the underlying) over all maturities lies within a 30 % threshold of the value σ = 0.2 (we use σ = 0.2 to simulate the market price of the option at every point (i, j); for the sake of simplicity it is taken constant). In the case of absolute convergence σ(·, at) = 0.2, where at is the index of the at-the-money option. The simple average is

\bar{\sigma}_{at} = \frac{1}{discT} \sum_{i=1}^{discT} \sigma\left(T_i, K_{at}\right), (3)

and the quality measure is

Q_{at} = I\left( \left| \bar{\sigma}_{at} - 0.2 \right| / 0.2 \le 0.3 \right), (4)

where I(·) – an indicator function returning 1 if the argument is true (the deviation is small enough and the quality is sufficient for the given strike) or 0 if the argument is false (the quality is insufficient).

However, the previous criterion may be too weak to represent the overall quality of the algorithm, as we are interested not only in the local volatility calibrated from at-the-money options (one strike) but rather in the volatilities from the set of close-to-the-money options. So we introduce a new measure

Q = \frac{1}{10} \sum_{j \in J} Q_j , (5)

where J is the set of the 10 strike indices closest to the money. Of course, this form of Q is used only when discK = 50 together with at = 25, as it then represents 10 prices close to the money. If we change the discretization, this count is to be changed as well (in this paper, however, we take 10 values into account). The average-based measure Q thus uses the old measure as its argument. We aim to obtain an output close to 1: Q = 1 corresponds to absolute quality, while Q = 0 corresponds to bad quality.

We evaluate this measure in numerical results.
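A sketch of the measures just described, under the assumption that the calibrated volatilities are stored as a maturity-by-strike array restricted to the close-to-the-money strikes; the layout and names are our illustrative choices:

```python
import numpy as np

def quality(calibrated_vols, true_vol=0.2, threshold=0.3):
    """Average-based quality measure in the spirit of equations (3)-(5):
    for each strike, an indicator checks whether the maturity-averaged
    calibrated volatility deviates from the reference value by no more
    than 30 %; Q is the mean of these indicators over the strikes."""
    avg_by_strike = calibrated_vols.mean(axis=0)   # average over maturities, eq. (3)
    ok = np.abs(avg_by_strike - true_vol) / true_vol <= threshold  # eq. (4)
    return float(ok.mean())                        # eq. (5)

# Toy surface: 4 maturities x 10 close-to-the-money strikes, one bad column.
vols = np.full((4, 10), 0.21)
vols[:, 0] = 0.35        # deviates by 75 % -> fails the 30 % test
print(quality(vols))     # 0.9
```

Q = 1 would mean every close-to-the-money strike passes the 30 % test; each failing strike lowers Q by 1/10.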

3. Local volatility theoretical basics

3.1. Implied and local volatilities

In the standard Black-Scholes model the volatility parameter is supposed to be constant with respect to maturity and strike. However, in the real market we observe that the volatility varies with both variables.

The generalized Black-Scholes framework permits us to solve the inverse problem for every maturity T and obtain implied volatility values; however, it still assumes that the volatility is flat along the strike axis (i.e. for any maturity the volatility is the same for every strike), whereas in the real market we observe the «smile».

Bruno Dupire [2] in his research introduces the local volatility as a function of time and stock price, σ(S_t, t), which can be rewritten in terms of maturity and strike:

\sigma^2(K, T) = \frac{\partial C / \partial T + r K \, \partial C / \partial K}{\frac{1}{2} K^2 \, \partial^2 C / \partial K^2} . (6)

The Dupire formula for local volatility can be rewritten in terms of implied volatilities as well:

\sigma^2(K, T) = \frac{\Sigma^2 + 2 \Sigma T \left( \partial \Sigma / \partial T + r K \, \partial \Sigma / \partial K \right)}{\left( 1 + K d_1 \sqrt{T} \, \partial \Sigma / \partial K \right)^2 + K^2 T \Sigma \left( \partial^2 \Sigma / \partial K^2 - d_1 \sqrt{T} \left( \partial \Sigma / \partial K \right)^2 \right)} , (7)

or, introducing y = K / F(t, T):

\sigma^2(y, T) = \frac{\hat{\Sigma}^2 + 2 \hat{\Sigma} T \, \partial \hat{\Sigma} / \partial T}{\left( 1 + y d_1 \sqrt{T} \, \partial \hat{\Sigma} / \partial y \right)^2 + y^2 T \hat{\Sigma} \left( \partial^2 \hat{\Sigma} / \partial y^2 - d_1 \sqrt{T} \left( \partial \hat{\Sigma} / \partial y \right)^2 \right)} , (8)

where Σ (resp. \hat{\Sigma}) denotes the implied volatility as a function of (K, T) (resp. (y, T)).

Armed with the Dupire formula, one can proceed to calibration of the local volatility surface. Among practitioners it is preferred to use the Dupire formula as presented in (6)-(8) and (11).

In this case the standard procedure to calculate the local volatility surface from the implied volatility surface is the following:

1. The main requirements for the implied volatility surface are C1 continuity w.r.t. the time axis and C2 continuity w.r.t. the strike axis. Thus, to get a sufficiently smooth surface, we have to use linear interpolation on the time axis and spline (2-nd or 3-rd order polynomial) interpolation on the strike axis;

2. Calibrate the implied volatility independently for each pair (T, K) using the Black-Scholes formula and a gradient-based optimization algorithm;

3. Obtain the values of implied volatility at intermediate points (t, k) by applying an interpolation method to the points obtained in the previous step;

4. Use the Dupire formula to obtain the local volatility from the now differentiable (smooth) implied volatility surface.
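Step 4 can be illustrated on a synthetic case: for a flat Black-Scholes call surface with r = 0 and no dividends, the Dupire formula should recover the constant input volatility. The sketch below (names are ours) applies finite differences to smooth analytic prices; on discrete market data these same derivatives are exactly the unstable step discussed next:

```python
import math

def bs_call(s, k, t, sigma, r=0.0):
    """Black-Scholes call price (no dividends)."""
    d1 = (math.log(s / k) + (r + 0.5 * sigma**2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    n = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF
    return s * n(d1) - k * math.exp(-r * t) * n(d2)

def dupire_local_vol(s, k, t, sigma, dk=1e-3, dt=1e-4):
    """Local volatility from the Dupire formula with r = 0:
    sigma_loc^2 = (dC/dT) / (0.5 K^2 d2C/dK2), via central differences."""
    dcdt = (bs_call(s, k, t + dt, sigma) - bs_call(s, k, t - dt, sigma)) / (2 * dt)
    d2cdk2 = (bs_call(s, k + dk, t, sigma) - 2 * bs_call(s, k, t, sigma)
              + bs_call(s, k - dk, t, sigma)) / dk**2
    return math.sqrt(dcdt / (0.5 * k**2 * d2cdk2))

# With a flat Black-Scholes surface the recovered local vol is the input vol.
print(dupire_local_vol(s=1.0, k=1.0, t=1.0, sigma=0.2))  # ≈ 0.2
```

The smooth analytic surface makes the differentiation benign here; with noisy, discretely observed prices the second strike derivative amplifies the noise.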

However, Ben-Hamida and Cont claim that methods based on the Dupire formula are unstable [1]: the formula involves taking derivatives of discrete data and is numerically unstable, and its discretized version, the implied tree method of Derman et al. [1], is prone to similar instabilities leading to «negative probabilities».

Instead, we follow their approach, which avoids the calculation of implied volatility and calibrates the local volatility surface directly using the genetic algorithm. The advantages of direct calibration of the local volatility surface are [1]:

no interpolation of option prices or implied volatilities is required: contrary to implied tree (Derman et al.) methods or methods based on the Dupire formula, it does not require call-put prices for all strikes or maturities nor any ad hoc interpolation of observed prices; it can therefore be applied to index options, where the number of observations is large, and also to equity options, for which less data is available;

it avoids computing the high-dimensional gradient of the objective function;

it does not require convexity of the objective function;

it provides a population of solutions: this approach reflects the ill-posedness of the problem and its uncertainty rather than eliminating it.

So, instead of the standard approach (calibrate and interpolate the implied volatilities), we will work on direct calibration of the local volatility surface by solving the Dupire PDE for each pair (T, K), given the input parameter σ(T, K). The surface σ(T, K) will nevertheless be introduced as a combination of B-splines, and the genetic algorithm will be applied not directly to σ(K, T) but to the parameters of the spline representing it. The utility of this will become evident in Section 3.3, where we introduce the Dupire pricer.

3.2. Dupire theory 

We consider the Black-Scholes model of asset price evolution

dS_t = \mu S_t \, dt + \sigma S_t \, dW_t , (9)

where the volatility of the underlying is supposed to be constant. As the volatility is the only non-observable parameter in this model, we can determine it for a quoted option (T, K). The volatility obtained in this way is called the implied volatility.

However, we can see that the volatility depends substantially on the strike and maturity of an option and is not constant at all (the «volatility smile» phenomenon). That is why we look for another representation of the volatility. Many articles propose the following diffusion process:

dS_t = \mu S_t \, dt + \sigma(S_t, t) S_t \, dW_t . (10)

Dupire proposed a PDE that governs the prices of the whole grid of options. The local volatility σ(S_t, t) is a coefficient of this PDE.

The Dupire equation is formulated using the Fokker-Planck equation. We consider the case with no dividends. The Dupire equation:

\frac{\partial C}{\partial T} = \frac{1}{2} \sigma^2(K, T) K^2 \frac{\partial^2 C}{\partial K^2} - r K \frac{\partial C}{\partial K} , (11)

with the set of conditions:

C(K, T = t) = \max\left(S_t - K, 0\right), \qquad \lim_{K \to 0} C(K, T) = S_t , \qquad \lim_{K \to \infty} C(K, T) = 0 .

In Section 3.3 we present the numerical solution of the Dupire equation.

3.3. Implicit schema solution 

In Section 3.1 we described two ways to obtain the local volatility: the standard approach, which uses the Dupire formula to pass from implied volatilities to local volatilities, and the direct calibration of local volatilities (which implies solving the Dupire equation for every pair (T, K) of interest). As the first approach has numerical instabilities, we will use direct calibration.

To obtain a solution of Dupire PDE for an input σ(K,T) we have to pass through several steps:

1. Logarithmic change of variable (transform the PDE and the set of conditions);

2. Introduce new space discretization – we will use non-uniform discretization;

3. Introduce time discretization – we use theta-schema with θ = 0 that is equivalent to implicit discretization schema;

4. Write the PDE in terms of new operators and obtain the tridiagonal system of equations;

5. Obtain the resulting matrix of option prices by solving the tridiagonal system with Thomas algorithm.
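Step 5, solving the tridiagonal system, can be sketched with a standard O(n) Thomas solver (variable names are ours):

```python
def thomas_solve(a, b, c, d):
    """Thomas algorithm for a tridiagonal system: a is the sub-diagonal,
    b the diagonal, c the super-diagonal, d the right-hand side (lists of
    equal length n; a[0] and c[-1] are ignored). Forward elimination
    followed by back substitution."""
    n = len(d)
    cp = [0.0] * n          # modified super-diagonal
    dp = [0.0] * n          # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Check on a small diagonally dominant system whose solution is (1, 1, 1).
x = thomas_solve(a=[0, 1, 1], b=[4, 4, 4], c=[1, 1, 0], d=[5, 6, 5])
print(x)  # ≈ [1.0, 1.0, 1.0]
```

In the pricer this solve is repeated at every time step of the implicit scheme.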

The logarithmic change of variable [7]: we make the change of variable y = ln(K) and U(y, T) = V(K, T). The new system to solve:

\frac{\partial U}{\partial T} = \frac{1}{2} \sigma^2(y, T) \left( \frac{\partial^2 U}{\partial y^2} - \frac{\partial U}{\partial y} \right) - r \frac{\partial U}{\partial y} ,

U(y, T = t) = \max\left(S_t - e^{y}, 0\right), \qquad \lim_{y \to -\infty} U(y, T) = S_t , \qquad \lim_{y \to +\infty} U(y, T) = 0 .

Grid discretization. We discretize the domain with space nodes y_i, i = 0, …, M, and time nodes T_n = n \delta T, n = 0, …, N, writing

U_i^n \approx U(y_i, T_n), \qquad h_i = y_{i+1} - y_i . (12)

We define the function a(y, T) = \frac{1}{2} \sigma^2(y, T); the spatial derivatives entering the PDE are then approximated as follows [7].

Uniform discretization case (constant step h):

\frac{\partial^2 U}{\partial y^2}(y_i) \approx \frac{U_{i+1} - 2 U_i + U_{i-1}}{h^2} , \qquad \frac{\partial U}{\partial y}(y_i) \approx \frac{U_{i+1} - U_{i-1}}{2 h} .

Non-uniform discretization case:

\frac{\partial^2 U}{\partial y^2}(y_i) \approx \frac{2}{h_{i-1} + h_i} \left( \frac{U_{i+1} - U_i}{h_i} - \frac{U_i - U_{i-1}}{h_{i-1}} \right) , \qquad \frac{\partial U}{\partial y}(y_i) \approx \frac{U_{i+1} - U_{i-1}}{h_{i-1} + h_i} .
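These finite-difference stencils can be sanity-checked on a toy function (the helper names are ours). On a non-uniform grid the second-derivative stencil is exact for quadratics even with uneven steps, while the first-derivative stencil is only first-order accurate:

```python
def d1_nonuniform(u_m, u_p, h_m, h_p):
    """First-derivative stencil on a non-uniform grid:
    (U[i+1] - U[i-1]) / (h[i-1] + h[i])."""
    return (u_p - u_m) / (h_m + h_p)

def d2_nonuniform(u_m, u_0, u_p, h_m, h_p):
    """Second-derivative stencil on a non-uniform grid:
    2/(h[i-1] + h[i]) * ((U[i+1] - U[i])/h[i] - (U[i] - U[i-1])/h[i-1])."""
    return 2.0 * ((u_p - u_0) / h_p - (u_0 - u_m) / h_m) / (h_m + h_p)

# f'' = 6 everywhere; the stencil reproduces it exactly despite uneven steps.
f = lambda y: 3 * y**2 + 2 * y + 1
y_m, y_0, y_p = 0.0, 0.1, 0.35
print(d2_nonuniform(f(y_m), f(y_0), f(y_p), y_0 - y_m, y_p - y_0))  # ≈ 6.0
```

In the pricer these stencils are evaluated at every interior node i to assemble the rows of the tridiagonal operator.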

Time discretization (theta-schema):

\frac{U^{n+1} - U^n}{k} = \theta A^n U^n + (1 - \theta) A^{n+1} U^{n+1} , \qquad U_i^0 = \max\left(S_t - e^{y_i}, 0\right),

where A^n is the matrix of the discrete spatial operator built from the stencils above at time level T_n.

System to solve (implicit schema when θ = 0):

\left(I - k A^{n+1}\right) U^{n+1} = U^n + \beta \, e_1 ,

where:

I is the identity matrix;

k = δT is the discretization step on the time axis;

e_1 is the vector with first value equal to 1 and all others equal to 0; the term β e_1 preserves the role of the boundary condition at the first grid node.

The tridiagonal system representation. The matrix I − k A^{n+1} is tridiagonal, so each time step amounts to solving a system of the form

\begin{pmatrix} b_1 & c_1 & & \\ a_2 & b_2 & \ddots & \\ & \ddots & \ddots & c_{M-1} \\ & & a_M & b_M \end{pmatrix} \begin{pmatrix} U_1^{n+1} \\ U_2^{n+1} \\ \vdots \\ U_M^{n+1} \end{pmatrix} = \begin{pmatrix} d_1 \\ d_2 \\ \vdots \\ d_M \end{pmatrix} .

Solving the tridiagonal system of equations (generalized Thomas algorithm). We apply the Thomas algorithm to solve the system at each time step n.

Let us put into the system the values of σ(K, T) that we used to simulate market prices. Finally, we obtain the surface of Dupire prices of the call (Figure 1). The discrepancy between the obtained prices and the market prices is illustrated in Figure 2.

The use of the non-uniform schema allows us to obtain better prices around the strike (5-10 % difference from the real prices) when solving the PDE. In Figure 2 we can see that the prices farther from the strike (out-of-the-money) are less precise and even non-coherent.

The values σ(K_i, T) used are equivalent to the values σ(y_i, T). Therefore, we are interested in interpolating these values at each point i before putting them into the Dupire PDE.

Figure 1. Dupire price surface (u(K, T) plotted over the (K, T) grid)

Figure 2. Discrepancy surface ((u*(K, T) − market)/market plotted over the (K, T) grid)

4. Representation of a local volatility surface 

4.1. B-spline representation of local volatility surface

We reconstruct the volatility surface by means of B-splines to achieve two goals:

• Smooth representation of volatility surface (C2 continuity);

• Decrease the scale of the problem (minimize the number of inputs to optimization algorithm).

Let us define the local volatility function as a smooth piecewise polynomial defined completely by basis functions associated with n points («knots») and n − 4 corresponding «control points» (values at certain knots). The number of control points always equals m = n − p − 1, where p is the degree of the interpolating polynomial. For p = 3 this means that two knots at each boundary do not have a corresponding control point.

For our case we define the K-grid representation:

• Control points are the set of θ-parameters of the volatility curve to be found;

• Basis functions are defined for the p = 3 case on each of the knot intervals. The number of knots n is smaller than the discretization of the grid (to reduce the scale of the optimization problem);


In the Ben-Hamida and Cont [1] representation the knot intervals are uniform.

We obtain the values on T-grid by means of linear interpolation:

σ(y, T) = σ(y, T_i) + ((T − T_i)/(T_{i+1} − T_i)) · (σ(y, T_{i+1}) − σ(y, T_i)), T ∈ [T_i, T_{i+1}]. (13)
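This maturity-axis interpolation can be sketched as follows; the slice vectors (volatilities indexed by strike) and the two bracketing maturities are hypothetical placeholders:

```python
def interp_maturity(sigma_lo, sigma_hi, t_lo, t_hi, t):
    """Linear interpolation of a volatility slice between maturities
    t_lo <= t <= t_hi (slice vectors are indexed by strike)."""
    w = (t - t_lo) / (t_hi - t_lo)
    return [(1.0 - w) * lo + w * hi for lo, hi in zip(sigma_lo, sigma_hi)]
```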

To obtain the spline of third order we can use the basis functions introduced in Ben-Hamida and Cont [1] or apply the generic De Boor algorithm to find basis functions on uniform knots, as presented in Figure 3.

Figure 3. Basis functions on uniform knots

Let us reconstruct a pseudo-volatility function for the following parameters:

X = [0 : 1 : 10] is a knot vector containing 11 uniformly spaced values;

x = rand(1,11) random control points corresponding to the knots;

p = 3 degree of polynomial.

We want to evaluate the B-spline at the points k = 1 : 0.001 : 10, as presented in Figure 4.
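A minimal sketch of such an evaluation, using the generic Cox–de Boor recursion on uniform knots. This is our own illustration (the control-point values are hypothetical, and we take m = n − p − 1 = 7 control points for n = 11 knots), not the exact basis functions of [1]:

```python
def bspline_basis(i, p, t, knots):
    """Cox-de Boor recursion: i-th B-spline basis function of degree p at t."""
    if p == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = 0.0
    if knots[i + p] != knots[i]:
        left = ((t - knots[i]) / (knots[i + p] - knots[i])
                * bspline_basis(i, p - 1, t, knots))
    right = 0.0
    if knots[i + p + 1] != knots[i + 1]:
        right = ((knots[i + p + 1] - t) / (knots[i + p + 1] - knots[i + 1])
                 * bspline_basis(i + 1, p - 1, t, knots))
    return left + right

def bspline_value(control, knots, p, t):
    """Spline value at t: weighted sum of the m = n - p - 1 control points."""
    return sum(c * bspline_basis(i, p, t, knots)
               for i, c in enumerate(control))

# 11 uniform knots X = 0..10, degree p = 3 -> m = 11 - 3 - 1 = 7 control points.
knots = [float(v) for v in range(11)]
control = [0.3, 0.1, 0.4, 0.2, 0.5, 0.25, 0.35]  # hypothetical values
vol = bspline_value(control, knots, 3, 5.0)
```

Inside [knots[p], knots[n − p − 1]] the basis functions form a partition of unity, so the spline value is a convex combination of nearby control points.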


Figure 4. Spline evaluated at intermediate points

4.2. Ill-posedness and smoothness requirements (optional improvement)

The main difficulty with solving the inverse problem (the calibration problem) arises when we are looking for an input to the Dupire pricer introduced in Section 2.2. Basically we could start from any random points σ(K,T) corresponding to the points known from market data. However, this naive approach has several drawbacks:

The dimension of the problem will be equal to N × M, where N, M are the sizes of the discretization grid w.r.t. time and strike respectively. It will require an enormous computational cost (too many inputs into the genetic algorithm);

The ill-posedness of the problem causes the existence of an infinite number of solutions, and the naive representation will lead to instability of the solution obtained.

Ill-posedness of the problem can be defined as non-continuity of the dependence of local volatility function on market data. It means


that small perturbations in the price data will result in large changes in the minimized function, and any solution obtained will be unstable (especially for the volatility calibrated for short maturities). Ill-posedness implies the non-uniqueness of the solution as well.

To put some restriction on the accepted solutions, Ben-Hamida and Cont introduce a smoothness requirement. The authors [1] propose to use a smoothness semi-norm to quantify the smoothness of the local volatility surface:

||σ||² = Σ_i ∫₀^T ∫_A [(∂²σ/∂x²)² + (∂²σ/∂T∂x)²] dx dT (14)

or in matrix form:

||σ||² = θAθᵀ, (15)

where θ is a matrix of control points in B-Spline representation.
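For illustration, the quadratic form (15) can be evaluated directly from a control-point vector and a matrix A; the numbers used in this sketch are hypothetical:

```python
def smoothness_norm(theta, A):
    """Quadratic form theta A theta^T for a row vector of control points (eq. 15)."""
    n = len(theta)
    return sum(theta[i] * A[i][j] * theta[j]
               for i in range(n) for j in range(n))
```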

Although we will not use the smoothness requirement in our solution, we introduce the approach used by Ben-Hamida and Cont to construct a multivariate Gaussian random vector from a set of standard Gaussian random variables. It can potentially be used to create a set of Gaussian vectors applied in the affine transformation of random mutation, improving convergence and ensuring that only acceptable solutions are produced.

The density of a multivariate Gaussian is, by definition:

f_X(x) = (2π)^(−n/2) · det(A)^(−1/2) · exp(−½ (x − µ)ᵀ A⁻¹ (x − µ)), (16)

where det(A) ≠ 0 (A is the covariance matrix and has to be invertible).

Here (x−µ)ᵀA⁻¹(x−µ) is a quadratic form equivalent to (15) in our case.

By the affine property any affine transformation of a Gaussian is a Gaussian i.e.

X ∼ N(µ, A) ⟹ BX + b ∼ N(Bµ + b, BABᵀ), (17)


where X is a Gaussian vector.

Hence the random vector is constructed as follows: let X₁, …, Xₙ ∼ N(0,1) be independent, so that X = (X₁, …, Xₙ)ᵀ ∼ N(0, I), where I is the identity matrix.

Then:

BX + µ ∼ N(µ, A), (18)

where A = BBT.

The matrix B can then be used in a mutation kernel that is equivalent to the affine transformation (18) with µ = 0.

The matrix B is found by applying the Cholesky factorization to the positive semi-definite matrix A.
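A minimal sketch of this construction, with a small hypothetical 2×2 covariance matrix A; the factorization below assumes A is strictly positive definite:

```python
import math
import random

def cholesky(A):
    """Lower-triangular B with A = B B^T (assumes A is positive definite)."""
    n = len(A)
    B = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(B[i][k] * B[j][k] for k in range(j))
            if i == j:
                B[i][i] = math.sqrt(A[i][i] - s)
            else:
                B[i][j] = (A[i][j] - s) / B[j][j]
    return B

def gaussian_vector(mu, B):
    """Draw BX + mu ~ N(mu, B B^T) from i.i.d. standard normals (eq. 18)."""
    n = len(mu)
    x = [random.gauss(0.0, 1.0) for _ in range(n)]
    return [sum(B[i][k] * x[k] for k in range(n)) + mu[i] for i in range(n)]

# Hypothetical 2x2 covariance matrix for illustration.
A = [[2.0, 0.5], [0.5, 1.0]]
B = cholesky(A)
sample = gaussian_vector([0.0, 0.0], B)
```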

4.3. Algorithm description 

In Figure 5 we describe the workflow of our program. First we generate the initial population of control points (as the volatility surface is parametrized with control points of B-splines on the strike axis and linear interpolation on the time axis) for i ∈ 0..I and m ∈ 0..M, where M is the number of control points on the strike axis and I is the number of linear interpolation points on the time axis.

Then for each member of the population a B-spline is built from these control points, and σ(t,x)_p is obtained for all t ∈ 0..T and x ∈ 0..K, where T, K are the discretization numbers by time and strike respectively, and p is the index of the population member.

Then σ is inserted into the Dupire equation solver and the solution (prices) C(t,x)_p is obtained for each t, x on the discretization grid for each p.

Prices are compared to real market prices for each t, x and each p, and the sum of squared differences is used to measure the fitness of each p-th candidate. If the desired fitness is not reached we pass to the next step; otherwise the program terminates.

Candidate matrices of control points are then subject to genetic transformations (selection and mutation) based on the fitness of the corresponding solutions for each p. A new population is formed and the algorithm repeats.

Remark: The market data is simulated with the Black-Scholes formula to achieve better flexibility in testing the calibration for different values of the parameter σ.
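The selection/mutation loop described above can be sketched as follows. The Dupire pricer is replaced here by a toy squared-distance fitness, so the target vector, population size, generation count and mutation scale are illustrative placeholders only:

```python
import random

random.seed(1)

def toy_fitness(theta, target):
    """Stand-in for the Dupire-pricer cost: the weighted sum of squared price
    differences is replaced by a plain squared distance to a known target."""
    return sum((a - b) ** 2 for a, b in zip(theta, target))

def evolve(pop, target, mutation_rate=0.07, generations=300):
    """Selection (keep the better half) + Gaussian mutation, with elitism:
    survivors are carried over unchanged, so the best fitness never worsens."""
    for _ in range(generations):
        pop.sort(key=lambda th: toy_fitness(th, target))
        survivors = pop[: len(pop) // 2]
        children = [[g + random.gauss(0.0, mutation_rate) for g in th]
                    for th in survivors]
        pop = survivors + children
    return min(pop, key=lambda th: toy_fitness(th, target))

target = [0.2] * 5   # flat "true" parameter vector, as in our simulated market
population = [[random.uniform(0.0, 1.0) for _ in range(5)] for _ in range(20)]
best = evolve(population, target)
```

In the real program the fitness call is the full pipeline: control points → B-spline surface → Dupire solver → weighted price error.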


Figure 5. Program workflow

4.4. Algorithm setting

Model settings:

K ∈ [0.005; 200] with step δK, where δK = (200 − 0.005)/discK;

T ∈ [0; 5] with step δT, where δT = 5/discT;

discK = 50 (strike axis discretization);

discT = 20 (time axis discretization);

S = 100 (spot price);

r = 0 (rate).

Genetic algorithm settings:

T0 = 1000 (initial temperature);

m = 0.03, 0.05, 0.07 (mutation rate, varied across experiments);

Nmc = 2^14 (maximum number of simulations);

M = 64 (temperature decreasing frequency);

p = 20 (population size);

Stopping criteria: G(θ) < 15.

Miscellaneous:

Market data C_t(T_i, K_j) is simulated with σ = 0.2 for all K, T using the Black-Scholes model;


Quality criteria: we use the measure Q (Section 2.3) to assess the obtained results.
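The Black-Scholes call price used to simulate the market data can be sketched as follows (the strike/maturity grid below is a small illustrative subset, not the full 50×20 grid):

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call option."""
    if T <= 0.0:
        return max(S - K, 0.0)
    d1 = (math.log(S / K) + (r + 0.5 * sigma * sigma) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

# Simulated market surface with flat sigma = 0.2, S = 100, r = 0.
market = [[bs_call(100.0, K, T, 0.0, 0.2) for K in (80.0, 100.0, 120.0)]
          for T in (0.25, 1.0, 5.0)]
```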

4.5. Fitness function 

The weighted sum of squared differences was chosen as the cost (fitness) function to minimize. The sum is taken over all strike/maturity pairs indexed i ∈ [1..DiscT], j ∈ [1..DiscK] respectively and weighted with the weight function

weight(K_j, T_i) = (1/√(2π)) · exp(−0.5 · d²_{i,j}), (19)

where

d_{i,j} = ln(K_j / S) / √(T_i). (20)

If we refer to function (2), the weight w_{i,j} is obtained as

w_{i,j} = weight(K_j, T_i) / Σ_{i,j} weight(K_j, T_i). (21)

Then we get to the following form of the cost function:

G(θ) = Σ_{i=1..DiscT} Σ_{j=1..DiscK} w_{i,j} · (C^θ(t, S_t, T_i, K_j) − C*_t(T_i, K_j))², G(θ*) = inf_θ G(θ). (22)

This cost function allows us to assign a higher weight to the solutions that are of interest to us (close-to-strike).
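A sketch of the weighted cost over the (T, K) grid; the Gaussian log-moneyness weight below is our assumed form of the weight function (19)–(20), not necessarily the exact one used in the experiments:

```python
import math

def weight(K, T, S):
    """Gaussian weight in normalized log-moneyness (assumed form of (19)-(20))."""
    d = math.log(K / S) / math.sqrt(T)
    return math.exp(-0.5 * d * d) / math.sqrt(2.0 * math.pi)

def cost(model_prices, market_prices, strikes, maturities, S):
    """Weighted sum of squared price differences over the (T, K) grid."""
    total = 0.0
    for i, T in enumerate(maturities):
        for j, K in enumerate(strikes):
            total += weight(K, T, S) * (model_prices[i][j] - market_prices[i][j]) ** 2
    return total
```

At-the-money entries (K = S) receive the maximum weight, so close-to-strike mispricing dominates the fitness.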

5. Numerical results

5.1. Numerical results: uniform knots interpolation case

The result of the calibration is the local volatility surface shown in Figure 6 and Figure 7 (same surface, different view). Market prices simulated with σ = 0.2 for all K, T are represented by the purple plane. The calibrated local volatility surface oscillates around it. From Figure 7 we can conclude that the oscillation is less sharp around K = S = 100, while for far-from-the-money options the surface even becomes negative.


Figure 6. Local volatility surface

Figure 7. Local volatility surface: view K, σ

The surface represents the local volatility matrix σ(K,T). We studied the frequency of appearance of σ in different intervals. Figure 8 shows the distribution of σ(K,T) over the whole matrix (i.e. for all K, T).


The histogram is obtained using mutation rate m = 0.07. We can see that most of the values lie in the intervals [0; 0.1] and [0.3; 0.5].

Figure 8. Local volatility surface: Histogram over all values

Then the analysis is narrowed to the values of σ(K,T) for all K_j with index j ∈ [20; 30], while the at-the-money value corresponds to index j = 25.

Looking only at close-to-the-money options, we notice that most of the values of σ lie in the interval [0.1; 0.4], as we can see in Figure 9.

Figure 9. Local volatility surface: Histogram over index of K ∈ [20; 30]


The quality criteria for these results are presented in Figure 10.

Figure 10. Results of the calibration

Three experiment sets are introduced: each set is executed with a different mutation rate parameter and contains three similar experiments. In terms of the number of iterations and the estimated quality measure, the mutation rate m = 0.07 turns out to be the best for reconstructing the local volatility surface.

6. Conclusions 

In the current paper we have introduced the state of the art in local volatility calibration. It is based on the results of Ben-Hamida, Cont, Dupire, Del Moral, Cerf and other research contributions. The genetic algorithm can be used as an alternative to the gradient descent algorithm with a smoothness requirement (problem regularization).

The genetic approach allows us to reflect the multiplicity of solutions (as we deal with an ill-posed problem) rather than remove it by regularization.

B-splines and linear interpolation are used to represent local volatility surface on strike and maturity axis respectively.


The input of genetic algorithm is a set of parameters of B-splines.

Instead of optimizing the values of σ(K,T) directly, we use these parameters, which also allows us to reduce the dimensionality of the problem. The parameters θ are then passed to the Dupire PDE pricer that returns the matrix of option values for all K, T (strikes and maturities).

This matrix is used to evaluate the cost function by comparing the prices to simulated market prices. The set of B-spline parameters corresponding to an accepted solution allows us to reproduce the local volatility surface at desired points by interpolation.

We have implemented the algorithm and obtained coherent results.

Our algorithm is able to reconstruct the local volatility surface efficiently starting from random values of the control points in the B-spline representation of local volatility, so we are not obliged to use a prior (suggested by Ben-Hamida and Cont) for the algorithm to converge (although it can improve convergence).

The value m = 0.07 turns out to be the most efficient mutation rate (Figure 10). It reduces the number of iterations the most while increasing the algorithm's quality.

The quality measure estimating the percentage of solutions lying within a 30 % threshold of the realized values is between 36 % and 63 % (Figure 10). We also calculated that for a 60 % threshold this percentage reaches 90 %. The quality measure, however, takes into account only close-to-strike options (Figure 9). If we take the whole grid we can see from Figure 8 that far fewer results lie in this interval. In practice, however, this purely theoretical observation does not concern us, as the volatility value does not affect the prices of far-from-strike options at all.

It means that the algorithm is capable of reconstructing a coherent volatility surface that can be used in future research. At the same time it reflects the multiplicity of solutions, many of which satisfy the quality criteria. However, to improve the quality we have to introduce some additional modifications.

7. Propositions 

In further work we propose to extend the research of Ben-Hamida and Cont by using the De Boor algorithm for interpolation of the parameter grid to obtain better convergence. The reason to try the De Boor approach is that it allows us to apply genetic operators to control points that are not uniform on the strike axis. We will be able to apply it


to control points lying in a specific close-to-strike range. This should improve the convergence of the algorithm, as far-from-strike solutions have a very small weight in the cost function.

We will also introduce different quality criteria that can include a measure of the quality of the local volatility surface based on inter-strike and inter-maturity relations. Logically, for neighboring values of maturities or strikes (or both) the values of calibrated local volatilities should not differ much, except in at-the-money cases.

Generating the initial population from the prior distribution proposed by Ben-Hamida and Cont can increase the speed of convergence.

According to the authors the implied volatility of in-the-money options can be a good initial guess for the entire population.

These and other techniques, including the usage of a different discretization scheme, weight function, interpolation order and a positivity constraint on sigma (currently we obtain σ < 0 in far-from-strike cases), can significantly improve the convergence as well as the quality measures obtained.

The obtained surfaces are used to produce a forecast of the «true» volatility surface. Such predictive analysis can be conducted using supervised and unsupervised machine learning techniques.


References 

1. Ben Hamida, S., & Cont, R. (2013). Recovering volatility from option prices by evolutionary optimization. Journal of computational finance, 8(4), 1–45. Retrieved from http://www.proba.jussieu.fr/pageperso/ramacont/papers/evolution.pdf.

2. Dupire, B. (1994). Pricing with a smile. Risk, 7(1), 18–20.

3. Cerf, R. (1997). Asymptotic convergence of genetic algorithms. Advances in Applied Probability, 30(2), 521–550. Retrieved from https://www.math.u-psud.fr/cerf/papers/gae.pdf.

4. Del Moral, P., & Miclo, L. (2001). Asymptotic results for genetic algorithms with applications to non-linear estimation. In L. Kallel, B. Naudts & A. Rogers (Eds.), Theoretical Aspects of Evolutionary Computing (pp. 439–493). Berlin, Germany: Springer-Verlag.

5. Dieterle, F. (2003). Variable selection by genetic algorithms. In Multianalyte Quantifications by Means of Integration of Artificial Neural Networks, Genetic Algorithms and Chemometrics for Time-Resolved Analytical Data: PhD Thesis. Tübingen, Germany: University of Tübingen. Retrieved from http://www.frank-dieterle.de/phd/2_8_5.html.

6. Geraghty, J., & Mohd Razali, N. (2011, July 6-8). Genetic algorithms performance between different selection strategy in solving TSP. Proceedings of the World Congress on Engineering (London, U.K.). Retrieved from https://www.researchgate.net/figure/236179246_Genetic_Algorithms_Performance_Between_Different_Selection_Strategy_in_Solving_TSP.

7. Bonnans, J. F., Cognet, J. M., & Volle, S. (2002, November). Estimation de la volatilité locale d'actifs financiers par une méthode d'inversion numérique. Rapport de recherche at Institut National de recherche en informatique et en automatique No. 4648. Retrieved from https://hal.inria.fr/inria-00071937/document.

8. Lagnado, R., & Osher, S. (1997). A technique for calibrating derivative security pricing models: numerical solution of an inverse problem. Journal of computational finance, 1, 13–25.

9. Jackson, N., Süli, E., & Howison, S. (1999). Computation of deterministic volatility surfaces. Journal of Computational Finance, 2(2), 5–32. DOI: 10.21314/JCF.1998.022.

10. Li, Y., Coleman, T. F., & Verma, A. (1999). Reconstructing the unknown local volatility function. Journal of computational finance, 2, 77–102.

The article was received by the editorial board on 25.05.2018.
