
Thursday, November 28, 2019

QuantLib-Python: Heston Monte Carlo Valuation for Autocallable Memory Coupon Note

In the spirit of the previous post, I was woodshedding an implementation for valuing an Autocallable Memory Coupon note by using libraries available in QuantLib-Python. These products embed a series of out-of-the-money barrier options and, for this specific reason, it is important to capture the implied volatility smile by using an appropriate model. For this implementation example, the Heston stochastic volatility model has been used. In addition to the actual Monte Carlo algorithm and path generator, I also implemented a simple method for calibrating the Heston model to a volatility surface by using the SciPy optimization package. The complete program can be downloaded from my GitHub page.

Autocallable Notes


Autocallable notes are path-dependent structured products, which may be linked to any index, although they are usually linked to an equity index or an equity basket. If a condition called the autocall barrier is reached on any coupon date, the product terminates immediately, after paying back the coupon (usually fixed) and the notional amount. However, if the underlying index does not reach the autocall barrier on a coupon date but does reach a condition called the coupon barrier, only the fixed coupon is paid on the notional and the product stays alive until maturity (or until the next autocall event). Autocallables may also have a memory, which means that if the underlying index was below the coupon barrier on the previous coupon date (so no coupon was paid) but is above the coupon barrier on the next coupon date, the product pays all accumulated (and unpaid) coupons on the notional on that next coupon date. Autocallable products are (usually) not capital protected, which means that the investor is not guaranteed to receive the initial investment: if the index fixing on the final coupon date is below a condition called the protection barrier, the final redemption amount is calculated according to a specific formula and the investor loses some of the invested capital. There is an example brochure and term sheet available for this product on my GitHub page.
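
As a toy illustration of the memory feature (hypothetical numbers, not taken from the actual example below): assume a coupon barrier at 80% of strike and a 5% coupon. If the index fixes below the barrier on the first two coupon dates and above it on the third, the third coupon date pays the current coupon plus the two missed ones.

# toy illustration of the memory coupon feature (hypothetical fixings)
couponBarrier = 0.8
coupon = 0.05
ratios = [0.75, 0.78, 0.85] # hypothetical index / strike ratios on coupon dates
unpaidCoupons = 0
for ratio in ratios:
    if(ratio >= couponBarrier):
        # third date pays the current coupon plus the two missed ones: 0.15
        print('coupon rate paid:', coupon * (1 + unpaidCoupons))
        unpaidCoupons = 0
    else:
        unpaidCoupons += 1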

Library Imports


import QuantLib as ql
import numpy as np
import scipy.optimize as opt

Heston Path Generator


Below is a simple (hard-coded) method for generating paths from the Heston process for a given set of QuantLib dates, which can be unevenly distributed. A couple of notes about the Heston path generation process in general: QuantLib MultiPathGenerator has to be used for simulating paths for this specific process, because "under the hood" QuantLib simulates two correlated random variates, one for the asset and another for the volatility. As a result, for one simulation round, a MultiPath object is received from the generator. This MultiPath object contains the generated paths for both the asset and the volatility (multiPath[0] = asset path, multiPath[1] = volatility path). So, for any further purposes (say, to use the simulated asset path for something), one can just use the generated asset path and ignore the volatility path. However, the volatility path is available, if there is some specific purpose for obtaining information on how the volatility has evolved over time.

# hard-coded generator for Heston process
def HestonPathGenerator(dates, dayCounter, process, nPaths):
    t = np.array([dayCounter.yearFraction(dates[0], d) for d in dates])
    nGridSteps = (t.shape[0] - 1) * 2
    sequenceGenerator = ql.UniformRandomSequenceGenerator(nGridSteps, ql.UniformRandomGenerator())
    gaussianSequenceGenerator = ql.GaussianRandomSequenceGenerator(sequenceGenerator)
    pathGenerator = ql.GaussianMultiPathGenerator(process, t, gaussianSequenceGenerator, False)
    paths = np.zeros(shape = (nPaths, t.shape[0]))
    
    for i in range(nPaths):
        multiPath = pathGenerator.next().value()
        paths[i,:] = np.array(list(multiPath[0]))
        
    # return array dimensions: [number of paths, number of items in t array]
    return paths
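
As a side note, since the MultiPath object also carries the simulated variance path, the generator above can easily be extended to return both. Below is a minimal sketch under the same assumptions and imports (the function name is hypothetical, not part of the actual program):

# hypothetical variant: return both asset and variance paths from the Heston process
def HestonPathGeneratorWithVariance(dates, dayCounter, process, nPaths):
    t = np.array([dayCounter.yearFraction(dates[0], d) for d in dates])
    nGridSteps = (t.shape[0] - 1) * 2
    sequenceGenerator = ql.UniformRandomSequenceGenerator(nGridSteps, ql.UniformRandomGenerator())
    gaussianSequenceGenerator = ql.GaussianRandomSequenceGenerator(sequenceGenerator)
    pathGenerator = ql.GaussianMultiPathGenerator(process, t, gaussianSequenceGenerator, False)
    assetPaths = np.zeros(shape = (nPaths, t.shape[0]))
    variancePaths = np.zeros(shape = (nPaths, t.shape[0]))
    
    for i in range(nPaths):
        multiPath = pathGenerator.next().value()
        assetPaths[i,:] = np.array(list(multiPath[0]))
        variancePaths[i,:] = np.array(list(multiPath[1]))
        
    return assetPaths, variancePaths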

Heston Model Calibration


Below is a simple (hard-coded) method for calibrating the Heston model to a given volatility surface. Inside this method, the process, model and engine are created. After this, calibration helpers for the Heston model are created from the given volatility surface data. Finally, the calibrated model and process are returned for further use. The actual optimization workhorse is passed in from outside this method. For this specific example program, SciPy's Differential Evolution solver is used, in order to search for a global (rather than just a local) minimum.

# hard-coded calibrator for Heston model
def HestonModelCalibrator(valuationDate, calendar, spot, curveHandle, dividendHandle, 
    v0, kappa, theta, sigma, rho, expiration_dates, strikes, data, optimizer, bounds):
    
    # container for heston calibration helpers
    helpers = []
    
    # create Heston process, model and pricing engine
    # use given initial parameters for model
    process = ql.HestonProcess(curveHandle, dividendHandle, 
        ql.QuoteHandle(ql.SimpleQuote(spot)), v0, kappa, theta, sigma, rho)
    model = ql.HestonModel(process)
    engine = ql.AnalyticHestonEngine(model)
    
    # nested cost function for model optimization
    def CostFunction(x):
        parameters = ql.Array(list(x))        
        model.setParams(parameters)
        error = [helper.calibrationError() for helper in helpers]
        return np.sqrt(np.sum(np.abs(error)))

    # create Heston calibration helpers, set pricing engines
    for i in range(len(expiration_dates)):
        for j in range(len(strikes)):
            expiration = expiration_dates[i]
            days = expiration - valuationDate
            period = ql.Period(days, ql.Days)
            vol = data[i][j]
            strike = strikes[j]
            helper = ql.HestonModelHelper(period, calendar, spot, strike,
                ql.QuoteHandle(ql.SimpleQuote(vol)), curveHandle, dividendHandle)
            helper.setPricingEngine(engine)
            helpers.append(helper)
    
    # run optimization, return calibrated model and process
    optimizer(CostFunction, bounds)
    return process, model
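
Since the optimization workhorse is injected as an argument, other SciPy solvers can be swapped in without touching the calibrator. Below is a hedged sketch (a hypothetical wrapper, not part of the original program) adapting a local solver to the same (costFunction, bounds) call signature; unlike Differential Evolution, a local solver may of course get stuck in a local minimum.

# hypothetical wrapper: local L-BFGS-B solver through the same (costFunction, bounds) interface
def LocalOptimizer(costFunction, bounds):
    # start the local search from the mid-point of each parameter bound
    x0 = [0.5 * (low + high) for (low, high) in bounds]
    return opt.minimize(costFunction, x0, method = 'L-BFGS-B', bounds = bounds)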

Monte Carlo Valuation


The actual Autocallable valuation algorithm is implemented in this part of the code. For the sake of being able to value this product also after its inception, the valuation method takes past fixings as a Python dictionary (key: date, value: fixing). Now, if one changes the valuation date to any possible date after inception and before transaction maturity, the product will be valued accordingly. It should be noted that in such a scheme it is crucial to provide all relevant past fixings; failure to do this will lead to an exception. The algorithm implementation is a bit long and might look scary, but it is actually quite straightforward if one is familiar enough with this specific product.

def AutoCallableNote(valuationDate, couponDates, strike, pastFixings, 
    autoCallBarrier, couponBarrier, protectionBarrier, hasMemory, finalRedemptionFormula, 
    coupon, notional, dayCounter, process, generator, nPaths, curve):    
    
    # immediate exit trigger for matured transaction
    if(valuationDate >= couponDates[-1]): return 0.0
    
    # immediate exit trigger for any past autocall event
    if(valuationDate >= couponDates[0]):
        if(max(pastFixings.values()) >= (autoCallBarrier * strike)): return 0.0

    # create date array for path generator
    # combine valuation date and all the remaining coupon dates
    dates = np.hstack((np.array([valuationDate]), couponDates[couponDates > valuationDate]))
    
    # generate paths for a given set of dates, exclude the current spot rate
    paths = generator(dates, dayCounter, process, nPaths)[:,1:]
    
    # identify the past coupon dates
    pastDates = couponDates[couponDates <= valuationDate]

    # conditionally, merge given past fixings from a given dictionary and generated paths
    if(pastDates.shape[0] > 0):
        pastFixingsArray = np.array([pastFixings[pastDate] for pastDate in pastDates])        
        pastFixingsArray = np.tile(pastFixingsArray, (paths.shape[0], 1))
        paths = np.hstack((pastFixingsArray, paths))
    
    # result accumulator
    global_pv = []
    expirationDate = couponDates[-1]
    hasMemory = int(hasMemory)
    
    # loop through all simulated paths
    for path in paths:
        payoffPV = 0.0
        unpaidCoupons = 0
        hasAutoCalled = False
        
        # loop through set of coupon dates and index ratios
        for date, index in zip(couponDates, (path / strike)):
            # if autocall event has been triggered, immediate exit from this path
            if(hasAutoCalled): break
            payoff = 0.0
                
            # payoff calculation at expiration
            if(date == expirationDate):
                # index is greater or equal to coupon barrier
                # pay 100% redemption, plus coupon, plus conditionally all unpaid coupons
                if(index >= couponBarrier):
                    payoff = notional * (1 + (coupon * (1 + unpaidCoupons * hasMemory)))
                # index is greater or equal to protection barrier and less than coupon barrier
                # pay 100% redemption, no coupon
                if((index >= protectionBarrier) & (index < couponBarrier)):
                    payoff = notional
                # index is less than protection barrier
                # pay redemption according to formula, no coupon
                if(index < protectionBarrier):
                    # note: calculate index value from index ratio
                    index = index * strike
                    payoff = notional * finalRedemptionFormula(index)
                
            # payoff calculation before expiration
            else:
                # index is greater or equal to autocall barrier
                # autocall will happen before expiration
                # pay 100% redemption, plus coupon, plus conditionally all unpaid coupons
                if(index >= autoCallBarrier):
                    payoff = notional * (1 + (coupon * (1 + unpaidCoupons * hasMemory)))
                    hasAutoCalled = True
                # index is greater or equal to coupon barrier and less than autocall barrier
                # autocall will not happen
                # pay coupon, plus conditionally all unpaid coupons
                if((index >= couponBarrier) & (index < autoCallBarrier)):
                    payoff = notional * (coupon * (1 + unpaidCoupons * hasMemory))
                    unpaidCoupons = 0
                # index is less than coupon barrier
                # autocall will not happen
                # no coupon payment, only accumulate unpaid coupons
                if(index < couponBarrier):
                    payoff = 0.0
                    unpaidCoupons += 1                    

            # conditionally, calculate PV for period payoff, add PV to local accumulator
            if(date > valuationDate):
                df = curve.discount(date)
                payoffPV += payoff * df
            
        # add path PV to global accumulator
        global_pv.append(payoffPV)
        
    # return PV
    return np.mean(np.array(global_pv))

Main Program


Finally, in this part of the program we actually use the pieces presented above. First, the usual QuantLib-related parameters are created, as well as the parameters for the Autocallable product. Note that since in this example we are valuing the product at inception, there is no need to provide any past fixings for the valuation method (however, it is not forbidden either). After this, the interest rate curve and dividend curve are created, as well as the volatility surface data for calibration purposes. Next, initial guesses for the Heston parameters are set and the model is calibrated by using the dedicated method. Finally, the calibrated process is given to the actual valuation method and the Monte Carlo valuation is processed.

# general QuantLib-related parameters
valuationDate = ql.Date(20,11,2019)
ql.Settings.instance().evaluationDate = valuationDate
convention = ql.ModifiedFollowing
dayCounter = ql.Actual360()
calendar = ql.TARGET()

# Autocallable Memory Coupon Note
notional = 1000000.0
spot = 3550.0
strike = 3550.0
autoCallBarrier = 1.0
couponBarrier = 0.8
protectionBarrier = 0.6
finalRedemptionFormula = lambda indexAtMaturity: min(1.0, indexAtMaturity / strike)
coupon = 0.05
hasMemory = True

# coupon schedule for note
startDate = ql.Date(20,11,2019)
firstCouponDate = calendar.advance(startDate, ql.Period(1, ql.Years))
lastCouponDate = calendar.advance(startDate, ql.Period(7, ql.Years))
couponDates = np.array(list(ql.Schedule(firstCouponDate, lastCouponDate, ql.Period(ql.Annual), 
    calendar, ql.ModifiedFollowing, ql.ModifiedFollowing, ql.DateGeneration.Forward, False)))

# create past fixings into dictionary
pastFixings = {}
#pastFixings = { ql.Date(20,11,2020): 99.0, ql.Date(22,11,2021): 99.0 }

# create discounting curve and dividend curve, required for Heston model
curveHandle = ql.YieldTermStructureHandle(ql.FlatForward(valuationDate, 0.01, dayCounter))
dividendHandle = ql.YieldTermStructureHandle(ql.FlatForward(valuationDate, 0.0, dayCounter))

# Eurostoxx 50 volatility surface data
expiration_dates = [ql.Date(19,6,2020), ql.Date(18,12,2020), 
    ql.Date(18,6,2021), ql.Date(17,12,2021), ql.Date(17,6,2022),
    ql.Date(16,12,2022), ql.Date(15,12,2023), ql.Date(20,12,2024), 
    ql.Date(19,12,2025), ql.Date(18,12,2026)]

strikes = [3075, 3200, 3350, 3550, 3775, 3950, 4050]

data = [[0.1753, 0.1631, 0.1493, 0.132 , 0.116 , 0.108 , 0.1052],
       [0.1683, 0.1583, 0.147 , 0.1334, 0.1212, 0.1145, 0.1117],
       [0.1673, 0.1597, 0.1517, 0.1428, 0.1346, 0.129 , 0.1262],
       [0.1659, 0.1601, 0.1541, 0.1474, 0.1417, 0.1381, 0.1363],
       [0.1678, 0.1634, 0.1588, 0.1537, 0.1493, 0.1467, 0.1455],
       [0.1678, 0.1644, 0.1609, 0.1572, 0.1541, 0.1522, 0.1513],
       [0.1694, 0.1666, 0.1638, 0.1608, 0.1584, 0.1569, 0.1562],
       [0.1701, 0.168 , 0.166 , 0.164 , 0.1623, 0.1614, 0.161 ],
       [0.1715, 0.1698, 0.1682, 0.1667, 0.1654, 0.1648, 0.1645],
       [0.1724, 0.171 , 0.1697, 0.1684, 0.1675, 0.1671, 0.1669]]

# initial parameters for Heston model
theta = 0.01
kappa = 0.01
sigma = 0.01
rho = 0.01
v0 = 0.01

# bounds for model parameters (1=theta, 2=kappa, 3=sigma, 4=rho, 5=v0)
bounds = [(0.01, 1.0), (0.01, 10.0), (0.01, 1.0), (-1.0, 1.0), (0.01, 1.0)]

# calibrate Heston model, print calibrated parameters
calibrationResult = HestonModelCalibrator(valuationDate, calendar, spot, curveHandle, dividendHandle, 
        v0, kappa, theta, sigma, rho, expiration_dates, strikes, data, opt.differential_evolution, bounds)
print('calibrated Heston parameters', calibrationResult[1].params())
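
# note: the returned parameter array is ordered (theta, kappa, sigma, rho, v0),
# matching the bounds above; a quick sanity check for the Feller condition
theta, kappa, sigma, rho, v0 = list(calibrationResult[1].params())
print('Feller condition (2*kappa*theta > sigma^2):', 2.0 * kappa * theta > sigma * sigma)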

# monte carlo parameters
nPaths = 10000

# request and print PV
PV = AutoCallableNote(valuationDate, couponDates, strike, pastFixings, 
    autoCallBarrier, couponBarrier, protectionBarrier, hasMemory, finalRedemptionFormula, 
    coupon, notional, dayCounter, calibrationResult[0], HestonPathGenerator, nPaths, curveHandle)

print(PV)

Finally, thanks for reading this blog and have a pleasant wait for the coming Christmas.
-Mike

Sunday, November 17, 2019

QuantLib-Python: Monte Carlo Valuation for Target Accrual Redemption Note

Out of curiosity, I wanted to create an implementation for an interest rate Target Accrual Redemption Note (TARN) by using the QuantLib-Python library. Now, as one might be aware, the availability of the QuantLib Monte Carlo framework for Python is limited (due to the templatization of the original C++ classes) to a few existing implementations. This means that one is not able to implement a new MC valuation scheme directly by using this specific framework. What to do? There are some workarounds. One may implement such a new scheme by using the QuantLib C++ library and then create a wrapper for this library by using SWIG. Some instructions on how to proceed with the SWIG path have also been presented here. Another way is just to give up the usual QuantLib "Instrument/Engine" paradigm and use all the nice pieces available. The complete program can be downloaded from my GitHub page.

TARN


This product is a path-dependent structured note, which terminates when the target coupon amount is reached. When the cumulative coupon amount reaches this target before maturity, the holder of the note receives a final payment and the contract terminates immediately. The TARN coupon payoff is usually structured to be similar to an inverse floating rate note. There is (usually) also an attractive teaser coupon attached to the beginning of the structure, combined with the possibility of getting back the par value relatively quickly. If the future index stays low or goes even lower, the target coupon will be reached early and the investor will enjoy the benefits of high returns on this short-lived investment. However, in the worst case, if the future index path goes sky high, the investor will be "bleeding to death" with this long-dated investment yielding inferior returns.
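
The termination mechanics can be illustrated with a toy NumPy example, mirroring the cumulative-sum logic used in the valuation method further below (the coupon amounts here are hypothetical):

import numpy as np

# hypothetical accrued coupon amounts per period, against a 25% target coupon
targetCoupon = 0.25
coupons = np.array([0.10, 0.10, 0.08, 0.05])
# keep only the coupons for which the running sum has not yet breached the target
kept = np.where(np.cumsum(coupons) <= targetCoupon)
print('coupons kept before termination:', coupons[kept]) # [0.1 0.1]
# if the note has to reach the target, the last kept coupon is topped up to hit it exactly
finalCoupon = targetCoupon - np.sum(coupons[kept][:-1]) # 0.15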

Path Generator


For this specific implementation (for the sake of being as accurate as possible), I gave up using the QuantLib Path Generator, because it creates stochastic paths only for an evenly spaced set of points in time. Below is a simple method, which generates paths for a given set of QuantLib dates, which can be unevenly distributed.

# path generator for a given 1d stochastic process and 
# a given set of QuantLib dates, which can be unevenly distributed
# uses process evolve method, which returns asset value after time interval Δt
# returns E(x0,t0,Δt) + S(x0,t0,Δt) ⋅ Δw, where E is expectation and S standard deviation
# input arguments:
#   x0 = asset value at inception
#   dates = array of dates
#   dayCounter = QuantLib day counter
#   process = QuantLib 1D stochastic process implementation
#   nPaths = number of paths to be simulated
def PathGenerator(x0, dates, dayCounter, process, nPaths):
    t = np.array([dayCounter.yearFraction(dates[0], d) for d in dates])    
    urg = ql.UniformRandomGenerator()
    ursg = ql.UniformRandomSequenceGenerator(t.shape[0] - 1, urg)
    grsg = ql.GaussianRandomSequenceGenerator(ursg)    
    paths = np.zeros(shape = (nPaths, t.shape[0]))
    
    for j in range(nPaths):
        dw = np.array(list(grsg.nextSequence().value()))
        x = x0
        path = []

        for i in range(1, t.shape[0]):
            x = process.evolve(t[i-1], x, (t[i] - t[i-1]), dw[i-1])
            path.append(x)
            
        path = np.hstack([np.array([x0]), np.array(path)])
        paths[j,:] = path
        
    # return array dimensions: [number of paths, number of items in t array]
    return paths

Main Program


First, we create the usual set of required QuantLib parameters, such as the valuation date. For the sake of being able to value this product also after its inception, the valuation method takes a QuantLib Index object as one argument. All past fixing dates and rates can be stored into this index. Now, if one changes the valuation date (currently 4.3.2018) to any possible date after inception and before transaction maturity, the product will be valued accordingly. It should be noted that in such a scheme it is crucial to provide all relevant past fixings; failure to do this will lead to an exception.

For creating the valuation curve (curveHandle) and the short rate process (HW1F), I have decided to take the shortest possible path and just created a flat forward curve and assumed constants for the short rate process parameters (mean reversion and short rate volatility), instead of calibrating this model to actual market data. For any "real-life" purposes, one may take a look at some proper implementations given here for the valuation curve and here for the model calibration.

Next, all transaction parameters are created one by one. Coupon dates are created by using the QuantLib Schedule object. Finally, the PV and the average termination time point are requested from the TARN method and printed.

import QuantLib as ql
import numpy as np

# define general QuantLib-related parameters
valuationDate = ql.Date(4,3,2018)
calendar = ql.TARGET()
convention = ql.ModifiedFollowing
dayCounter = ql.Actual360()
ql.Settings.instance().evaluationDate = valuationDate

# set index object and past fixings
pastFixingsDates = np.array([ql.Date(4,3,2019), ql.Date(4,3,2020)])
pastFixingsRates = np.array([0.05, 0.05])
index = ql.USDLibor(ql.Period(12, ql.Months))
index.clearFixings()
index.addFixings(pastFixingsDates, pastFixingsRates)

# create discounting curve and process for short rate
r0 = 0.015
curveHandle = ql.YieldTermStructureHandle(ql.FlatForward(valuationDate, r0, dayCounter))
a = 0.05
vol = 0.009
HW1F = ql.HullWhiteProcess(curveHandle, a, vol)

# define bond-related parameters
startDate = ql.Date(4,3,2018)
firstCouponDate = calendar.advance(startDate, ql.Period(1, ql.Years))
lastCouponDate = calendar.advance(startDate, ql.Period(10, ql.Years))
couponDates = np.array(list(ql.Schedule(firstCouponDate, lastCouponDate, ql.Period(ql.Annual), 
    calendar, ql.ModifiedFollowing, ql.ModifiedFollowing, ql.DateGeneration.Forward, False)))
teaserCoupon = np.array([0.1])
targetCoupon = 0.25
hasToReachTarget = True
cap = 0.15
floor = 0.0
fixedRate = 0.1
factor = 3.0
structuredCouponPayoff = lambda r: max(fixedRate - factor * r, 0.0)
notional = 1000000.0

# define monte carlo-related parameters
nPaths = 10000

# request result (PV and average termination point)
result = TARN(startDate, valuationDate, couponDates, targetCoupon, teaserCoupon,
    cap, floor, hasToReachTarget, structuredCouponPayoff, notional, dayCounter,
    nPaths, HW1F, curveHandle, index)

print('pv', '{0:.0f}'.format(result[0]))
print('termination', '{0:.1f}'.format(result[1]))

Valuation Method


The last piece of code shows the actual pricing method. Paths are generated for all remaining coupon dates, and past fixing rates are used for all coupon dates which are already in the past.

All simulated paths are then processed in a loop. Thanks to the amazing variety of array operations in the NumPy library, we can process a path without deploying any typical "inner loop" over path steps. The loop basically starts with a given set of simulated rates and, along the way, transforms this path into an array of cash flow present values. The operations are commented in the code.

def TARN(bondStartDate, valuationDate, couponDates, targetCoupon, teaserCoupon, cap, floor,
    hasToReachTarget, payoff, notional, dayCounter, nPaths, process, curve, index):    
    
    # immediate exit trigger for matured transaction
    if(valuationDate >= couponDates[-1]):
        return (0.0, 0.0)
    
    # create date array for path generator
    # combine valuation date and all remaining coupon dates
    dates = np.hstack((np.array([valuationDate]), couponDates[couponDates > valuationDate]))
    
    # generate paths for a given set of dates, exclude the current spot rate
    paths = PathGenerator(process.x0(), dates, dayCounter, process, nPaths)[:,1:]
    
    # identify past coupon dates
    pastDates = couponDates[couponDates <= valuationDate]
    # conditionally, merge given past fixings and generated paths
    if(pastDates.shape[0] > 0):
        pastFixings = np.array([index.fixing(pastDate) for pastDate in pastDates])    
        pastFixings = np.tile(pastFixings, (paths.shape[0], 1))
        paths = np.hstack((pastFixings, paths))
        
    # define time grid for all coupon dates, calculate day count fractions
    t = np.array([0.0] + [dayCounter.yearFraction(bondStartDate, d) for d in couponDates])
    dcf = np.diff(t)
    
    # result accumulators
    global_pv = []
    termination = []

    # calculate PV for all paths
    for path in paths:
        # transform simulated path into structured rates using payoff function
        path = (np.vectorize(payoff))(path)
        index_1 = np.where(teaserCoupon > 0.0)
        # replace some path rates with teaser coupons (if exists)
        path[index_1] = teaserCoupon
        # calculate capped and floored structured coupon for non-teaser rates
        path = np.concatenate([path[index_1], np.minimum(path[index_1[0].shape[0]:], cap)])
        path = np.concatenate([path[index_1], np.maximum(path[index_1[0].shape[0]:], floor)])
        # multiply rates with day count fractions
        path = np.multiply(path, dcf)
        # take into account only rates, for which cumulative sum is less or equal to target coupon
        index_2 = np.where(np.cumsum(path) <= targetCoupon)
        path = path[index_2]
        dates = couponDates[index_2]
        # path termination time is the date, which reaches target coupon
        termination.append(dayCounter.yearFraction(valuationDate, dates[-1]))
        # if coupon has to reach target coupon, add remaining coupon available into final coupon
        if(hasToReachTarget): path[-1] = targetCoupon - np.sum(path[:-1])
        # multiply coupon rates with notionals, add final redemption
        path *= notional
        path[-1] += notional
        # take into account only coupons, for which coupon dates are in the future
        index_3 = np.where(dates >= valuationDate)
        dates = dates[index_3]
        path = path[index_3]
        # request discount factors for all coupon dates
        df = np.array([curve.discount(d) for d in dates])
        # calculate coupon PV's
        path = np.multiply(path, df)
        # add path PV into result accumulator
        global_pv.append(np.sum(path))

    # return tuple (pv, average termination time)
    return (np.mean(np.array(global_pv)), np.mean(np.array(termination)))

A couple of final notes. The presented example values a product in which the coupon is exposed to interest rates. However, a notable amount of TARN products issued (in the past) have been exposing their coupons to movements in FX rates. Since any one-dimensional process can be used for path generation purposes in the valuation method, there is a possibility to use some of the existing QuantLib processes for modeling the path of future FX rates as well; a sketch of this idea is given below. Also, the structured coupon payoff can be defined outside the valuation method. Since all types of TARN products (usually) share pretty much the same other properties (e.g. path-dependency, teaser rates), the valuation method is relatively "generic" after all.
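
Here is a hedged sketch of that FX idea (all parameters are hypothetical; plain geometric Brownian motion serves as a stand-in FX model, and the other variables are reused from the main program above):

# sketch: drive the same TARN valuation with a one-dimensional FX-style process
fxSpot = 1.10 # assumed EURUSD spot
fxDrift = 0.0 # assumed risk-neutral drift
fxVol = 0.08  # assumed FX volatility
GBM = ql.GeometricBrownianMotionProcess(fxSpot, fxDrift, fxVol)

# hypothetical FX-linked structured coupon: pays more when the FX rate weakens
fxCouponPayoff = lambda fx: max(fixedRate - factor * (fx / fxSpot - 1.0), 0.0)

result = TARN(startDate, valuationDate, couponDates, targetCoupon, teaserCoupon,
    cap, floor, hasToReachTarget, fxCouponPayoff, notional, dayCounter,
    nPaths, GBM, curveHandle, index)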

Thanks for reading this blog.
-Mike

Sunday, October 20, 2019

Python-QuantLib-SciPy: Optimizing Smooth Libor Forward Curve Revisited

One reader remarked that my implementation of the curve calibration scheme presented here was not done by using QuantLib. As I was re-thinking my implementation, I suddenly remembered that in QuantLib there are actually several ways to create yield term structures. One such approach is to create a yield term structure from a set of given dates and forward rates. Also, the VanillaSwap class gives us a direct mechanism for valuing a swap transaction by using the constructed curve. Bingo. So, this post revisits the curve calibration scheme, but this time implemented by using the relevant QuantLib-Python library tools.

A few words about the utility classes. The Convert class is used for transforming specific built-in data types into specific QuantLib types (Date, Calendar, DayCounter, etc). The VanillaSwap class is just a wrapper for the corresponding QuantLib.VanillaSwap class. It should be noted that there is a conventional NPV method for requesting the swap PV by giving two specific arguments (yield term structure and floating leg index). There is also another method, NPV_calibration, which is used for calculating the swap PV during the calibration process. For this method, the input arguments are a list of forward rates, a list of forward dates and a list of initial market rates. From this information, a specific yield term structure implementation (QuantLib.ForwardCurve) is created and used for valuing the swap transaction.

import re
import numpy as np
import scipy.optimize as opt
import matplotlib.pyplot as pl
import QuantLib as ql

# utility class for different QuantLib type conversions 
class Convert():
    
    # convert date string ('yyyy-mm-dd') to QuantLib Date object
    def to_date(s):
        monthDictionary = {
            '01': ql.January, '02': ql.February, '03': ql.March,
            '04': ql.April, '05': ql.May, '06': ql.June,
            '07': ql.July, '08': ql.August, '09': ql.September,
            '10': ql.October, '11': ql.November, '12': ql.December
        }
        arr = re.findall(r"[\w']+", s)
        return ql.Date(int(arr[2]), monthDictionary[arr[1]], int(arr[0]))
    
    # convert string to QuantLib businessdayconvention enumerator
    def to_businessDayConvention(s):
        if (s.upper() == 'FOLLOWING'): return ql.Following
        if (s.upper() == 'MODIFIEDFOLLOWING'): return ql.ModifiedFollowing
        if (s.upper() == 'PRECEDING'): return ql.Preceding
        if (s.upper() == 'MODIFIEDPRECEDING'): return ql.ModifiedPreceding
        if (s.upper() == 'UNADJUSTED'): return ql.Unadjusted
        
    # convert string to QuantLib calendar object
    def to_calendar(s):
        if (s.upper() == 'TARGET'): return ql.TARGET()
        if (s.upper() == 'UNITEDSTATES'): return ql.UnitedStates()
        if (s.upper() == 'UNITEDKINGDOM'): return ql.UnitedKingdom()
        # TODO: add new calendar here
        
    # convert string to QuantLib swap type enumerator
    def to_swapType(s):
        if (s.upper() == 'PAYER'): return ql.VanillaSwap.Payer
        if (s.upper() == 'RECEIVER'): return ql.VanillaSwap.Receiver
        
    # convert string to QuantLib frequency enumerator
    def to_frequency(s):
        if (s.upper() == 'DAILY'): return ql.Daily
        if (s.upper() == 'WEEKLY'): return ql.Weekly
        if (s.upper() == 'MONTHLY'): return ql.Monthly
        if (s.upper() == 'QUARTERLY'): return ql.Quarterly
        if (s.upper() == 'SEMIANNUAL'): return ql.Semiannual
        if (s.upper() == 'ANNUAL'): return ql.Annual

    # convert string to QuantLib date generation rule enumerator
    def to_dateGenerationRule(s):
        if (s.upper() == 'BACKWARD'): return ql.DateGeneration.Backward
        if (s.upper() == 'FORWARD'): return ql.DateGeneration.Forward
        # TODO: add new date generation rule here

    # convert string to QuantLib day counter object
    def to_dayCounter(s):
        if (s.upper() == 'ACTUAL360'): return ql.Actual360()
        if (s.upper() == 'ACTUAL365FIXED'): return ql.Actual365Fixed()
        if (s.upper() == 'ACTUALACTUAL'): return ql.ActualActual()
        if (s.upper() == 'ACTUAL365NOLEAP'): return ql.Actual365NoLeap()
        if (s.upper() == 'BUSINESS252'): return ql.Business252()
        if (s.upper() == 'ONEDAYCOUNTER'): return ql.OneDayCounter()
        if (s.upper() == 'SIMPLEDAYCOUNTER'): return ql.SimpleDayCounter()
        if (s.upper() == 'THIRTY360'): return ql.Thirty360()
        
    # convert string (ex.'USDLibor.3M') to QuantLib ibor index object
    # note: forwarding term structure has to be linked to index object separately
    def to_iborIndex(s):
        s = s.split('.')
        if(s[0].upper() == 'USDLIBOR'): return ql.USDLibor(ql.Period(s[1]))
        if(s[0].upper() == 'EURIBOR'): return ql.Euribor(ql.Period(s[1]))  

class VanillaSwap(object):
    
    def __init__(self, ID, swapType, nominal, startDate, maturityDate, fixedLegFrequency, 
        fixedLegCalendar, fixedLegConvention, fixedLegDateGenerationRule, fixedLegRate, fixedLegDayCount,
        fixedLegEndOfMonth, floatingLegFrequency, floatingLegCalendar, floatingLegConvention, 
        floatingLegDateGenerationRule, floatingLegSpread, floatingLegDayCount, 
        floatingLegEndOfMonth, floatingLegIborIndex):

        # create member data, convert all required QuantLib types
        self.ID = str(ID)
        self.swapType = Convert.to_swapType(swapType)
        self.nominal = float(nominal)
        self.startDate = Convert.to_date(startDate)
        self.maturityDate = Convert.to_date(maturityDate)
        self.fixedLegFrequency = ql.Period(Convert.to_frequency(fixedLegFrequency))
        self.fixedLegCalendar = Convert.to_calendar(fixedLegCalendar)
        self.fixedLegConvention = Convert.to_businessDayConvention(fixedLegConvention)
        self.fixedLegDateGenerationRule = Convert.to_dateGenerationRule(fixedLegDateGenerationRule)
        self.fixedLegRate = float(fixedLegRate)
        self.fixedLegDayCount = Convert.to_dayCounter(fixedLegDayCount)
        self.fixedLegEndOfMonth = bool(fixedLegEndOfMonth)
        self.floatingLegFrequency = ql.Period(Convert.to_frequency(floatingLegFrequency))
        self.floatingLegCalendar = Convert.to_calendar(floatingLegCalendar)
        self.floatingLegConvention = Convert.to_businessDayConvention(floatingLegConvention)
        self.floatingLegDateGenerationRule = Convert.to_dateGenerationRule(floatingLegDateGenerationRule)
        self.floatingLegSpread = float(floatingLegSpread)
        self.floatingLegDayCount = Convert.to_dayCounter(floatingLegDayCount)
        self.floatingLegEndOfMonth = bool(floatingLegEndOfMonth)
        self.floatingLegIborIndex = Convert.to_iborIndex(floatingLegIborIndex)

        # create fixed leg schedule
        self.fixedLegSchedule = ql.Schedule(
            self.startDate, 
            self.maturityDate,
            self.fixedLegFrequency, 
            self.fixedLegCalendar, 
            self.fixedLegConvention,
            self.fixedLegConvention,
            self.fixedLegDateGenerationRule,
            self.fixedLegEndOfMonth)
        
        # create floating leg schedule
        self.floatingLegSchedule = ql.Schedule(
            self.startDate, 
            self.maturityDate,
            self.floatingLegFrequency, 
            self.floatingLegCalendar, 
            self.floatingLegConvention,
            self.floatingLegConvention,
            self.floatingLegDateGenerationRule,
            self.floatingLegEndOfMonth)

    # NPV method used for specific calibration purposes
    # x = list of forward rates
    # args: 0 = list of forward dates, 1 = list of market rates        
    def NPV_calibration(self, x, args):
        # concatenate given market rates and given forward rates
        x = np.concatenate([args[1], x])
        
        # create QuantLib yield term structure object
        # from a given set of forward rates and dates
        curve = ql.YieldTermStructureHandle(ql.ForwardCurve(args[0], x, 
            self.floatingLegDayCount, self.floatingLegCalendar))        

        # set forwarding term structure to floating leg index
        self.floatingLegIborIndex = self.floatingLegIborIndex.clone(curve) 
        
        # create vanilla interest rate swap
        self.instrument = ql.VanillaSwap(
            self.swapType, 
            self.nominal, 
            self.fixedLegSchedule, 
            self.fixedLegRate, 
            self.fixedLegDayCount, 
            self.floatingLegSchedule,
            self.floatingLegIborIndex,
            self.floatingLegSpread, 
            self.floatingLegDayCount)
        
        # pair instrument with pricing engine, request PV
        self.instrument.setPricingEngine(ql.DiscountingSwapEngine(curve))
        return self.instrument.NPV()
    
    # NPV method used for general pricing purposes
    def NPV(self, yieldTermStructureHandle, floatingLegIborIndex):
        # set forwarding term structure to floating leg index
        self.floatingLegIborIndex = floatingLegIborIndex.clone(yieldTermStructureHandle)
        
        # create vanilla interest rate swap
        self.instrument = ql.VanillaSwap(
            self.swapType, 
            self.nominal, 
            self.fixedLegSchedule, 
            self.fixedLegRate, 
            self.fixedLegDayCount, 
            self.floatingLegSchedule,
            self.floatingLegIborIndex,
            self.floatingLegSpread, 
            self.floatingLegDayCount)
        
        # pair instrument with pricing engine, request PV
        self.instrument.setPricingEngine(ql.DiscountingSwapEngine(yieldTermStructureHandle))
        return self.instrument.NPV()

ObjectiveFunction is used in the minimization process for calculating the sum of squared differences between all adjacent forward rates. Maximum smoothness of the curve is achieved as a result of this minimization.

# objective function calculates sum of squared errors of all decision variables
# x = list of forward rates
# args: 0 = list of market rates data, 1 = scaling factor
def ObjectiveFunction(x, args):
    # concatenate given market rates and forward rates
    x = np.concatenate([args[0], x])
    return np.sum(np.power(np.diff(x), 2) * args[1])

The actual program flow starts by creating a set of vanilla interest rate swap objects (VanillaSwap) into a list.

# dynamic data parts for set of vanilla swaps 
swapIDs = ['2Y', '3Y', '4Y', '5Y', '6Y', '7Y', '8Y', '9Y', '10Y', '12Y', '15Y', '20Y', '25Y', '30Y']
maturities = ['2010-02-08', '2011-02-07', '2012-02-06', '2013-02-06', '2014-02-06', '2015-02-06', '2016-02-08', 
    '2017-02-06', '2018-02-06', '2020-02-06', '2023-02-06', '2028-02-07', '2033-02-07', '2038-02-08']
swapRates = [0.02795, 0.03035, 0.03275, 0.03505, 0.03715, 0.03885, 0.04025, 0.04155, 0.04265, 0.04435, 
    0.04615, 0.04755, 0.04805, 0.04815]

# create vanilla swap transaction objects into list
swaps = [VanillaSwap(swapID, 'Payer', 1000000, '2008-02-06', maturity, 'Annual', 
    'Target', 'ModifiedFollowing', 'Backward', swapRate, 'Actual360', False, 'Quarterly', 
    'Target', 'ModifiedFollowing', 'Backward', 0.0, 'Actual360', False, 'USDLibor.3M')
    for swapID, maturity, swapRate in zip(swapIDs, maturities, swapRates)]

Next, the program initializes the actual market data (to be used in the forward curve), the starting values for the forward curve, the set of dates for the forward curve, the optimization constraints and the actual optimization model.

# take initial forward rates from market data, set initial guesses and scaling factor for objective function
initialMarketData = np.array([0.03145, 0.0279275, 0.0253077, 0.0249374])
initialForwardGuesses = np.full(117, 0.02)
scalingFactor = 1000000.0

# create relevant QuantLib dates
today = ql.Date(4, 2, 2008)
ql.Settings.instance().evaluationDate = today  
settlementDate = ql.TARGET().advance(today, ql.Period(2, ql.Days))

# create set of dates for forward curve
dates = list(ql.Schedule(settlementDate, ql.Date(6, 2, 2038), ql.Period(ql.Quarterly), ql.TARGET(),
    ql.ModifiedFollowing, ql.ModifiedFollowing, ql.DateGeneration.Backward, False))

# create constraints for optimization model
swapConstraints = tuple([{'type': 'eq', 'fun': swap.NPV_calibration, 'args': [[dates, initialMarketData]]} for swap in swaps])

# create and execute scipy minimization model
model = opt.minimize(ObjectiveFunction, initialForwardGuesses, args = ([initialMarketData, scalingFactor]), 
    method = 'SLSQP', options = {'maxiter': 500}, constraints = swapConstraints)

After processing the optimization model, the calibrated forward rates can be requested from the model. The next part of the program just plots the calibrated forward term structure.

# extract calibrated forward rates, create times and plot term structure
forwards = np.concatenate([initialMarketData, model.x])
times = np.array([ql.Actual360().yearFraction(settlementDate, date) for date in dates])
pl.plot(times, forwards)
pl.show()

The final part of the program creates a QuantLib yield term structure (QuantLib.ForwardCurve) and re-values our initial set of swap transactions. All swaps will be priced to zero at the inception date.

# create new QuantLib curve from calibrated forward rates
curve = ql.YieldTermStructureHandle(ql.ForwardCurve(dates, forwards, ql.Actual360(), ql.TARGET()))

# value initial set of vanilla swaps using new QuantLib valuation curve
# all swaps will be priced to zero at inception date
for swap in swaps:
    index = ql.USDLibor(ql.Period(3, ql.Months), curve)
    pv = swap.NPV(curve, index)
    print(swap.ID, '{0:.5f}'.format(pv))

It should be noted that by utilizing the calibration scheme presented in this post, valuation curves (yield term structures) can be constructed for QuantLib for all currencies and for any basis (assuming the relevant market data exists). The next logical step would be to implement a similar scheme for calibrating QuantLib valuation curves for cross-currency cases and within the collateralized world.

The complete program can be downloaded from my GitHub page. The example data has been taken from chapter three of the excellent book by Richard Flavell. Finally, thanks again for reading this blog.
-Mike

Wednesday, May 1, 2019

Python-SciPy: Optimizing Smooth Libor Forward Curve

In my previous post, I presented how to use the Python SciPy optimization package for numerically solving a zero-coupon rate term structure from a given set of zero-coupon bond prices. In this post, the same approach is used to solve a smooth Libor forward curve from a given set of vanilla swaps. An example of applying this scheme using Solver and Excel/VBA can be found here.

In order to value fixed income derivative cash flows, the relevant forward rates and discount factors have to be defined from a bootstrapped zero-coupon curve. In a Libor world, we use cash and FRA contracts (or futures contracts) in the short end of the curve, while in the long end of the curve we use swap contracts. Let us assume for a moment that we bootstrap a USD 3M zero-coupon curve, in order to get 3M forward rates on a quarterly basis. While the bootstrapping approach usually gives a smooth curve in the short end, it will usually fail to do so in the long end. This happens because swap contracts are not quoted for every maturity in the long end of the curve and hence we need to do a lot of interpolation for the required swap rates. The resulting forward curve then looks like a chain of waterfalls and, for the long end of the curve, smoothness is lost. This topic has been thoroughly covered in chapter three of this excellent book by Richard Flavell.

In a nutshell, we first define the shortest end of the curve (using the spot cash rate) and then we just guess the rest of the forward rates. After this, we minimize the sum of squared differences between adjacent forward rates, subject to constraints. In this case, the constraints are the present values of the swaps, which by definition must be zero. When this optimization task has been performed, the resulting forward curve successfully re-prices the current swap market.
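
Stated a bit more formally: with the first forward rate f(1) fixed to the spot cash rate, the task is to minimize the sum of (f(i+1) - f(i))^2 over all adjacent forward rates, subject to the equality constraints PV(swap j) = 0 for every market swap j.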

A Bloomberg forward curve screen for the same set of data is available here. Finally, thanks again for reading my blog.
-Mike

import numpy as np
import scipy.optimize as opt
import matplotlib.pyplot as pl

# sum of squared errors of all decision variables
# args: 0 = array of given initial rates, 1 = scaling factor
def ObjectiveFunction(x, args):
    # concatenate given initial rates and given 'guesses' for forward rates
    x = np.concatenate([args[0], x])
    return np.sum(np.power(np.diff(x), 2) * args[1])

# x = array of Libor forward rates
# args: 0 = swap rate, 1 = years to maturity, 2 = floating leg payments per year, 
# 3 = notional, 4 = array of given initial rates
def VanillaSwapPV(x, args):
    # PV (fixed payer swap) = -swapRate * Q(T) + (1 - DF(T))
    # where Q(T) is sumproduct of fixed leg discount factors and corresponding time accrual factors
    # assumption : fixed leg is always paid annually
    # since fixed leg is paid annually and maturity is always integer, 
    # time accrual factor will always be exactly 1.0
    
    # concatenate given initial rates and given 'guesses' for forward rates
    x = np.concatenate([args[4], x])
    DF = 0.0
    Q  = 0.0
    floatingLegTenor = 1 / args[2]
    nextFixedLegCouponDate = 1
    currentTimePoint = 0.0
    nCashFlows = int(args[1] * args[2])
    
    for i in range(nCashFlows):
        currentTimePoint += floatingLegTenor
        # first rate is always spot rate: calculate first spot discount factor
        if (i == 0):
            DF = 1 / (1 + x[i] * floatingLegTenor)
        # other rates are always forward rates
        if (i > 0):
            # solve current spot discount factor using previous spot discount factor
            # and current forward discount factor: DF(0,T) = DF(0,t) * DF(t,T), where (0 < t < T)
            DF *= (1 / (1 + x[i] * floatingLegTenor))
            if ((currentTimePoint - nextFixedLegCouponDate) == 0):
                Q += DF
                nextFixedLegCouponDate += 1
                
    return (-args[0] * Q + (1.0 - DF)) * args[3]

# array of given initial rates (the first given rate has to be spot cash rate)
# here, we give only one: 3M spot Libor fixing rate ('cash-to-first-future')
initialRates = np.array([0.006276])
scalingFactor = 1000000.0

# vanilla swap pricing functions as constraints
# args: swapRate, yearsToMaturity, floatingLegPaymentFrequency, notional, array of given initial rates
swaps = ({'type': 'eq', 'fun': VanillaSwapPV, 'args': [[0.0078775, 1, 4, scalingFactor, initialRates]]},
        {'type': 'eq', 'fun': VanillaSwapPV, 'args': [[0.009295, 2, 4, scalingFactor, initialRates]]},
        {'type': 'eq', 'fun': VanillaSwapPV, 'args': [[0.0103975, 3, 4, scalingFactor, initialRates]]},
        {'type': 'eq', 'fun': VanillaSwapPV, 'args': [[0.01136, 4, 4, scalingFactor, initialRates]]},
        {'type': 'eq', 'fun': VanillaSwapPV, 'args': [[0.012268, 5, 4, scalingFactor, initialRates]]},
        {'type': 'eq', 'fun': VanillaSwapPV, 'args': [[0.013183, 6, 4, scalingFactor, initialRates]]},
        {'type': 'eq', 'fun': VanillaSwapPV, 'args': [[0.01404, 7, 4, scalingFactor, initialRates]]},
        {'type': 'eq', 'fun': VanillaSwapPV, 'args': [[0.014829, 8, 4, scalingFactor, initialRates]]},
        {'type': 'eq', 'fun': VanillaSwapPV, 'args': [[0.01554, 9, 4, scalingFactor, initialRates]]},
        {'type': 'eq', 'fun': VanillaSwapPV, 'args': [[0.016197, 10, 4, scalingFactor, initialRates]]},
        {'type': 'eq', 'fun': VanillaSwapPV, 'args': [[0.016815, 11, 4, scalingFactor, initialRates]]},
        {'type': 'eq', 'fun': VanillaSwapPV, 'args': [[0.017367, 12, 4, scalingFactor, initialRates]]},
        {'type': 'eq', 'fun': VanillaSwapPV, 'args': [[0.018645, 15, 4, scalingFactor, initialRates]]},
        {'type': 'eq', 'fun': VanillaSwapPV, 'args': [[0.01996, 20, 4, scalingFactor, initialRates]]},
        {'type': 'eq', 'fun': VanillaSwapPV, 'args': [[0.020615, 25, 4, scalingFactor, initialRates]]},
        {'type': 'eq', 'fun': VanillaSwapPV, 'args': [[0.02097, 30, 4, scalingFactor, initialRates]]},
        {'type': 'eq', 'fun': VanillaSwapPV, 'args': [[0.021155, 40, 4, scalingFactor, initialRates]]},
        {'type': 'eq', 'fun': VanillaSwapPV, 'args': [[0.021017, 50, 4, scalingFactor, initialRates]]})

# initial guesses for Libor forward rates up to 50 years
# number of rates: 50 * 4 - 1 (years to maturity * floating leg payments per year - first given cash spot rate)
initialGuesses = np.full(199, 0.006)
model = opt.minimize(ObjectiveFunction, initialGuesses, args = ([initialRates, scalingFactor]), 
    method = 'SLSQP', options = {'maxiter': 500}, constraints = swaps)

# print selected model results
print('Success: ' + str(model.success))
print('Message: ' + str(model.message))
print('Number of iterations: ' + str(model.nit))
print('Objective function (sum of squared errors): ' + str(model.fun))
#print('Changing variables (Libor forward rates): ' + str(model.x * 100))
pl.plot(model.x)
pl.show()

Monday, December 17, 2018

QuantLib-Python: Exposure Simulation

This Python program uses QuantLib library tools for simulating exposures for one selected Bloomberg vanilla benchmark swap transaction. Based on the simulated exposures, the program then calculates Expected Positive Exposure (EPE) and Expected Negative Exposure (ENE), as well as the corresponding CVA and DVA statistics.

Then, some (unfortunate) limitations: the program can only handle one transaction at a time, so simulating exposures for netting sets with several transactions is not possible. Also, the program can simulate only one risk factor at a time, so simulating exposures for transactions exposed to more than one risk factor is not possible. However, with some careful re-designing, these properties could also be implemented by using QuantLib library tools.

The complete program can be found in my GitHub repository. Thanks for reading this blog. Merry Christmas to everyone.
-Mike


Simulated exposures

Data

A few notes on data.
  • The swap transaction is a 5Y receiver swap vs. 3M USD Libor + spread. At inception, the swap PV has been solved to be zero. Details can be found in the screenshot below.
  • Interest rate data for the spot term structure (discount factors) has been retrieved from Bloomberg Swap Manager as of 12.12.2018.
  • Default term structures for both parties (counterparty, self) are created from flat CDS term structures (100 bps), as seen on the Bloomberg Swap Manager CVA tab.
  • Short rate simulations are processed by using the Hull-White one-factor model, with parameters calibrated to a given set of flat 20% swaption volatilities, as seen on the Bloomberg Swap Manager CVA tab (a minimal calibration sketch follows below).
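
The calibration code itself is not reproduced in this post, but a minimal Hull-White calibration sketch in the spirit of that last item could look as follows (all market inputs below are illustrative assumptions, not the actual Bloomberg data):

# hedged sketch: calibrate Hull-White (mean reversion, volatility) to flat 20% swaption volatilities
# assumed flat 3% curve and a co-terminal expiry/tenor grid, for illustration only
import QuantLib as ql

today = ql.Date(12,12,2018)
ql.Settings.instance().evaluationDate = today
curve = ql.YieldTermStructureHandle(ql.FlatForward(today, 0.03, ql.Actual360()))
index = ql.USDLibor(ql.Period(3, ql.Months), curve)

model = ql.HullWhite(curve)
engine = ql.JamshidianSwaptionEngine(model)

helpers = [ql.SwaptionHelper(ql.Period(expiry, ql.Years), ql.Period(tenor, ql.Years),
        ql.QuoteHandle(ql.SimpleQuote(0.20)), index,
        ql.Period(1, ql.Years), ql.Actual360(), ql.Actual360(), curve)
    for expiry, tenor in [(1, 4), (2, 3), (3, 2), (4, 1)]]

for helper in helpers: helper.setPricingEngine(engine)
model.calibrate(helpers, ql.LevenbergMarquardt(), ql.EndCriteria(1000, 100, 1e-8, 1e-8, 1e-8))
print('calibrated (mean reversion, volatility):', list(model.params()))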

Results

Bloomberg Swap Manager results: CVA = 6854 and DVA = 1557. Program results (for one run): CVA = 6727 and DVA = 1314, using weekly time steps and 1000 paths. However, "close enough" results can be achieved with a considerably smaller number of paths and a less dense time grid.
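
For reference, the discrete approximation typically behind these numbers is CVA ≈ (1 - R) * Σ DF(t_i) * EPE(t_i) * PD(t_i-1, t_i), where R is the recovery rate and PD(t_i-1, t_i) is the counterparty's marginal default probability over the interval; DVA is the mirror image, using ENE and the bank's own default curve.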

Screens

Bloomberg swap transaction

Bloomberg CVA

Bloomberg DVA

Bloomberg EPE

Program EPE

Bloomberg ENE

Program ENE