
Sunday, November 17, 2019

QuantLib-Python: Monte Carlo Valuation for Target Accrual Redemption Note

Out of curiosity, I wanted to create an implementation of an interest rate Target Accrual Redemption Note (TARN) using the QuantLib-Python library. Now, as one might be aware, the availability of the QuantLib Monte Carlo framework in Python is limited (due to the templatization of the original C++ classes) to a few existing implementations. This means that one is not able to implement a new MC valuation scheme directly within this specific framework. What to do? There are some workarounds. One may implement such a new scheme in the QuantLib C++ library and then create a wrapper for it by using SWIG; some instructions on how to proceed along the SWIG path have also been presented here. Another way is just to give up the usual QuantLib "Instrument/Engine" paradigm and just use all the nice pieces available. The complete program can be downloaded from my GitHub page.

TARN


This product is a path-dependent structured note, which terminates when the target coupon amount is reached. When the cumulative coupon amount reaches this target before maturity, the holder of the note receives the final payment and the contract terminates immediately. The TARN coupon payoff is usually structured to resemble an inverse floating rate note. There is (usually) also an attractive teaser coupon attached at the beginning of the structure, combined with the possibility of getting back the par value relatively quickly. If the underlying index stays low or goes even lower, the target coupon will be reached early and the investor will enjoy the benefits of high returns on a short-lived investment. However, in the worst case, if the index path goes sky high, the investor will be "bleeding to death" in a long-dated investment yielding inferior returns.
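
As a rough illustration of these mechanics (all parameters below are made up for this sketch), the following loop shows how an inverse-floater coupon accrues toward the target and terminates the note early:

# a rough sketch of TARN accrual mechanics (all parameters are made up)
fixedRate, factor, target = 0.10, 3.0, 0.25
cumulative = 0.0
for r in [0.01, 0.015, 0.02, 0.03]:               # hypothetical index fixings
    coupon = max(fixedRate - factor * r, 0.0)     # inverse floating coupon
    coupon = min(coupon, target - cumulative)     # never exceed remaining target
    cumulative += coupon
    if cumulative >= target:                      # target reached: final payment,
        break                                     # note terminates immediately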

Path Generator


For this specific implementation (for the sake of being as accurate as possible), I gave up using the QuantLib path generator, because it creates stochastic paths only for an evenly spaced set of points in time. Below is a simple method, which generates paths for a given set of QuantLib dates, which can be unevenly distributed.

import QuantLib as ql
import numpy as np

# path generator for a given 1d stochastic process and
# a given set of QuantLib dates, which can be unevenly distributed
# uses process evolve method, which returns asset value after time interval Δt
# returns E(x0,t0,Δt) + S(x0,t0,Δt) ⋅ Δw, where E is expectation and S standard deviation
# input arguments:
#   x0 = asset value at inception
#   dates = array of dates
#   dayCounter = QuantLib day counter
#   process = QuantLib 1D stochastic process implementation
#   nPaths = number of paths to be simulated
def PathGenerator(x0, dates, dayCounter, process, nPaths):
    t = np.array([dayCounter.yearFraction(dates[0], d) for d in dates])    
    urg = ql.UniformRandomGenerator()
    ursg = ql.UniformRandomSequenceGenerator(t.shape[0] - 1, urg)
    grsg = ql.GaussianRandomSequenceGenerator(ursg)    
    paths = np.zeros(shape = (nPaths, t.shape[0]))
    
    for j in range(nPaths):
        dw = np.array(list(grsg.nextSequence().value()))
        x = x0
        path = []

        for i in range(1, t.shape[0]):
            x = process.evolve(t[i-1], x, (t[i] - t[i-1]), dw[i-1])
            path.append(x)
            
        path = np.hstack([np.array([x0]), np.array(path)])
        paths[j,:] = path
        
    # return array dimensions: [number of paths, number of items in t array]
    return paths
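
As a quick usage sketch (assuming the HW1F process and day counter constructed in the main program below), paths can be requested for an arbitrary, unevenly spaced set of dates:

# usage sketch: generate 1000 short rate paths for unevenly spaced dates
dates = [ql.Date(4,3,2018), ql.Date(4,5,2018), ql.Date(4,3,2019), ql.Date(4,3,2021)]
paths = PathGenerator(HW1F.x0(), dates, dayCounter, HW1F, 1000)
print(paths.shape)    # (1000, 4), column 0 holds the initial value x0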

Main Program


First, we create the usual set of required QuantLib parameters, such as the valuation date. For the sake of being able to value this product also after its inception, the valuation method takes a QuantLib Index object as one argument. All past fixing dates and rates can be stored into this index. Now, if one changes the valuation date (currently 4.3.2018) to any possible date after inception and before transaction maturity, the product will be valued accordingly. It should be noted that in such a scheme it is crucial to provide all required past fixings; failure to do so will lead to an exception.
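
For example (a sketch using the parameters defined in the program below), revaluing the note roughly two years after inception only requires moving the evaluation date, given that the fixings for the elapsed coupon dates have been stored into the index:

# a sketch: revalue the note after inception by moving the evaluation date
# assumes the fixings for 4.3.2019 and 4.3.2020 have been stored into the index
valuationDate = ql.Date(4,6,2020)
ql.Settings.instance().evaluationDate = valuationDate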

For creating the valuation curve (curveHandle) and the short rate process (HW1F), I have decided to take the shortest possible route: I just created a flat forward curve and assumed constants for the short rate process parameters (mean reversion and short rate volatility), instead of calibrating the model to actual market data. For any "real-life" purposes, one may take a look at some proper implementations given here for the valuation curve and here for model calibration.
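
For illustration only, a minimal Hull-White calibration sketch using Jamshidian's swaption engine might look like the following (the swaption quotes here are pure assumptions, not market data):

# a minimal calibration sketch (illustrative only): fit Hull-White parameters
# to a set of co-terminal swaptions, quoted with made-up Black volatilities
model = ql.HullWhite(curveHandle)
engine = ql.JamshidianSwaptionEngine(model)
quotes = [(1, 4, 0.15), (2, 3, 0.15), (3, 2, 0.15), (4, 1, 0.15)]
helpers = [ql.SwaptionHelper(ql.Period(m, ql.Years), ql.Period(l, ql.Years),
    ql.QuoteHandle(ql.SimpleQuote(v)), index, ql.Period(1, ql.Years),
    dayCounter, dayCounter, curveHandle) for (m, l, v) in quotes]
for helper in helpers: helper.setPricingEngine(engine)
model.calibrate(helpers, ql.LevenbergMarquardt(),
    ql.EndCriteria(1000, 100, 1.0e-8, 1.0e-8, 1.0e-8))
a, vol = model.params()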

Next, all transaction parameters are created one by one. Coupon dates are created by using a QuantLib Schedule object. Finally, the PV and the average termination time point are requested from the TARN method and printed.

import QuantLib as ql
import numpy as np

# define general QuantLib-related parameters
valuationDate = ql.Date(4,3,2018)
calendar = ql.TARGET()
convention = ql.ModifiedFollowing
dayCounter = ql.Actual360()
ql.Settings.instance().evaluationDate = valuationDate

# set index object and past fixings
pastFixingsDates = np.array([ql.Date(4,3,2019), ql.Date(4,3,2020)])
pastFixingsRates = np.array([0.05, 0.05])
index = ql.USDLibor(ql.Period(12, ql.Months))
index.clearFixings()
index.addFixings(pastFixingsDates, pastFixingsRates)

# create discounting curve and process for short rate
r0 = 0.015
curveHandle = ql.YieldTermStructureHandle(ql.FlatForward(valuationDate, r0, dayCounter))
a = 0.05
vol = 0.009
HW1F = ql.HullWhiteProcess(curveHandle, a, vol)

# define bond-related parameters
startDate = ql.Date(4,3,2018)
firstCouponDate = calendar.advance(startDate, ql.Period(1, ql.Years))
lastCouponDate = calendar.advance(startDate, ql.Period(10, ql.Years))
couponDates = np.array(list(ql.Schedule(firstCouponDate, lastCouponDate, ql.Period(ql.Annual), 
    calendar, ql.ModifiedFollowing, ql.ModifiedFollowing, ql.DateGeneration.Forward, False)))
teaserCoupon = np.array([0.1])
targetCoupon = 0.25
hasToReachTarget = True
cap = 0.15
floor = 0.0
fixedRate = 0.1
factor = 3.0
structuredCouponPayoff = lambda r: max(fixedRate - factor * r, 0.0)
notional = 1000000.0

# define monte carlo-related parameters
nPaths = 10000

# request result (PV and average termination point)
result = TARN(startDate, valuationDate, couponDates, targetCoupon, teaserCoupon,
    cap, floor, hasToReachTarget, structuredCouponPayoff, notional, dayCounter,
    nPaths, HW1F, curveHandle, index)

print('pv', '{0:.0f}'.format(result[0]))
print('termination', '{0:.1f}'.format(result[1]))

Valuation Method


The last piece of code shows the actual pricing method. Paths are generated for all remaining coupon dates, while stored past fixing rates are used for all coupon dates which are already in the past.

All simulated paths are then processed in a loop. Thanks to the amazing variety of array operations in the NumPy library, we can process a path without deploying any typical "inner loop" over path steps. The loop basically starts with a given set of simulated rates and, along the way, transforms this path into an array of cash flow present values. The operations are commented in the code.

def TARN(bondStartDate, valuationDate, couponDates, targetCoupon, teaserCoupon, cap, floor,
    hasToReachTarget, payoff, notional, dayCounter, nPaths, process, curve, index):    
    
    # immediate exit trigger for matured transaction
    if(valuationDate >= couponDates[-1]):
        return (0.0, 0.0)
    
    # create date array for path generator
    # combine valuation date and all remaining coupon dates
    dates = np.hstack((np.array([valuationDate]), couponDates[couponDates > valuationDate]))
    
    # generate paths for a given set of dates, exclude the current spot rate
    paths = PathGenerator(process.x0(), dates, dayCounter, process, nPaths)[:,1:]
    
    # identify past coupon dates
    pastDates = couponDates[couponDates <= valuationDate]
    # conditionally, merge given past fixings and generated paths
    if(pastDates.shape[0] > 0):
        pastFixings = np.array([index.fixing(pastDate) for pastDate in pastDates])    
        pastFixings = np.tile(pastFixings, (paths.shape[0], 1))
        paths = np.hstack((pastFixings, paths))
        
    # define time grid for all coupon dates, calculate day count fractions
    t = np.array([0.0] + [dayCounter.yearFraction(bondStartDate, d) for d in couponDates])
    dcf = np.diff(t)
    
    # result accumulators
    global_pv = []
    termination = []

    # calculate PV for all paths
    for path in paths:
        # transform simulated path into structured rates using payoff function
        path = (np.vectorize(payoff))(path)
        # identify teaser coupon indices and replace path rates with teaser coupons (if any)
        index_1 = np.where(teaserCoupon > 0.0)
        path[index_1] = teaserCoupon
        # apply cap and floor to the non-teaser structured coupons
        nTeasers = index_1[0].shape[0]
        path = np.concatenate([path[:nTeasers], np.minimum(path[nTeasers:], cap)])
        path = np.concatenate([path[:nTeasers], np.maximum(path[nTeasers:], floor)])
        # multiply rates by day count fractions
        path = np.multiply(path, dcf)
        # keep only the coupons for which the cumulative sum stays at or below the target coupon
        index_2 = np.where(np.cumsum(path) <= targetCoupon)
        path = path[index_2]
        dates = couponDates[index_2]
        # the path terminates on the date when the target coupon is reached
        termination.append(dayCounter.yearFraction(valuationDate, dates[-1]))
        # if the target has to be reached, top up the final coupon with the remaining amount
        if(hasToReachTarget): path[-1] = targetCoupon - np.sum(path[:-1])
        # multiply coupon rates with notionals, add final redemption
        path *= notional
        path[-1] += notional
        # keep only the coupons whose payment dates are on or after the valuation date
        index_3 = np.where(dates >= valuationDate)
        dates = dates[index_3]
        path = path[index_3]
        # request discount factors for all coupon dates
        df = np.array([curve.discount(d) for d in dates])
        # calculate coupon PV's
        path = np.multiply(path, df)
        # add path PV into result accumulator
        global_pv.append(np.sum(path))

    # return tuple (pv, average termination time)
    return (np.mean(np.array(global_pv)), np.mean(np.array(termination)))

A couple of final notes. The presented example values a product in which the coupon is exposed to interest rates. However, a notable number of TARN products issued (in the past) have exposed their coupons to movements in FX rates. Since any one-dimensional process can be used for path generation in the valuation method, there is a possibility to use some of the existing QuantLib processes also for modeling the path of future FX rates. Also, the structured coupon payoff can be defined outside the valuation method. Since all types of TARN products (usually) share pretty much the same other properties (e.g. path-dependency, teaser rates), the valuation method is relatively "generic" after all.
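
As a sketch of such an FX-linked variant (the process parameters and payoff below are pure assumptions, and the index would need to provide past FX fixings), one could feed a one-dimensional geometric Brownian motion into the same valuation method:

# a sketch: FX-linked TARN variant (all parameters are assumptions)
fxProcess = ql.GeometricBrownianMotionProcess(1.10, 0.0, 0.10)
strike = 1.10
# hypothetical payoff: coupon increases as the FX rate fixes below the strike
fxPayoff = lambda fx: max(factor * (strike - fx), 0.0)
result = TARN(startDate, valuationDate, couponDates, targetCoupon, teaserCoupon,
    cap, floor, hasToReachTarget, fxPayoff, notional, dayCounter,
    nPaths, fxProcess, curveHandle, index)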

Thanks for reading this blog.
-Mike

Wednesday, December 26, 2018

QuantLib-Python: Multiprocessing Method Wrapper

In this post, I published a program for simulating the term structure up to 30 years with a daily time step, using the Hull-White one-factor model. The resulting curve was able to replicate the given initial yield curve without any notable differences. The only serious issue was the program's running time.

In order to improve this specific issue, I got familiar with some multi-threading possibilities in Python. Now, there are some deep issues related to the way Python threads are actually implemented, especially the thread locking known as the GIL; the related issues have been thoroughly chewed over here. One way to avoid facing GIL-related issues is to use Python multiprocessing, which allows the programmer to fully leverage multiple processors on a given machine. Moreover, processes are completely independent, and one process cannot affect another's variables.

Comparison statistics


Again, I simulated the term structure up to 30 years with a daily time step, using 10000 paths. I did some quick profiling in order to find the bottlenecks in the original (sequential) program. By looking at the task share column, we can see that two tasks consume the largest part of the complete running time: path generation (~54%) and path integration (~46%). After this, I isolated these two parts and processed them by using a multiprocessing scheme.

[Table: task-level running time profile, sequential vs. two-process multiprocessing]
By using multiprocessing (two configured processes), I managed to decrease the complete running time from 163 seconds to 107 seconds. In general, for all those parts which were enjoying the benefits of multiprocessing, the improvement ratio (multiprocessing time divided by sequential time) is around 0.65.

CPU puzzle


In order to understand the reason for this improvement, let us take a look at the processor architecture of my laptop. First, let us check the "Grand Promise" made by System Monitor.

[Screenshot: System Monitor showing four CPUs]
Based on this view, I actually expected to have four CPUs for processing. I was then really surprised that adding a third and a fourth process did not decrease the running time any further than having just two processes. After some usual Stack Overflow gymnastics, I finally got the definition for calculating the real number of CPUs available in my laptop.

[Screenshot: CPU information showing "Core(s) per socket" and "Socket(s)"]
The number of CPUs available is "Core(s) per socket * Socket(s)", which is 2 * 1 in my laptop. So, all in all, I have only two physical CPUs available for processing, not four as was shown in System Monitor (which evidently counts logical, hyper-threaded CPUs). This means that with more than two physical CPUs available in a laptop, one should expect an even better improvement ratio than reported in my statistics here.
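
The same numbers can also be checked programmatically; here is a small sketch (psutil is an extra dependency, not used in the actual program):

# logical vs. physical CPU counts
import os
import psutil
print(os.cpu_count())                   # logical CPUs, e.g. 4 with hyper-threading
print(psutil.cpu_count(logical=False))  # physical cores, e.g. 2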

Wrapper method


In order to avoid code duplication and to come up with something a bit more generic, I started to dream about the possibility of creating a mechanism for applying multiprocessing to any given method, if so desired. Such a solution is possible by using Python lambdas.

import multiprocessing as mp

# method for executing given lambdas in parallel
# note: the nested worker requires a fork-based process start (the Linux default)
def MultiprocessingWrapper(targetFunctionList):
    processList = []
    aggregatedResults = []
    queue = mp.Manager().Queue()

    # execute lambda from a given list based on a given index
    # storing results into queue
    def Worker(index):
        result = targetFunctionList[index]()
        queue.put(result)
    
    # start processes, call worker method with index number
    for i in range(len(targetFunctionList)):
        process = mp.Process(target = Worker, args = (i,))
        processList.append(process)
        process.start()
    
    # join processes, extract queue into results list
    for process in processList:
        process.join()
        aggregatedResults.append(queue.get())
        
    # return list of results for a client
    return aggregatedResults

The previous method receives a list of lambdas as its argument. The idea is that there is always one lambda for each process to be started. Why? We might face a situation in which a different set of parameters would be required for each lambda (say, all lambdas would use the same uniform random generator, but with different values for seeding the generator). In the wrapper method, a process is created to start each configured lambda. The results calculated by a given lambda are stored into a queue. Finally, all results are extracted from the queue into a result list and returned to the client.

As a concrete example, the following program segment first creates two lambdas for starting the GeneratePaths method, but with a different seed value for each process. After this, the wrapper method is fed with the list of lambdas and the processed paths are received into a list of results.

    # task of generating paths is highly time critical
    # use multiprocessing for path generation
    # create lambdas for multiprocessing wrapper
    # target signature: def GeneratePaths(seed, process, timeGrid, n)
    nPaths = 10000
    nProcesses = 2
    seeds = [1834, 66023]
    nPathsPerProcess = int(nPaths / nProcesses)
    target_1 = lambda:GeneratePaths(seeds[0], HW1F, grid.GetTimeGrid(), nPathsPerProcess)
    target_2 = lambda:GeneratePaths(seeds[1], HW1F, grid.GetTimeGrid(), nPathsPerProcess)
    targetFunctionList = [target_1, target_2]
    results = MultiprocessingWrapper(targetFunctionList)
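
Since each process returns its own array of paths, the per-process results can then be stacked back into a single path matrix (assuming numpy has been imported as np, as elsewhere in these programs):

    # stack the per-process path arrays back into one [nPaths, nSteps] matrix
    paths = np.vstack(results)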

The complete program can be found in my GitHub repository. Finally, a couple of discussion threads, which may be useful for understanding some QuantLib-related issues, are given here and here. Thanks for reading my blog and again, Merry Christmas to everyone.
-Mike
  

Monday, December 17, 2018

QuantLib-Python: Exposure Simulation

This Python program uses QuantLib library tools for simulating exposures for one selected Bloomberg vanilla benchmark swap transaction. Based on the simulated exposures, the program then calculates Expected Positive Exposure (EPE) and Expected Negative Exposure (ENE), as well as the corresponding CVA and DVA statistics.
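
As a rough sketch of the statistics involved (the variable names and numbers below are made up for illustration, not the program's actual identifiers), the exposure profiles and credit adjustments can be computed from a matrix of simulated mark-to-market values roughly as follows:

# a rough sketch of EPE/ENE and CVA/DVA calculations (made-up numbers)
import numpy as np
exposures = np.array([[ 500.0, 300.0, 100.0],
                      [-200.0, 150.0, -50.0]])   # simulated PVs [nPaths, nSteps]
df = np.array([0.99, 0.97, 0.95])                # discount factors per time step
pdCpty = np.array([0.004, 0.004, 0.004])         # marginal default probabilities
pdSelf = np.array([0.004, 0.004, 0.004])
recovery = 0.4
EPE = np.mean(np.maximum(exposures, 0.0), axis=0)
ENE = np.mean(np.minimum(exposures, 0.0), axis=0)
CVA = (1.0 - recovery) * np.sum(EPE * pdCpty * df)
DVA = (1.0 - recovery) * np.sum(-ENE * pdSelf * df)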

Then, some (unfortunate) limitations: the program can only handle one transaction at a time, so simulating exposures for netting sets containing several transactions is not possible. Also, the program can simulate only one risk factor at a time, so simulating exposures for transactions exposed to more than one risk factor is not possible. However, with some careful re-design, these properties could also be implemented by using QuantLib library tools.

The complete program can be found in my GitHub repository. Thanks for reading this blog. Merry Christmas to everyone.
-Mike


Simulated exposures

[Chart: simulated exposures]

Data

A few notes on data.
  • Swap transaction is a 5Y receiver swap vs. 3M USD Libor + spread. At inception, the swap PV has been solved to be zero. Details can be found in the screenshots listed below.
  • Interest rate data for spot term structure (discount factors) has been retrieved from Bloomberg Swap Manager as of 12.12.2018.
  • Default term structures for both parties (counterparty, self) are created from flat CDS term structures (100 bps), as seen on the Bloomberg Swap Manager CVA tab; a minimal construction sketch is shown after this list.
  • Short rate simulations are processed by using the Hull-White one-factor model, with parameters calibrated to a given set of flat 20% swaption volatilities, as seen on the Bloomberg Swap Manager CVA tab.
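
A minimal sketch for building such a flat default curve from a flat CDS spread, assuming the usual credit-triangle approximation (hazard ≈ spread / (1 - recovery)):

# a minimal sketch: flat default curve from a flat 100 bps CDS spread
import QuantLib as ql
spread = 0.01
recovery = 0.4
hazardRate = spread / (1.0 - recovery)
defaultCurve = ql.DefaultProbabilityTermStructureHandle(
    ql.FlatHazardRate(0, ql.TARGET(),
        ql.QuoteHandle(ql.SimpleQuote(hazardRate)), ql.Actual360()))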

Results

Bloomberg Swap Manager results: CVA = 6854 and DVA = 1557. Program results (for one run): CVA = 6727 and DVA = 1314, using weekly time steps and 1000 paths. However, "close enough" results can be achieved with a considerably smaller number of paths and a less dense time grid.

Screens

[Screenshots: Bloomberg swap transaction, Bloomberg CVA, Bloomberg DVA, Bloomberg EPE, Program EPE, Bloomberg ENE, Program ENE]