Understanding Stochastic Integration: A Comprehensive Guide
Stochastic Integration: An Overview
This article serves as a gentle introduction to stochastic integration with Python, building on the concepts discussed in my previous piece about stochastic differentiation (read it [here](https://oscarnieves100.medium.com/stochastic-differentiation-5480d33ac8b8)). We will explore key aspects of stochastic integrals and their applications, particularly those involving the Wiener process, denoted as W(t).
Properties of Stochastic Integrals
To begin, let's consider a general stochastic integral relative to a Wiener process:
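$$ I(t) = \int_0^t X(s)\,dW(s) $$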
In this equation, X(s) represents a stochastic process, and integration occurs from time 0 to time t. While integrals can also be defined for other processes, such as Poisson processes, this discussion will primarily focus on Wiener process integrals for simplicity.
It's important to remember that the Wiener process W(t) scales as the square root of time t, which means that dW(s) scales as the square root of the infinitesimal time increment ds. This characteristic implies that traditional integration methods cannot be applied here, as these increments do not comply with standard calculus rules. Nonetheless, stochastic integrals maintain a linearity property similar to that of conventional integrals: for constants a and b,
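$$ \int_0^t \left[a X(s) + b Y(s)\right] dW(s) = a\int_0^t X(s)\,dW(s) + b\int_0^t Y(s)\,dW(s) $$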
One notable feature is the Itô isometry property, which allows the expected value of the square of a stochastic integral to be calculated by integrating the expected value of the squared process X²(t):
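$$ \mathbb{E}\left[\left(\int_0^t X(s)\,dW(s)\right)^2\right] = \int_0^t \mathbb{E}\left[X^2(s)\right] ds $$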
Similarly, for the product of two statistically independent integrals:
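$$ \mathbb{E}\left[\int_0^t X(s)\,dW(s)\int_0^t Y(s)\,dW(s)\right] = \int_0^t \mathbb{E}\left[X(s)\right]\mathbb{E}\left[Y(s)\right] ds $$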
Another significant quantity is the RMS integral:
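$$ \mathrm{RMS}[I] = \sqrt{\mathbb{E}\left[I^2(t)\right]} = \sqrt{\int_0^t \mathbb{E}\left[X^2(s)\right] ds} $$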
When the integrand is a deterministic function, say X(s) = f(s), the expected value of the integral is zero:
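$$ \mathbb{E}\left[\int_0^t f(s)\,dW(s)\right] = 0 $$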
This occurs because each dW(s) increment has a mean of zero, resulting in the integral being an infinite sum of increments with zero mean.
In terms of differentials, the following properties apply:
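$$ \left(dW\right)^2 = dt, \qquad dW\,dt = 0, \qquad \left(dt\right)^2 = 0 $$

The first of these rules is easy to check numerically: the summed squares of the increments of a simulated Wiener path over [0, t] should approximate t itself. Here is a minimal sketch (the parameter values below are arbitrary illustrations):

import numpy as np

# Quadratic variation check: the sum of (dW)^2 over [0, t] should be close to t
t = 3
N = 100000
dt = t / N
dW = np.sqrt(dt) * np.random.normal(size=N)  # Wiener increments
print("sum of (dW)^2 =", np.sum(dW**2), "vs t =", t)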
Numerical Evaluation of Stochastic Integrals
Stochastic integrals can be approximated as the limit of a Riemann sum, similar to ordinary integrals. Consequently, we can estimate the stochastic integral using discrete time steps of size Δt via the Euler–Maruyama scheme:
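$$ \int_0^t X(s)\,dW(s) \approx \sum_{i=0}^{N-1} X(t_i)\,\Delta W_i $$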
Here, the increments $\Delta W_i = W(t_{i+1}) - W(t_i)$ are independent, zero-mean normal random variables with variance $\Delta t$, so in practice we sample $\Delta W_i = \sqrt{\Delta t}\,Z_i$ with $Z_i \sim \mathcal{N}(0, 1)$.
Higher-order schemes exist, but their convergence behavior differs from that of deterministic integrals due to the randomness inherent in the integral, particularly when calculating statistical moments. For this article, we will concentrate on this straightforward approach.
Example 1:
To illustrate, consider the integral:
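$$ I = \int_0^t 3s^2\,dW(s) $$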
As the integrand is deterministic, the expected value is zero:
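$$ \mathbb{E}[I] = 0 $$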
Using the isometry property, we find:
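$$ \mathbb{E}\left[I^2\right] = \int_0^t \left(3s^2\right)^2 ds = \int_0^t 9s^4\,ds = \frac{9}{5}t^5 $$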
Thus, the variance is:
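$$ \mathrm{Var}[I] = \mathbb{E}\left[I^2\right] - \mathbb{E}[I]^2 = \frac{9}{5}t^5 $$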
Next, we will simulate this in Python by setting an arbitrary time t = 3:
import numpy as np
# Input parameters
N = 101                          # number of grid points
t = 3                            # final time
MC = 10000                       # number of Monte Carlo samples
tvector = np.linspace(0, t, N)
dt = tvector[1] - tvector[0]

# Analytic results
E = 0
Var = 9/5 * t**5

# Numeric results
N_array = np.random.normal(loc=0, scale=1, size=[MC, N])  # standard normal samples
integrand = 3 * tvector**2       # deterministic integrand 3s^2
# dW = sqrt(dt) * Z, so each sample of the integral is the Riemann
# sum of integrand * dW over the time grid (broadcast over the MC rows)
I = np.sqrt(dt) * np.sum(integrand * N_array, axis=1)
E_num = np.mean(I)
Var_num = np.mean(I**2) - E_num**2

# Compare results
print("Analytic values:")
print("E[I] =", E)
print("Var[I] =", Var)
print("--------------")
print("Numeric values:")
print("E[I] =", E_num)
print("Var[I] =", Var_num)
Running this code multiple times yields outputs similar to the following:
Analytic values:
E[I] = 0
Var[I] = 437.40000000000003
--------------
Numeric values:
E[I] = 0.14522140709630404
Var[I] = 442.5864059756218
As expected, the numeric values of E[I] fluctuate around zero, while the values of Var[I] oscillate around the exact value of 437.4. The size of these fluctuations depends on the number of Monte Carlo samples (MC) and the number of time steps (N), illustrating that stochastic integrals converge more slowly than ordinary integrals.
Example 2:
Next, we analyze the integral:
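$$ I = \int_0^t s\,W(s)\,dW(s) $$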
In this instance, the integrand is itself stochastic, so the deterministic-integrand argument above no longer tells us the mean directly. However, we can apply Itô's isometry to determine the expected value of I² and from it the integral's RMS value:
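$$ \mathbb{E}\left[I^2\right] = \int_0^t \mathbb{E}\left[s^2 W^2(s)\right] ds = \int_0^t s^2\,\mathbb{E}\left[W^2(s)\right] ds $$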
Using the property:
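$$ \mathbb{E}\left[W^2(s)\right] = s $$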
We obtain:
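$$ \mathbb{E}\left[I^2\right] = \int_0^t s^3\,ds = \frac{t^4}{4} $$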
Ultimately, this leads to:
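$$ \mathrm{RMS}[I] = \sqrt{\mathbb{E}\left[I^2\right]} = \frac{t^2}{2} $$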
We will validate this in Python using t = 3:
import numpy as np
# Input parameters
N = 101
t = 3
MC = 10000
tvector = np.linspace(0, t, N)
dt = tvector[1] - tvector[0]

# Analytic results
RMS = 1/2 * t**2

# Numeric results
dW = np.sqrt(dt) * np.random.normal(loc=0, scale=1, size=[MC, N])  # Wiener increments
W = np.cumsum(dW, axis=1) - dW       # W(s) at the left endpoint of each step
integrand = tvector * W              # s * W(s), broadcast over the MC samples
# The same increments that build W also drive the integral (Ito/left-point rule)
I = np.sum(integrand * dW, axis=1)
RMS_num = np.sqrt(np.mean(I**2))

# Compare results
print("RMS analytic =", RMS)
print("RMS numeric =", RMS_num)
When executing this program multiple times, we receive results like:
RMS analytic = 4.5
RMS numeric = 4.684039937790411
While the RMS numeric values display some variation, they consistently hover around 4.5.
Example 3:
Now, we will determine the RMS value of:
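$$ I = \int_0^t e^{\beta W(s)}\,dW(s) $$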
Utilizing Itô's isometry, we obtain:
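$$ \mathbb{E}\left[I^2\right] = \int_0^t \mathbb{E}\left[e^{2\beta W(s)}\right] ds $$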
We recognize that the integrand corresponds to the moment-generating function of W(s), for which we already established the answer in my prior article [here](https://oscarnieves100.medium.com/the-building-blocks-of-stochastic-calculus-part-i-d06c87916070):
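$$ \mathbb{E}\left[e^{\alpha W(s)}\right] = e^{\alpha^2 s/2} \quad\Longrightarrow\quad \mathbb{E}\left[e^{2\beta W(s)}\right] = e^{2\beta^2 s} $$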
Thus, we arrive at:
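$$ \mathbb{E}\left[I^2\right] = \int_0^t e^{2\beta^2 s}\,ds = \frac{e^{2\beta^2 t} - 1}{2\beta^2}, \qquad \mathrm{RMS}[I] = \frac{1}{\beta\sqrt{2}}\sqrt{e^{2\beta^2 t} - 1} $$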
The Python code for this example is as follows:
import numpy as np
# Input parameters
N = 101
beta = 0.55
t = 3
MC = 10000
tvector = np.linspace(0, t, N)
dt = tvector[1] - tvector[0]

# Analytic results
RMS = 1/beta/np.sqrt(2) * np.sqrt(np.exp(2 * beta**2 * t) - 1)

# Numeric results
dW = np.sqrt(dt) * np.random.normal(loc=0, scale=1, size=[MC, N])  # Wiener increments
W = np.cumsum(dW, axis=1) - dW       # W(s) at the left endpoint of each step
integrand = np.exp(beta * W)         # exp(beta * W(s))
# The same increments that build W also drive the integral (Ito/left-point rule)
I = np.sum(integrand * dW, axis=1)
RMS_num = np.sqrt(np.mean(I**2))

# Compare results
print("RMS analytic =", RMS)
print("RMS numeric =", RMS_num)
The outputs from this code will resemble:
RMS analytic = 2.9150723103019516
RMS numeric = 2.8871054741989113
Example 4:
Next, we will find the RMS value of:
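$$ I = \int_0^t \cos\left(W(s)\right) dW(s) $$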
Using Itô's isometry, we have:
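$$ \mathbb{E}\left[I^2\right] = \int_0^t \mathbb{E}\left[\cos^2\left(W(s)\right)\right] ds $$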
The squared cosine can be simplified through a trigonometric identity:
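$$ \cos^2(x) = \frac{1 + \cos(2x)}{2} $$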
This leads us to:
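$$ \mathbb{E}\left[I^2\right] = \int_0^t \frac{1 + \mathbb{E}\left[\cos\left(2W(s)\right)\right]}{2}\,ds $$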
Next, we can apply Euler’s formula to express the cosine function as a sum of exponentials and utilize the moment-generating function of W(s) to arrive at:
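$$ \mathbb{E}\left[\cos\left(2W(s)\right)\right] = \mathbb{E}\left[\frac{e^{2iW(s)} + e^{-2iW(s)}}{2}\right] = e^{-2s} $$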
Ultimately, this leads to:
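$$ \mathbb{E}\left[I^2\right] = \frac{t}{2} + \frac{1 - e^{-2t}}{4}, \qquad \mathrm{RMS}[I] = \sqrt{\frac{t}{2} + \frac{1 - e^{-2t}}{4}} $$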
The corresponding Python code is as follows:
import numpy as np
# Input parameters
N = 101
t = 5
MC = 10000
tvector = np.linspace(0, t, N)
dt = tvector[1] - tvector[0]

# Analytic results
RMS = np.sqrt(0.5 * t + 0.25 * (1 - np.exp(-2 * t)))

# Numeric results
dW = np.sqrt(dt) * np.random.normal(loc=0, scale=1, size=[MC, N])  # Wiener increments
W = np.cumsum(dW, axis=1) - dW       # W(s) at the left endpoint of each step
integrand = np.cos(W)                # cos(W(s))
# The same increments that build W also drive the integral (Ito/left-point rule)
I = np.sum(integrand * dW, axis=1)
RMS_num = np.sqrt(np.mean(I**2))

# Compare results
print("RMS analytic =", RMS)
print("RMS numeric =", RMS_num)
The output from this code will appear as follows:
RMS analytic = 1.6583089730257023
RMS numeric = 1.6678332264282443
Application: The Ornstein-Uhlenbeck Process
The Ornstein-Uhlenbeck process (OU process) is a widely utilized stochastic process in both finance and physics, characterizing systems that approach a steady state over time. It is defined by the stochastic differential equation:
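$$ dX(t) = \theta\left[\mu - X(t)\right] dt + \sigma\,dW(t) $$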
Here, θ is a positive parameter, μ represents the drift parameter, and σ indicates the volatility parameter. This SDE can be solved to yield:
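$$ X(t) = X(0)\,e^{-\theta t} + \mu\left(1 - e^{-\theta t}\right) + \sigma\int_0^t e^{-\theta(t - s)}\,dW(s) $$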
We can confirm this solution by differentiating X(t) according to Itô's formula. By leveraging the properties of stochastic integrals we have just discussed, we can easily derive the expected value and variance of the process:
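$$ \mathbb{E}\left[X(t)\right] = X(0)\,e^{-\theta t} + \mu\left(1 - e^{-\theta t}\right), \qquad \mathrm{Var}\left[X(t)\right] = \frac{\sigma^2}{2\theta}\left(1 - e^{-2\theta t}\right) $$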
Consequently, since the only random term is a stochastic integral with a deterministic integrand, X(t) is normally distributed (for a deterministic initial condition X(0)).
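To close, we can check these formulas numerically in the same style as the earlier examples. The sketch below simulates the OU process with the Euler–Maruyama scheme for arbitrary illustrative parameter values (theta, mu, sigma, and X0 below are my own choices, not values from the derivation above) and compares the sample mean and variance of X(t) at the final time against the analytic expressions:

import numpy as np

# Illustrative parameters (arbitrary choices for this sketch)
theta, mu, sigma = 1.2, 0.8, 0.5
X0 = 2.0
N = 1001
t = 3
MC = 10000
tvector = np.linspace(0, t, N)
dt = tvector[1] - tvector[0]

# Analytic mean and variance at the final time t
E = X0 * np.exp(-theta * t) + mu * (1 - np.exp(-theta * t))
Var = sigma**2 / (2 * theta) * (1 - np.exp(-2 * theta * t))

# Euler-Maruyama simulation of dX = theta*(mu - X)*dt + sigma*dW
X = np.full(MC, X0)
for _ in range(N - 1):
    dW = np.sqrt(dt) * np.random.normal(size=MC)
    X = X + theta * (mu - X) * dt + sigma * dW

# Compare results
print("E analytic =", E, " E numeric =", np.mean(X))
print("Var analytic =", Var, " Var numeric =", np.var(X))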