Taylor Series Expansions: A Guide to Function Approximation

Understanding Taylor Series Expansions

Navigating through the complexities of functions can be challenging, particularly when they feature multiple variables or intricate behavior. A useful method to simplify the analysis is through polynomial approximation, which is exactly where Taylor Series Expansions come into play.

A Taylor series expansion approximates a function as a polynomial centered around a specific point. The concept is straightforward: it transforms the function into an infinite sum of terms. Each term consists of a power of (x minus a fixed point) multiplied by a coefficient, which is determined by the function's derivatives at that point.

Mathematically, the Taylor series expansion for a function ( f(x) ) around the point ( a ) is expressed as follows:

( f(x) = f(a) + f'(a)(x - a) + f''(a)(x - a)^2 / 2! + f'''(a)(x - a)^3 / 3! + … )

Where ( f'(a) ), ( f''(a) ), and ( f'''(a) ) represent the first, second, and third derivatives of ( f(x) ) evaluated at ( a ). The factorial terms in the denominator ensure that the coefficients are appropriately scaled.

The general formula for Taylor series expansions indicates that the function's value at any point ( x ) can be approximated by taking the function's value at ( a ) and adding terms that involve powers of ( (x - a) ) multiplied by the derivatives evaluated at ( a ). The accuracy of this approximation improves as more terms are included.
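
To make that improvement concrete, here is a minimal sketch using sympy's series utility; the choice of sin(x), the expansion point 0, and the evaluation point x = 1 are illustrative assumptions, not part of the discussion above:

import sympy as sp

# Truncate the Taylor series of sin(x) around 0 at increasing orders
# and watch the error at x = 1 shrink as more terms are kept.
x = sp.Symbol('x')
f = sp.sin(x)
exact = float(sp.sin(1))
for order in (2, 4, 6, 8):
    # series(..., 0, order) returns the expansion plus an O() term;
    # removeO() keeps only the polynomial part.
    poly = sp.series(f, x, 0, order).removeO()
    approx = float(poly.subs(x, 1))
    print(f"order < {order}: approx = {approx:.6f}, error = {abs(exact - approx):.2e}")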

We can perform a Taylor series expansion as long as the function has derivatives of the required orders; however, our primary focus here will be on first-order and second-order expansions, which need only the first two derivatives.

First-Order Taylor Expansion: Linear Approximation

Univariate Functions

The first-order Taylor series expansion for a function ( f(x) ) at the point ( x = a ) is defined as:

( f(x) ≈ f(a) + f'(a)(x - a) )
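
As a brief illustration of this tangent-line approximation (the function sin(x), the point a = 0, and the test value 0.2 are arbitrary choices for this sketch):

import sympy as sp

# Linear (first-order Taylor) approximation of sin(x) around a = 0.
x = sp.Symbol('x')
f = sp.sin(x)
a = 0
linear = f.subs(x, a) + sp.diff(f, x).subs(x, a) * (x - a)
print(linear)                       # x
print(float(f.subs(x, 0.2)))        # 0.19866...
print(float(linear.subs(x, 0.2)))   # 0.2 -- close near a, diverges farther away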

Multivariate Functions

In an n-dimensional space, the gradient of a function consists of its partial derivatives with respect to each variable. The first-order Taylor series expansion in this context is referred to as the linear approximation or tangent plane. The first-order Taylor series expansion for a function ( f(x_1, x_2, …, x_n) ) at the point ( a = (a_1, a_2, …, a_n) ) can be expressed as:

( f(x) ≈ f(a) + ∇f(a)^T (x - a) )

where ( ∇f(a) ) is the gradient of ( f ) evaluated at ( a ).
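
A minimal sketch of such a tangent plane, assuming the illustrative function f(x, y) = x^2 + y^2 and expansion point (1, 1):

import sympy as sp

# Tangent-plane (first-order Taylor) approximation of a two-variable function.
x, y = sp.symbols('x y')
f = x**2 + y**2
a = (1, 1)
grad = [sp.diff(f, v) for v in (x, y)]
plane = f.subs([(x, a[0]), (y, a[1])]) \
    + grad[0].subs([(x, a[0]), (y, a[1])]) * (x - a[0]) \
    + grad[1].subs([(x, a[0]), (y, a[1])]) * (y - a[1])
print(sp.expand(plane))  # 2*x + 2*y - 2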

Second-Order Taylor Expansion: Quadratic Approximation

Univariate Functions

The second-order Taylor series expansion offers a more precise approximation by incorporating the second-order derivative. For a function ( f(x) ), the second-order Taylor series expansion is given by:

( f(x) ≈ f(a) + f'(a)(x - a) + (1/2) f''(a)(x - a)^2 )
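
To see the gain in precision from the quadratic term, here is a small sketch; exp(x), the point a = 0, and the test value 0.5 are illustrative assumptions:

import sympy as sp

# First- vs second-order Taylor approximations of exp(x) at a = 0.
x = sp.Symbol('x')
f = sp.exp(x)
linear = 1 + x               # f(0) + f'(0)*x
quadratic = 1 + x + x**2/2   # adds (1/2)*f''(0)*x**2
pt = 0.5
print(float(f.subs(x, pt)))          # 1.6487... (exact)
print(float(linear.subs(x, pt)))     # 1.5
print(float(quadratic.subs(x, pt)))  # 1.625 -- noticeably closer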

Multivariate Functions

In a similar manner, the second-order Taylor series expansion in n-dimensional space can be represented as:

( f(x) ≈ f(a) + ∇f(a)^T (x - a) + (1/2)(x - a)^T Hf(a) (x - a) )

where ( Hf(a) ) is the Hessian matrix of second partial derivatives evaluated at ( a ).

Application of Taylor Series Expansions in Minimization

Univariate Functions

Taylor series expansions can facilitate the process of locating a minimum by approximating the function as a quadratic function using its second-order Taylor expansion.

Consider the aim of finding the minimum of ( f(x) ) near the point ( x = a ). The second-order Taylor series expansion of ( f(x) ) around ( x = a ) can be expressed as:

( f(x) ≈ f(a) + f'(a)(x - a) + (1/2) f''(a)(x - a)^2 )

To find the minimum of this approximation, we differentiate it with respect to ( x ) and set the result to zero:

( f'(a) + f''(a)(x - a) = 0 )

Solving for ( x ), we arrive at:

( x = a - f'(a) / f''(a) )

This provides an estimate of the local minimum of ( f(x) ) near ( x = a ), which can then serve as a starting point for iterative numerical methods to enhance the minimum estimate.
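
Here is a minimal sketch of that iterative refinement: a basic Newton iteration that applies the update above repeatedly. The cubic and the starting point are taken from the practical example later in the article:

import sympy as sp

# Newton's method for minimization: repeatedly apply x <- x - f'(x)/f''(x).
x = sp.Symbol('x')
f = x**3 - 2*x**2 + x + 1
df, d2f = sp.diff(f, x), sp.diff(f, x, 2)

xk = 1.5
for _ in range(5):
    xk = float(xk - df.subs(x, xk) / d2f.subs(x, xk))
    print(xk)
# Converges to x = 1, where f''(1) = 2 > 0, confirming a local minimum.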

It is crucial to emphasize that this approximation holds only if the second derivative of ( f(x) ) evaluated at ( x = a ) is positive, indicating a minimum.

Multivariate Functions

A similar strategy can be applied to minimize a multivariate function using its second-order Taylor series expansion. If we have a multivariate function ( f(x_1, x_2, …, x_n) ) that we wish to minimize near the point ( a = (a_1, a_2, …, a_n) ), the second-order expansion around ( a ) is given by:

( f(x) ≈ f(a) + ∇f(a)^T (x - a) + (1/2)(x - a)^T Hf(a) (x - a) )

To determine the minimum of this approximation, we compute its gradient with respect to ( x = (x_1, x_2, …, x_n) ) and set it to zero:

( ∇f(a) + Hf(a)(x - a) = 0 )

Solving for ( x ) yields:

( x = a - Hf(a)^(-1) ∇f(a) )

Here, ( Hf(a) ) denotes the Hessian matrix of ( f(x) ) evaluated at ( x = a ). This symmetric matrix consists of the second partial derivatives of ( f(x) ) with respect to ( x_i ) and ( x_j ), and it captures the curvature of the function near the point ( a ). The resulting point is again an estimate of the minimum location, which can be refined through iterative numerical methods.

Again, it is vital to note that this approximation is valid only if ( Hf(a) ) is positive definite.
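
As a brief sketch of this update with the positive-definiteness check made explicit (the function and expansion point follow the multivariate example below; checking eigenvalues is one common way to test definiteness):

import numpy as np

# One Newton step for f(x, y) = x**2 + y**2 + x*y + x + y at a = (1, -1).
def grad(p):
    px, py = p
    return np.array([2*px + py + 1, 2*py + px + 1])

def hessian(p):
    return np.array([[2.0, 1.0], [1.0, 2.0]])  # constant for a quadratic

a = np.array([1.0, -1.0])
H = hessian(a)

# The update is only trustworthy if Hf(a) is positive definite.
if np.all(np.linalg.eigvalsh(H) > 0):
    step = np.linalg.solve(H, grad(a))  # solve H d = grad instead of inverting H
    print(a - step)  # [-0.33333333 -0.33333333] -- exact here, since f is quadratic
else:
    print("Hessian is not positive definite at a; no minimum guaranteed.")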

Practical Examples Using Python Code

Univariate Function Example

import sympy as sp

# Define the function f(x)
x = sp.Symbol('x')
f = x**3 - 2*x**2 + x + 1

# Calculate the first and second derivatives of f(x)
df = sp.diff(f, x)
d2f = sp.diff(df, x)

# Evaluate the function and its derivatives at x = 1.5
a = 1.5
fa = f.subs(x, a).evalf()
dfa = df.subs(x, a).evalf()
print('First order derivative: ', df, 'Evaluation at a: ', dfa)
d2fa = d2f.subs(x, a).evalf()
print('Second order derivative: ', d2f, 'Evaluation at a: ', d2fa)

# Calculate the second-order Taylor approximation of f(x) at x = 1.5
taylor_approx = fa + dfa * (x - a) + 0.5 * d2fa * (x - a)**2
print('Taylor Series expansion: ', taylor_approx)

# Differentiate the approximation and set the derivative equal to zero
taylor_approx_deriv = sp.diff(taylor_approx, x)
print('Taylor Series expansion derivative: ', taylor_approx_deriv)
taylor_minima = sp.solve(taylor_approx_deriv, x)
print('Taylor_minima: ', taylor_minima)

# Check whether each solution is a local minimum of f(x)
for x_min in taylor_minima:
    if d2f.subs(x, x_min).evalf() > 0:
        print(f"x = {x_min} is a local minimum of f(x)")
    else:
        print(f"x = {x_min} is not a local minimum of f(x)")

Multivariate Function Example

import numpy as np
import sympy as sp

# Define the function f(x, y)
x, y = sp.symbols('x y')
f = x**2 + y**2 + x*y + x + y

# Calculate the gradient and Hessian of f(x, y)
dfdx = sp.diff(f, x)
dfdy = sp.diff(f, y)
d2fdxdx = sp.diff(dfdx, x)
d2fdydy = sp.diff(dfdy, y)
d2fdxdy = sp.diff(dfdx, y)

# Define the expansion point (a, b) = (1, -1)
a_val, b_val = 1, -1

# Evaluate the gradient and Hessian at (a, b) = (1, -1)
grad = np.array([dfdx, dfdy])
grad_val = np.array([grad[0].subs([(x, a_val), (y, b_val)]),
                     grad[1].subs([(x, a_val), (y, b_val)])])
hess = np.array([[d2fdxdx, d2fdxdy], [d2fdxdy, d2fdydy]])
hess_val = np.array([[hess[0, 0].subs([(x, a_val), (y, b_val)]), hess[0, 1].subs([(x, a_val), (y, b_val)])],
                     [hess[1, 0].subs([(x, a_val), (y, b_val)]), hess[1, 1].subs([(x, a_val), (y, b_val)])]])

# Build the second-order Taylor series expansion of f(x, y) around (a, b)
p = np.array([x - a_val, y - b_val])
taylor = f.subs([(x, a_val), (y, b_val)]) + grad_val.dot(p) + 0.5 * p.dot(hess_val).dot(p)

# Find the critical points by solving the system of equations grad(f) = 0
critical_points = sp.solve([dfdx, dfdy], [x, y])

# Print the results
print("The second-order Taylor series expansion of f(x, y) around (a, b) = (1, -1) is:")
print(taylor)
print("\nThe critical points of f(x, y) are:")
print(critical_points)

Conclusion

Thank you for engaging with this content! If you have any suggestions for additional topics, feel free to share. Don’t forget to subscribe for updates on future publications.

If you found this article insightful, consider following my work for more updates. Alternatively, if you wish to delve deeper into the subject, check out my book "Data-Driven Decisions: A Practical Introduction to Machine Learning." It’s an affordable resource that can significantly benefit your understanding of machine learning!
