Synthesizing IRFs from mass, stiffness and damping

In this tutorial we will introduce a few different methods that can be used to compute the impulse response function (IRF) of a numerical system when the mass, stiffness and damping matrices are known. Specifically, we will look at modal synthesis (for proportionally damped systems only), numerical integration, and inverse Fourier transform methods.

A little bit of theory

So, what is an impulse response function (IRF) and why might we want it? As the name suggests, the IRF of a system is its response when excited by a (unit) impulse, i.e. an external force of very short duration. More specifically, the unit impulse response of a system is the response obtained by considering the external forcing \(F(t)\) to be that of a Dirac delta function, defined mathematically as \[ \begin{align} \delta(t-a) &= 0 \quad \quad \mbox{for } t\neq a\\ \int_{-\infty}^\infty \delta(t-a){\rm d}t &=1 \end{align} \tag{1}\] For all intents and purposes we can think of this \(\delta\) as a very sharp and sudden kick which disappears immediately after impact. The response of a system (say displacement, \({x}(t)\)) due to such excitation is labelled \({g}(t)\) and termed the IRF.

Now, why might we want the IRF? It can be shown (see our previous discussion of forced single DoF systems) that, for a linear and time invariant system, the response due to an arbitrary time varying force \(F(t)\) can be expressed in the form of a convolution integral involving the IRF, \[ x(t) = \int_0^t F(\tau) g(t-\tau) {\rm d}\tau \tag{2}\] In vibration analysis, Equation 2 is also called Duhamel’s integral; it states that the response of a system at time \(t\) is simply the superposition of all previously applied forces, weighted appropriately by the IRF. Key to its application is knowing the IRF of the system.
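As a quick illustration of Equation 2 in discrete form, the short sketch below convolves a placeholder force history with the analytical IRF of a hypothetical single-DoF oscillator; the variables here are illustrative only and are not part of the mass-spring example used in the rest of this tutorial.

Code: discrete Duhamel integral for a hypothetical single-DoF system (illustrative sketch)
import numpy as np

dt = 1e-3                                  # Time step (s)
t = np.arange(0, 5, dt)                    # Time array
ωn, ζ = 2*np.pi*5, 0.02                    # Natural frequency (rad/s) and damping ratio
ωd = ωn*np.sqrt(1 - ζ**2)                  # Damped natural frequency
g = np.exp(-ζ*ωn*t)*np.sin(ωd*t)/ωd        # Unit-mass SDOF impulse response

F = np.zeros_like(t)
F[(t > 1.0) & (t < 1.1)] = 1.0             # Arbitrary force: a 0.1 s rectangular pulse

x = dt*np.convolve(F, g)[:len(t)]          # Discrete form of Equation 2 (Duhamel's integral)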

In this tutorial we will consider general multi-DoF systems that are described by an equation of motion of the form, \[ \mathbf{M}\ddot{\mathbf{x}}(t) + \mathbf{C}\dot{\mathbf{x}}(t) + \mathbf{K}\mathbf{x}(t) = \mathbf{f}(t). \tag{3}\] where we will seek the solution \(\mathbf{x}(t)\rightarrow \mathbf{g}(t)\) for the case whereby \(\mathbf{f}(t) \rightarrow \delta_j(0)\), where \(\delta_j(0)\) is a delta function applied to the \(j\)th DoF only.

To illustrate the various methods for computing \(\mathbf{g}(t)\), we will use the same numerical mass-spring system as in our previous Synthesising FRFs tutorial, as illustrated in Figure 1.

Figure 1: Numerical mass-spring assembly used for FRF synthesis example.

We begin by re-defining the mass and stiffness matrices \(\mathbf{M}\) and \(\mathbf{K}\) for the system in Figure 1 and computing the system FRFs for inspection purposes.

Code
import numpy as np
np.set_printoptions(linewidth=250)
import warnings
warnings.filterwarnings('ignore')
import matplotlib.pyplot as plt
import scipy as sp
import sys as sys

f = np.linspace(0,100,1000) # Generate frequency array
ω = 2*np.pi*f # Convert to radians
# Define component masses and build matrices
m1, m2, m3, m4, m5 = 1, 2, 2, 1, 0.5
m6, m7, m8, m9, m10, m11 = 0.5, 2, 1, 3, 5,0.5
M = np.diag(np.array([m1,m2,m3+m6,m4+m7,m5+m8,m9,m10,m11])) # Coupled AB

# Define component stiffnesses and build stiffness matrix
η = 0.05 # Structural loss factor (not used below; proportional viscous damping is applied instead)
k1, k2, k3, k4, k5 = 1, 1.5, 0.7, 1, 0.5
k6, k7, k8, k9, k10, k11 = 0.5, 2, 1, 0.2, 1, 0.5
K = np.array([[k1+k2+k5, -k2,        0,     0,    -k5,     0,     0,          0],
                [-k2,       k2+k3+k4, -k3,   -k4,    0,      0,     0,          0],
                [0,        -k3,        k3+k6, 0,     0,     -k6,    0,          0],
                [0,        -k4,        0,     k4+k7, 0,      0,    -k7,         0],
                [-k5,       0,         0,     0,     k5+k8,  0,    -k8,         0],
                [0,         0,        -k6,    0,     0,      k6+k9, 0,         -k9],
                [0,         0,         0,    -k7,   -k8,     0,     k7+k8+k10, -k10],
                [0,         0,         0,     0,     0,     -k9,   -k10,        k9+k10+k11]])*1e5

C_prop = 0.0001*K + 1*M # Proportional damping matrix

# Direct solution: Y(ω) = [K + iωC - ω²M]⁻¹ evaluated at each frequency
Y_C_dir_prop = np.linalg.inv(K + 1j*np.einsum('i,jk->ijk', ω, C_prop) - np.einsum('i,jk->ijk', ω**2, M))

fig, ax = plt.subplots()
ax.semilogy(f, np.abs(Y_C_dir_prop[:,0,-1]),label='Direct')
ax.grid(True)
ax.set_xlabel('Frequency (Hz)')
ax.set_ylabel('Admittance (m/N)')
# plt.legend()
plt.show() 
Figure 2: Frequency Response Function (FRF) of the numerical mass-spring system considered in this tutorial.

Numerical integration

Whilst the modal synthesis approach is straightforward and easy to implement, it is quite limited. First of all, it requires a proportional damping model to ensure the underlying EoM can be diagonalised. Second, although we are mostly interested in linear structures, it would be nice to have the ability to deal with non-linearities if they arise; this is generally not possible, or at least very difficult, with the modal formulation. As such, we would like a more general method for computing the system impulse response, one that relies on fewer assumptions. This is achieved by working directly with the EoM in the physical domain and using numerical integration techniques.

Now, numerical integration is a massive topic with loads of different techniques available. In this tutorial we will limit ourselves to two particular approaches. The first is a direct second-order integration scheme, the Newmark-\(\beta\) method. The second is a first-order integration approach that we implement using established algorithms/functions. Both methods can be applied to systems with non-proportional damping and, with some minor modifications, to non-linear systems also.

Broadly speaking, we can categorise integration schemes by the type of equation they are applied to. Direct methods are applied directly to the governing ODE, for example our second order EoM in Equation 15. The Newmark-\(\beta\) method described below is an example of a direct method. There also exists a suite of first-order methods that are applicable to first-order ODEs; this includes the popular family of Runge-Kutta methods, among many others.

A further categorisation of integration methods is whether they are explicit or implicit. In short, explicit methods solve for the response at a future time step using the solution from the current and previous time steps only. Implicit methods instead determine the response at the future time step by solving an equation that also depends on the response at the future time step. Explicit methods are generally more computationally efficient than implicit methods, though have stricter requirements regarding their stability.
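To make the explicit/implicit distinction concrete, here is a minimal sketch (an illustrative aside, not one of the methods used later in this tutorial) applying forward (explicit) and backward (implicit) Euler to the scalar test equation \(\dot{y} = -\lambda y\). Note that the implicit update contains the unknown future value on both sides; for this linear scalar case it can be rearranged algebraically, whereas in general it requires solving a (possibly nonlinear) system at every step.

Code: explicit vs implicit Euler on a scalar ODE (illustrative sketch)
import numpy as np

λ, dt, N = 2.0, 0.1, 50                 # Decay rate, time step, number of steps
y_exp = np.zeros(N); y_imp = np.zeros(N)
y_exp[0] = y_imp[0] = 1.0               # Initial condition

for n in range(N-1):
    # Explicit (forward) Euler: y[n+1] computed from the current step only
    y_exp[n+1] = y_exp[n] + dt*(-λ*y_exp[n])
    # Implicit (backward) Euler: y[n+1] = y[n] + dt*(-λ*y[n+1]), rearranged for y[n+1]
    y_imp[n+1] = y_imp[n]/(1 + λ*dt)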

Initial conditions for IRF computation

The numerical integration schemes described below take as input an arbitrary discretized forcing signal \(\mathbf{f}_n\) and output the resulting system response. In order to obtain the system Impulse Response Function (IRF), it is necessary to force the system in a way that approximates the ideal Dirac function described above. This can be done in a few different ways, as detailed below.

Application of an initial force at \(n=0\)

To approximate a Dirac excitation we can set the initial displacement and velocity to zero, \[ \mathbf{x}_0 = \mathbf{0} \] \[ \dot{\mathbf{x}}_0 = \mathbf{0} \] and consider a force of amplitude \(\frac{2}{\Delta t}\) applied at timestep 0, \[ \mathbf{f}_0 = \frac{2}{\Delta t} \mathbf{1}_j \] The reason for the force amplitude of \(\frac{2}{\Delta t}\) is to ensure that when integrated over one whole time step, the resulting impulse has a value of 1 (as per the definition of the Dirac, see Equation 1). By setting the initial force \(\mathbf{f}_0\) to \(\frac{2}{\Delta t}\) and all remaining forces to 0, we approximate the Dirac \(\delta(t=0)\) as a right angle triangle with base \(\Delta t\) and height \(\frac{2}{\Delta t}\); the total area under this triangle is 1.

With the initial displacement and velocity set to zero, the EoM reduces to, \[ \mathbf{M}\ddot{\mathbf{x}}_0 = \frac{2}{\Delta t} \mathbf{1}_j \] which can be solved for the initial acceleration produced by the applied force \(\mathbf{f}_0\), \[ \ddot{\mathbf{x}}_0 = \frac{2}{\Delta t} \mathbf{M}^{-1} \mathbf{1}_j \tag{6}\] Equation 6 is the initial acceleration generated by the applied force (i.e. the approximation of \(\delta(0)\)), and can be used as an initial condition (instead of using \(\mathbf{f}_0\) directly) to compute the system IRF.
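As a minimal sketch (assuming the mass matrix M from above and an integration time step dt defined elsewhere in the tutorial), the initial acceleration of Equation 6 can be computed in a couple of lines; using np.linalg.solve avoids forming \(\mathbf{M}^{-1}\) explicitly.

Code: initial acceleration for the n=0 force approximation (illustrative sketch)
j = 0                                      # Excited DoF
e_j = np.zeros(M.shape[0]); e_j[j] = 1.0   # Unit vector 1_j
xdd0 = (2/dt)*np.linalg.solve(M, e_j)      # Initial acceleration, Equation 6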

Application of initial force at \(n=1\)

A similar idea to the previous approach is to apply an initial force, but at the second time step. In this case we set the initial displacement, velocity, and acceleration to zero, \[ \mathbf{x}_0 = \mathbf{0} \] \[ \dot{\mathbf{x}}_0 = \mathbf{0} \] \[ \ddot{\mathbf{x}}_0 = \mathbf{0} \] and define the force at \(n=1\) as, \[ \mathbf{f}_1 = \frac{1}{\Delta t}\mathbf{1}_j \] Note that the factor of two is no longer in the force amplitude. This is because our force signal is now approximating the Dirac as an isosceles triangle with a base length of \(2\Delta t\); the force is 0 at \(n=0\), rises linearly up to a value of \(\frac{1}{\Delta t}\) at time \(\Delta t\), and then back down to 0 at a time \(2\Delta t\). Again, the total area under this triangle is 1.

Also, unlike the previous case, with \(\mathbf{f}_1=\frac{1}{\Delta t}\mathbf{1}_j\) our Dirac is effectively being applied on the second time step; hence the IRFs obtained must all be shifted back by 1 sample to align correctly.

The main advantage of this particular approach is that we avoid having to compute the inverse of the mass matrix \(\mathbf{M}\), which for a large numerical model may carry a non-negligible computational expense.

Application of a step velocity at \(n=0\)

The final method of approximating a Dirac is by considering a step velocity at \(n=0\). To do so we first integrate our EoM over the infinitesimal interval \([-\delta t, \, +\delta t]\), \[ \int_{-\delta t}^{+\delta t} \left[\mathbf{M}\ddot{\mathbf{x}}(t) + \mathbf{C}\dot{\mathbf{x}}(t) + \mathbf{K}\mathbf{x}(t) \right] {\rm d}t = \int_{-\delta t}^{+\delta t} \mathbf{f}(t) {\rm d}t \] If we consider the force to be a Dirac \(\mathbf{f}(t) = \delta(0)\), the right hand side integrates to 1 for the DoF being excited, i.e. \(\mathbf{1}_j\). Integrating each term on the left hand side, \[ \mathbf{M}\int_{-\delta t}^{+\delta t}\ddot{\mathbf{x}}(t) {\rm d}t+ {\color{gray}\mathbf{C}\int_{-\delta t}^{+\delta t}\dot{\mathbf{x}}(t) {\rm d}t+ \mathbf{K}\int_{-\delta t}^{+\delta t}\mathbf{x}(t) {\rm d}t} = \mathbf{1}_j \] and noting that since our system is initially at rest (i.e. \(\mathbf{x}_0 = \mathbf{0}\)), the displacement and absement (integral of displacement) terms resulting from the damping and stiffness integrals (in gray) must be 0, we obtain, \[ \mathbf{M}(\dot{\mathbf{x}}(+\delta t)-\dot{\mathbf{x}}(-\delta t)) = \int_{-\delta t}^{+\delta t} \mathbf{f}(t) {\rm d}t \] Again, since our system is initially at rest, \(\dot{\mathbf{x}}(-\delta t)=0\). This leaves us with the initial velocity relation, \[ \mathbf{M}\dot{\mathbf{x}}_0 = \mathbf{1}_j \rightarrow \dot{\mathbf{x}}_0 = \mathbf{M}^{-1} \mathbf{1}_j \] Note that the initial velocity \(\dot{\mathbf{x}}_0\) is not enough to completely represent the Dirac forcing; we must also include the initial acceleration that results. Noting that \(\mathbf{x}_0 = \mathbf{0}\) and that the Dirac forcing is accounted for by \(\dot{\mathbf{x}}_0\), our initial EoM reduces to, \[ \mathbf{M}\ddot{\mathbf{x}}_0 + \mathbf{C}\dot{\mathbf{x}}_0 = \mathbf{0} \] from which we obtain the initial acceleration, \[ \ddot{\mathbf{x}}_0 = -\mathbf{M}^{-1}\mathbf{C}\dot{\mathbf{x}}_0 \] In summary, by using the initial conditions: \[ \mathbf{x}_0 = \mathbf{0} \] \[ \dot{\mathbf{x}}_0 = \mathbf{M}^{-1} \mathbf{1}_j \] \[ \ddot{\mathbf{x}}_0 = -\mathbf{M}^{-1}\mathbf{C}\dot{\mathbf{x}}_0 \] we can compute the IRF of a system through numerical integration.

Newmark-\(\beta\) method

From Equation 15, the dynamic equilibrium at timestep \(n+1\) can be written as, \[ \mathbf{M}\ddot{\mathbf{x}}_{n+1} + \mathbf{C}\dot{\mathbf{x}}_{n+1} + \mathbf{K}\mathbf{x}_{n+1} = \mathbf{f}_{n+1}. \tag{7}\]

To solve Equation 7, the Newmark-\(\beta\) method approximates the displacement and velocity at \(n+1\) by, \[ \mathbf{x}_{n+1} = \overbrace{\mathbf{x}_n + \Delta_t \dot{\mathbf{x}}_n + \Delta_t^2(0.5-\beta) \ddot{\mathbf{x}}_n}^{{\rm Predictor: \,} \mathbf{\tilde{x}}_{n+1} } + \overbrace{\Delta_t^2 \beta \ddot{\mathbf{x}}_{n+1}}^{{\rm Corrector}} \tag{8}\] and \[ \dot{\mathbf{x}}_{n+1} = \overbrace{\dot{\mathbf{x}}_{n} + \Delta_t (1-\gamma) \ddot{\mathbf{x}}_n}^{{\rm Predictor: \,} \dot{\mathbf{\tilde{x}}}_{n+1} } + \overbrace{\Delta_t \gamma \ddot{\mathbf{x}}_{n+1}}^{{\rm Corrector}} \tag{9}\] where the constants \(0\leq\gamma\leq 1\) and \(0\leq\beta\leq 1\) control the relative weighting of the \(n\) and \(n+1\) acceleration contributions to \(\mathbf{x}_{n+1}\) and \(\dot{\mathbf{x}}_{n+1}\), and also the stability properties of the scheme. A common choice that yields an implicit and unconditionally stable scheme is \(\gamma = 1/2\) and \(\beta = 1/4\) (termed average constant acceleration method and equivalent to the trapezoidal integration rule).

Note that at a given time step \(n\), the acceleration term \(\ddot{\mathbf{x}}_{n+1}\) is not available and so Equation 8 and Equation 9 cannot be computed directly. The idea behind the Newmark-\(\beta\) method is to approximate \(\mathbf{x}_{n+1}\) and \(\dot{\mathbf{x}}_{n+1}\) using the terms that are available – these are termed the predictors \(\mathbf{\tilde{x}}_{n+1}\) and \(\dot{\mathbf{\tilde{x}}}_{n+1}\) – and later correct for the missing terms when an estimate for the acceleration \(\ddot{\mathbf{x}}_{n+1}\) is available.

Substituting Equation 8 and Equation 9 into Equation 7 we obtain, \[ \mathbf{M}\ddot{\mathbf{x}}_{n+1} + \mathbf{C}\left[\dot{\mathbf{\tilde{x}}}_{n+1} + \Delta_t \gamma \ddot{\mathbf{x}}_{n+1} \right] + \mathbf{K}\left[\mathbf{\tilde{x}}_{n+1} + \Delta_t^2 \beta \ddot{\mathbf{x}}_{n+1} \right] = \mathbf{f}_{n+1} \tag{10}\] and moving all predictor terms to the right hand side we have, \[ \mathbf{M}\ddot{\mathbf{x}}_{n+1} + \mathbf{C} \Delta_t \gamma \ddot{\mathbf{x}}_{n+1} + \mathbf{K} \Delta_t^2 \beta \ddot{\mathbf{x}}_{n+1} = \mathbf{f}_{n+1} - \mathbf{C}\dot{\mathbf{\tilde{x}}}_{n+1} - \mathbf{K}\mathbf{\tilde{x}}_{n+1} \tag{11}\] Factoring out the acceleration term on the left hand side, \[ \overbrace{\left(\mathbf{M} + \mathbf{C} \Delta_t \gamma + \mathbf{K} \Delta_t^2 \beta \right)}^{\mathbf{A}} \ddot{\mathbf{x}}_{n+1} = \mathbf{f}_{n+1} - \mathbf{C}\dot{\mathbf{\tilde{x}}}_{n+1} - \mathbf{K}\mathbf{\tilde{x}}_{n+1} \tag{12}\] we can compute the inverse of the constant matrix \(\mathbf{A}\) and solve for \(\ddot{\mathbf{x}}_{n+1}\), \[ \ddot{\mathbf{x}}_{n+1} = \mathbf{A}^{-1}\left(\mathbf{f}_{n+1} - \mathbf{C}\dot{\mathbf{\tilde{x}}}_{n+1} - \mathbf{K}\mathbf{\tilde{x}}_{n+1}\right) \tag{13}\] After computing the acceleration \(\ddot{\mathbf{x}}_{n+1}\), we can correct our estimates of displacement and velocity, \[ \mathbf{x}_{n+1} = \mathbf{\tilde{x}}_{n+1} + \overbrace{\Delta_t^2 \beta \ddot{\mathbf{x}}_{n+1}}^{{\rm Corrector}} \] and \[ \dot{\mathbf{x}}_{n+1} = \dot{\mathbf{\tilde{x}}}_{n+1} + \overbrace{\Delta_t \gamma \ddot{\mathbf{x}}_{n+1}}^{{\rm Corrector}} \] We now have estimates for \(\mathbf{x}_{n+1}\), \(\dot{\mathbf{x}}_{n+1}\) and \(\ddot{\mathbf{x}}_{n+1}\) and can proceed to the next time step.

In the code snippet below we implement the Newmark-\(\beta\) method and apply it to our discrete mass-spring system to compute its IRF. To do this we use the step velocity initial condition \(\dot{\mathbf{x}}_0 = \mathbf{M}^{-1}\mathbf{1}_j\) described above. We see that the resulting IRF is in agreement with that of the modal synthesis before.

Code: compute the IRF using the Newmark-β method and compare against the modal solution
# Define Newmark functions:
def newmarkImp(M, C, K, L, dt, e, gamma=0.5, beta=0.25, type='initForce'):

    ndof = M.shape[0]  # Number of DOFs
    tSteps = int(L/dt)

    if type == 'initForce':  # Initial force applied by acceleration IC
        Pj = np.zeros((ndof,))
        Pj[e] = 1
        u = np.zeros((ndof, tSteps))
        ud = np.zeros((ndof, tSteps))
        udd = np.zeros((ndof, tSteps))
        udd[:, 0] = np.linalg.inv(M)@Pj*2/dt  # Initial acceleration IC (Equation 6)
        f = np.zeros((ndof, tSteps))
        f[e, 0] = 0  # Force array stays zero; the impulse is represented by the acceleration IC
    elif type == 'stepVel':  # Initial force applied by equivalent initial velocity IC
        Pj = np.zeros((ndof,))
        Pj[e] = 1
        u = np.zeros((ndof, tSteps))
        ud = np.zeros((ndof, tSteps))
        udd = np.zeros((ndof, tSteps))
        ud[:, 0] = np.linalg.inv(M)@Pj
        udd[:, 0] = -np.linalg.inv(M)@C@ud[:, 0]
        f = np.zeros((ndof, tSteps))
        f[e, 0] = 0
    elif type == 'secondForce':  # Dirac force at second time step
        Pj = np.zeros((ndof,))
        Pj[e] = 1
        u = np.zeros((ndof, tSteps))
        ud = np.zeros((ndof, tSteps))
        udd = np.zeros((ndof, tSteps))
        f = np.zeros((ndof, tSteps))
        f[e, 1] = 1/dt

    t = np.linspace(0, L-dt, int(L/dt))  # Time array
    A = np.linalg.inv(M+gamma*dt*C+beta*(dt**2)*K)  # Acceleration update matrix

    for tn in range(1, t.shape[0]):  # Start loop at second time step
        # Prediction step:
        # Displacement prediction based on previous u, ud and udd
        u[:, tn] = u[:, tn-1]+dt*ud[:, tn-1]+(0.5-beta)*dt**2*udd[:, tn-1]
        # Velocity prediction based on previous ud and udd
        ud[:, tn] = ud[:, tn-1]+(1-gamma)*dt*udd[:, tn-1]
        # Solve for acceleration using predicted u and ud (Equation 13)
        udd[:, tn] = A@(f[:, tn]-K@u[:, tn]-C@ud[:, tn])

        # Correction step:
        # Displacement update using predicted udd
        u[:, tn] = u[:, tn]+dt**2*beta*udd[:, tn]
        # Velocity update using predicted udd
        ud[:, tn] = ud[:, tn]+dt*gamma*udd[:, tn]
        
    return u, ud, udd

# %% Define system parameters and initial conditions
ndof = 8  # Number of DoFs
# Note: T, dt, t and the modal IRF x(t) are as defined in the modal synthesis section above

u_NM, ud_NM, udd_NM = newmarkImp(M, C_prop, K, T, dt, 0, gamma=0.5, beta=0.25,
                                 type='stepVel')


fig, ax = plt.subplots()
ax.plot(t, x[5,:],label='Modal')
ax.plot(t, u_NM[5,:],'--',label='Newmark')
ax.grid(True)
ax.set_xlabel('Time (s)')
ax.set_ylabel('Displacement (m)')
ax.set_xlim([0,5])
plt.legend()
plt.show() 
Figure 4: Impulse response of mass-spring system generated using second order Newmark integration. Excitation at mass 1, response at mass 5.

First order integration

There exists a wide range of numerical methods for solving ordinary differential equations (ODEs). Many of these are designed to solve general first-order Initial Value Problems (IVPs). These are problems that take the form, \[ \frac{dy}{dt} = f(y,t) \quad \quad \mbox{with} \quad \quad y(t_0) = y_0 \tag{14}\] where \(y(t)\) is the sought-after solution, \(f(y,t)\) is a function that describes its derivative with respect to \(t\) and \(y_0\) is the initial value of \(y\) at \(t=t_0\). The fact that we have limited ourselves to first-order ODEs here (meaning that no higher-order derivatives are present in Equation 14) at first sounds somewhat limiting. However, as we will see shortly, higher-order ODEs can be converted into a larger system of first-order equations by introducing extra variables, and so Equation 14 is broadly applicable to most IVPs.

The advantage of working with a first-order equation, as in Equation 14, is that we can avoid having to code our own time-stepping integration algorithm and instead use one of the many pre-existing functions/packages such as ode45 in MATLAB, or scipy.integrate.odeint in Python. It is beyond the scope of this tutorial to give a detailed description of the various methods available to solve Equation 14; this information is available through the relevant MATLAB/scipy documentation. In the numerical example below we will use the standard scipy.integrate.odeint function.

Our first job is to convert our second order ODE into a first order ODE. Recall our EoM for a general MDOF system, \[ \mathbf{M}\ddot{\mathbf{x}}(t) + \mathbf{C}\dot{\mathbf{x}}(t) + \mathbf{K}\mathbf{x}(t) = \mathbf{f}(t) \tag{15}\] To convert Equation 15 into a first order ODE we combine the displacement and velocity into a single solution vector, along with their respective derivatives, \[ \mathbf{y} = \left(\begin{array}{c} \mathbf{x} \\ \dot{\mathbf{x}} \end{array}\right) \quad \mbox{and} \quad \dot{\mathbf{y}} = \left(\begin{array}{c} \dot{\mathbf{x}} \\ \ddot{\mathbf{x}} \end{array}\right) \] Inspecting Equation 15, we can rewrite our ODE in the form, \[ \dot{\mathbf{y}} = \mathbf{A} \mathbf{y} + \mathbf{B}\mathbf{f} \quad \quad \left(\begin{array}{c} \dot{\mathbf{x}} \\ \ddot{\mathbf{x}} \end{array}\right) = \left[\begin{array}{cc}\mathbf{0} & \mathbf{I} \\ -\mathbf{M}^{-1}\mathbf{K} & -\mathbf{M}^{-1}\mathbf{C} \end{array}\right]\left(\begin{array}{c} \mathbf{x} \\ \dot{\mathbf{x}} \end{array}\right) + \left[\begin{array}{c} \mathbf{0} \\ \mathbf{M}^{-1} \end{array}\right]\mathbf{f} \tag{16}\] where it is noted that the top row of Equation 16 simply states that \(\dot{\mathbf{x}} = \dot{\mathbf{x}}\). Notice that we have doubled the number of unknowns in our solution (\(\mathbf{x}\in\mathbb{R}^N\) but \(\mathbf{y}\in\mathbb{R}^{2N}\)), but reduced the problem to first order in \(\mathbf{y}\). Equation 16 can now be dealt with using standard IVP solvers such as ode45 in MATLAB, or scipy.integrate.odeint in Python.

In the code snippet below we use the scipy.integrate.odeint function to solve our ODE. Note that to do this we must write a function that describes our first-order ODE, taking as inputs the solution vector \(\mathbf{y}\) and the current time step \(t_i\), as well as the system properties (\(\mathbf{M}\), \(\mathbf{C}\) and \(\mathbf{K}\)). To obtain the impulse response function as the solution, we set \(\mathbf{f} = 0\) and the initial velocity of the \(j\)th mass as \(\dot{x}_{0,j} = 1/M_{jj}\). As expected we get a near-identical IRF compared to the modal summation from before. Note, however, that the numerical IRF is only an approximation in the sense that numerical integration is an approximate method for computing the solution to an IVP, requiring a balance between accuracy and computational effort through the specification of error tolerances, etc.

Code: compute the IRF using scipy.integrate.odeint and compare against the modal and Newmark solutions
# Imports
from scipy.integrate import odeint

# Define ODE function
def mass_spring(y, ti, M, C, K, f, om=0):
    invM = np.linalg.inv(M)  # Inverse mass matrix (rebuilt at every call; could be precomputed for large systems)
    A = np.block([[np.zeros(M.shape), np.eye(M.shape[0])],
                  [-invM@K, -invM@C]])  # State matrix (Equation 16)

    b = np.vstack((np.zeros((M.shape[0], 1)),
                   invM.dot(f*np.cos(2*np.pi*om*ti))))  # Input term (zero force here)
    dydt = np.squeeze(A@y) + np.squeeze(b)  # Compute derivative
    return dydt


# Initial conditions (displacement, velocity):
u0 = np.zeros((ndof,))
ud0 = np.zeros((ndof,))
ud0[0] = 1/M[0, 0]  # Step velocity IC: excitation at DoF 0
y0 = np.concatenate((u0, ud0))
# Applied force:
f = np.zeros((ndof, 1))
om = np.zeros((ndof, 1))  # Excitation frequency (zero: free response from ICs)

# ODE integration using first-order representation
sol = odeint(mass_spring, y0, t, args=(M, C_prop, K, f, om), rtol=1e-5, atol=1e-5)
u_ODE = sol[:, 0:8].T # Displacement IRF
ud_ODE = sol[:, 8:].T # Velocity IRF

fig, ax = plt.subplots()
ax.plot(t, x[5,:],label='Modal')
ax.plot(t, u_NM[5,:],'--',label='Newmark')
ax.plot(t, u_ODE[5,:],'-.',label='ODE')
ax.grid(True)
ax.set_xlabel('Time (s)')
ax.set_ylabel('Displacement (m)')
ax.set_xlim([0,5])
plt.legend()
plt.show() 
Figure 5: Impulse response of mass-spring system generated using first order numerical integration. Excitation at mass 1, response at mass 5.

Inverse Fourier transform

Finally, we can compute the system impulse response directly from its Frequency Response Function (FRF) \(H(\omega)\) by using the inverse Fourier transform. To do so we simply need to reconstruct the complex double-sided spectrum from the single-sided spectrum that is \(H(\omega)\), \[ S(\vec{\omega}) = \left[\begin{array}{ccccc|cccc} H(0) & H(\omega_0) & H(\omega_1) & \cdots & H(\omega_N) & H^*(\omega_N) & H^*(\omega_{N-1}) & \cdots & H^*(\omega_0) \end{array}\right] \] With \(S(\vec{\omega})\) built, the IRF is then given by, \[ G(\,\vec{t}\,) = \frac{1}{\Delta t} \mbox{iFT}\left(S(\vec{\omega}) \right) \] where \(\Delta t\) is the time step (sampling interval) of the IRF and \(\mbox{iFT}(\square)\) is any standard implementation of the inverse Fourier transform (for example scipy.fft.ifft).

Note that there are some important relations between the time step \(\Delta t\), impulse length \(T\), frequency resolution \(\Delta f\) and maximum frequency \(f_N\) of the FRF that must be satisfied to obtain a correctly scaled IRF in both time and amplitude:

  • The time step \(\Delta t\) determines the so-called Nyquist frequency, which for our purposes corresponds to the maximum frequency of the FRF, \(f_{N} = 1/(2 \Delta t)\).
  • The total number of samples \(N_{samp}\) is related to the time step \(\Delta t\) and the impulse length \(T\) by \(N_{samp} = T/\Delta t\).
  • The frequency resolution \(\Delta f\) is the reciprocal of the impulse length, \(\Delta f = 1/T\), or equivalently \(\Delta f = f_{N}/(N_{samp}/2)\).

In the code snippet below we synthesise the FRF of the assembly with an appropriate \(\Delta f\) and \(f_N\) and compute the IRF via the inverse FFT. We see that the IRF agrees with all previous methods.

Code: compute the IRF using inverse Fourier transform of the Frequency Response Function
T = 10 # Total impulse time 
nsamp = 10000 # Number of samples
Δt = T/nsamp # Time step (sampling interval)
f_nq = 1/Δt/2 # Nyquist limit (max FRF frequency)
Δf = 1/T # Frequency resolution

f = np.linspace(0,f_nq,int(nsamp/2)) # Generate frequency array
ω = 2*np.pi*f # Convert to radians
# Direct solution: Y(ω) = [K + iωC - ω²M]⁻¹ evaluated at each frequency
Y_C_dir_prop = np.linalg.inv(K + 1j*np.einsum('i,jk->ijk', ω, C_prop) - np.einsum('i,jk->ijk', ω**2, M))

doubleSidedH = np.hstack((Y_C_dir_prop[:,0,5], np.flip(Y_C_dir_prop[1:,0,5].conj()))) # Build double sided spectrum

u_FFT = 1/Δt * np.real(sp.fft.ifft(doubleSidedH))

fig, ax = plt.subplots()
ax.plot(t, x[5,:],label='Modal')
ax.plot(t, u_NM[5,:],'--',label='Newmark')
ax.plot(t, u_ODE[5,:],'-.',label='ODE')
ax.plot(t[0:-1], u_FFT[:],':',label='iFFT')
ax.grid(True)
ax.set_xlabel('Time (s)')
ax.set_ylabel('Displacement (m)')
ax.set_xlim([0,5])
plt.legend()
plt.show() 
Figure 6: Impulse response of mass-spring system generated using inverse Fourier transform of the system FRF. Excitation at mass 1, response at mass 5.

Summary

In this tutorial we have introduced the Impulse Response Function (IRF) as a time domain representation of a system’s dynamics. Knowing the IRF of a system allows us to predict its response to an arbitrary time-varying input force. It is a very useful tool in structural dynamics and forms the basis of Impulse-based Substructuring (IBS, a substructuring technique formulated in the time domain).

We have covered three different approaches for computing IRFs: modal synthesis (only applicable to proportionally damped systems), numerical integration (both the direct Newmark-β method and first-order state-space methods) and the inverse Fourier transform of the system FRF. All methods were demonstrated on a simple discrete mass-spring system and shown to return the same IRF.