Discretization Techniques

In Digital Filter-Implementation we showed how digital filters are actually implemented using difference equations. In the blogs Filters in Control Systems and Control System & PID controller we presented a couple of basic filters and the PID controller in the continuous-time domain. Finally, in Digital Control Systems 1.01 and 1.02 we showed what comes along with digital systems.

In this article we show how to transform transfer functions defined in the continuous-time domain to the discrete-time domain; this process is called discretization. There are several techniques to accomplish this: i) modeling the sampled system or ii) using Z-transform mappings.

  1. Modeling the sampled system
    • Zero-order hold
    • First-order hold
    • Other higher order hold circuits
  2. Z-transform mappings
    • Forward Euler
    • Backward Euler
    • Trapezoid / Tustin
    • Tustin with pre-warp
    • Matched pole zero
    • Modified matched pole zero

In this article we only go into detail on the Z-transform mapping techniques.

Finite difference methods

The first three methods use finite differences to approximate the derivative and thereby obtain a mapping from the continuous (s = j\omega_c) to the discrete (z = e^{j\omega_dh}) time domain. Herein, the forward and backward Euler methods use the following approximations

    \[\dot{x}(kh) \approx \frac{x(kh + h) - x(kh)}{h} \rightarrow s = \frac{z - 1}{h}\]

    \[\dot{x}(kh) \approx \frac{x(kh) - x(kh - h)}{h} \rightarrow s = \frac{1 - z^{-1}}{h}\]

Whereas the trapezoid method, also referred to as Tustin’s method, uses the approximation

    \[\frac{\dot{x}(kh) + \dot{x}(kh + h)}{2} \approx \frac{x(kh + h) - x(kh)}{h} \rightarrow s = \frac{2}{h}\frac{z-1}{z+1}\]

Note that with forward Euler, stable poles in s may be mapped to unstable poles in z. For both backward Euler and Tustin’s method, stable poles in s imply stable poles in z.
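As a quick check of these stability properties, the sketch below (a minimal Python example; the pole value and sampling period are assumed values for illustration) maps a single stable continuous-time pole p through each of the three substitutions and inspects the magnitude of the resulting discrete-time pole:

```python
# Map a stable continuous-time pole p (Re(p) < 0) to a discrete-time pole
# under each rule, by solving the s-to-z relation for z at s = p.
h = 0.01    # sampling period [s] (assumed)
p = -300.0  # stable continuous-time pole (assumed)

z_forward  = 1 + p * h                          # s = (z - 1)/h
z_backward = 1 / (1 - p * h)                    # s = (1 - z^{-1})/h
z_tustin   = (1 + p * h / 2) / (1 - p * h / 2)  # s = (2/h)(z - 1)/(z + 1)

# Forward Euler maps this stable pole outside the unit circle (unstable),
# backward Euler and Tustin keep it inside.
print(abs(z_forward), abs(z_backward), abs(z_tustin))  # 2.0 0.25 0.2
```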

Tustin with pre-warping

When determining the discrete-time frequency response of the Tustin rule, i.e., evaluating H_d(z) at z = e^{j\omega_dh} for a continuous-time transfer function H_c(s), a relation between the continuous- and discrete-time domain is revealed: every frequency point in the continuous-time domain is mapped to the discrete-time domain via \omega_{c} = \frac{2}{h}\text{tan}\left(\omega_dh/2\right) \Leftrightarrow \omega_d = \frac{2}{h}\text{arctan}\left(\omega_ch/2\right).

    \begin{gather*}H_d(z) = H_c(s)|_{s = \frac{2}{h}\frac{z-1}{z+1}} \Leftrightarrow H_d(z) = H_c\left(\frac{2}{h}\frac{z-1}{z+1}\right) \\ \Leftrightarrow H_d(e^{j\omega_dh}) = H_c\left(\frac{2}{h}\frac{e^{j\omega_dh}-1}{e^{j\omega_dh}+1}\right) \Leftrightarrow H_d(e^{j\omega_dh}) = H_c\left(j\frac{2}{h}\text{tan}\left(\frac{\omega_dh}{2}\right)\right)\end{gather*}

The tangent introduces a non-linear mapping whose distortion is small when \omega_d h is small. We can compensate for this by setting \omega_c = \frac{2}{h}\text{tan}\left(\omega_dh / 2\right) for every frequency we have control over. That is to say, first replace the critical frequencies in the continuous-time transfer function via

    \begin{gather*}(s + \omega) \rightarrow (s + \hat{\omega}) \text{ with } \hat{\omega} = \frac{2}{h}\text{tan}\left(\omega h / 2\right) \\(s^2 + 2\beta\omega s + \omega^2) \rightarrow (s^2 + 2\beta\hat{\omega} s + \hat{\omega}^2) \text{ with } \hat{\omega} = \frac{2}{h}\text{tan}\left(\omega h / 2\right)\end{gather*}

Second, apply the normal Tustin transformation s = \frac{2}{h}\frac{z-1}{z+1}. Finally, scale the static gain of the discrete-time transfer function to match the gain of the continuous-time transfer function, i.e., solve H_c(j\omega_0) = k \cdot H_d\left(e^{j\omega_0 h}\right) with \omega_0 = 0 for k.
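To make the three steps concrete, below is a minimal Python sketch (the article itself uses Matlab) applying them to a hypothetical first-order low-pass filter H_c(s) = \omega_c/(s + \omega_c); the cutoff and sampling frequency are assumed values:

```python
import cmath, math

h  = 1 / 5000            # sampling period [s] (assumed)
wc = 2 * math.pi * 950   # critical frequency to preserve [rad/s] (assumed)

# Step 1: pre-warp the critical frequency
wc_hat = 2 / h * math.tan(wc * h / 2)
Hc = lambda s: wc_hat / (s + wc_hat)   # low-pass with pre-warped cutoff

# Step 2: apply the normal Tustin transformation
Hd = lambda z: Hc(2 / h * (z - 1) / (z + 1))

# Step 3: gain matching; for this low-pass H_d(1) = H_c(0) = 1 already, so k = 1.
# At the critical frequency the magnitude is exactly -3 dB, matching the
# original continuous-time filter wc/(s + wc) at its cutoff:
print(abs(Hd(cmath.exp(1j * wc * h))))  # ~0.7071
```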

Remark that most programs, such as Matlab, only offer to match a single critical frequency via their discretization (c2d) function. They apply the transformation

    \[s = \frac{\omega_0}{\text{tan}\left(\omega_0 h / 2\right)} \frac{z - 1}{z + 1}\]

In which \omega_0 is the frequency, chosen by the user, to be matched in both the continuous- and discrete-time domain.
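A quick sanity check of this transformation (Python sketch, assumed values): at z = e^{j\omega_0 h} the substitution reduces to exactly s = j\omega_0, since (z - 1)/(z + 1) = j\,\text{tan}(\omega_0 h/2) on the unit circle.

```python
import cmath, math

h  = 1 / 5000
w0 = 2 * math.pi * 1150   # user-chosen frequency to match (assumed)

# Tustin with pre-warping as applied at a single frequency w0
s_of_z = lambda z: w0 / math.tan(w0 * h / 2) * (z - 1) / (z + 1)

s0 = s_of_z(cmath.exp(1j * w0 * h))
print(abs(s0 - 1j * w0))  # ~0: the frequency point w0 is matched exactly
```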

Matched Pole Zero

This method maps the poles and zeros according to the relation z = e^{sh}. That is to say,

    \begin{gather*}(s + \omega) \rightarrow z - e^{-\omega h} \\(s + a)^2 + b^2 \rightarrow z^2 - 2\left(e^{-ah}\text{cos}\left( b h \right)\right)z + e^{-2ah}\end{gather*}

If the numerator (zeros) is of lower order than the denominator (poles), add powers of (z + 1) to the numerator, until numerator and denominator are of equal order. This causes an averaging of the current and past input values, as in Tustin’s method.

However, it may be desirable to have the output depend only on past values, for instance due to limited computing power. In that case, add only enough powers of (z + 1) such that the numerator is of lower order than the denominator by one. This method is then referred to as the modified matched pole zero.

As with the Tustin transform with pre-warping, scale the static gain of the discrete-time transfer function. Note that this method preserves stability and yields less high-frequency response error.
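As an illustration, the Python sketch below applies matched pole zero to a hypothetical first-order low-pass H_c(s) = a/(s + a): the pole maps to e^{-ah}, a factor (z + 1) is added since there is no finite zero, and the static gain is scaled to match H_c(0) = 1 (all numeric values are assumptions):

```python
import math

h = 1 / 5000
a = 2 * math.pi * 400   # continuous-time pole of H_c(s) = a/(s + a) (assumed)

zp = math.exp(-a * h)   # map the pole via z = e^{sh}
k  = (1 - zp) / 2       # scale so that H_d(1) = H_c(0) = 1
Hd = lambda z: k * (z + 1) / (z - zp)

print(Hd(1.0))  # 1.0: static gain matches the continuous-time filter
```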

Frequency response example

The figure below shows the frequency response of a continuous-time transfer function as well as the discretized transfer functions using the various methods discussed. The continuous-time transfer function is a PID controller with a second-order low-pass filter and two notches located at 950 and 1150 Hz. A sampling frequency of 5000 Hz was used.

The forward and backward Euler methods cannot track the controller correctly from around 300 Hz. Furthermore, they are unable to replicate the fast dynamical behavior of the two notches. As such, this is a clear example of why one should not use these methods. Tustin’s method shows the same dynamics as the continuous-time transfer function, but due to the distortion in the frequency domain the notches are not located at the correct frequency. Tustin’s method combined with pre-warping is able to track the notches but loses magnitude when reaching the Nyquist frequency. In addition there is a phase offset. The Matched Pole Zero method is able to match the magnitude response very well, but has a large difference in phase which, depending on the performance criteria, might become problematic.

The Matlab code below was used to generate the plots.

clear all;
close all;


% Init variables and define continuous-time system
s = sym('s');
z = sym('z');
fs = 5e3;
h  = 1/fs;

Kp    = 1;
fd    = 100;
fi    = 10;
flp   = 400;
wlp   = 2*pi*flp;
blp   = 1;
fp{1} = 950;
fz{1} = 950;
bp{1} = 0.1;
bz{1} = 0.01;
wp{1} = 2*pi*fp{1};
wz{1} = 2*pi*fz{1};
fp{2} = 1150;
fz{2} = 1150;
bp{2} = 0.1;
bz{2} = 0.005;
wp{2} = 2*pi*fp{2};
wz{2} = 2*pi*fz{2};
Gc.sym = (wp{1}^2)/(wz{1}^2)*(s^2 + 2*bz{1}*wz{1}*s + wz{1}^2) / ...
                             (s^2 + 2*bp{1}*wp{1}*s + wp{1}^2) * ...
         (wp{2}^2)/(wz{2}^2)*(s^2 + 2*bz{2}*wz{2}*s + wz{2}^2) / ...
                             (s^2 + 2*bp{2}*wp{2}*s + wp{2}^2) * ...                     
         (Kp*(1/(2*pi*fd)*s + 1 + (2*pi*fi)/s)*(2*pi*flp)^2 / ...
         (s^2 + 2*blp*(2*pi*flp)*s + (2*pi*flp)^2));

[Gc.num,Gc.den] = numden(Gc.sym);
Gc.tf = tf(sym2poly(Gc.num),sym2poly(Gc.den));

% Forward Euler
Gd1.sym = subs(Gc.sym,s,(z-1)/h);
[Gd1.num,Gd1.den] = numden(Gd1.sym);
Gd1.tf = tf(sym2poly(Gd1.num),sym2poly(Gd1.den),h);

% Backward Euler
Gd2.sym = subs(Gc.sym,s,(z-1)/(h*z));
[Gd2.num,Gd2.den] = numden(Gd2.sym);
Gd2.tf = tf(sym2poly(Gd2.num),sym2poly(Gd2.den),h);

% Tustin
Gd3.sym = subs(Gc.sym,s,2*(z-1)/(h*(z+1)));
[Gd3.num,Gd3.den] = numden(Gd3.sym);
Gd3.tf = tf(sym2poly(Gd3.num),sym2poly(Gd3.den),h);

% Tustin with pre-warping
wlp   = 2/h*tan(wlp*h/2);
flp   = wlp/(2*pi);
wp{1} = 2/h*tan(wp{1}*h/2);
wz{1} = 2/h*tan(wz{1}*h/2);
wp{2} = 2/h*tan(wp{2}*h/2);
wz{2} = 2/h*tan(wz{2}*h/2);
Gc2.sym = (wp{1}^2)/(wz{1}^2)*(s^2 + 2*bz{1}*wz{1}*s + wz{1}^2) / ...
                              (s^2 + 2*bp{1}*wp{1}*s + wp{1}^2) * ...
          (wp{2}^2)/(wz{2}^2)*(s^2 + 2*bz{2}*wz{2}*s + wz{2}^2) / ...
                              (s^2 + 2*bp{2}*wp{2}*s + wp{2}^2) * ...                     
          (Kp*(1/(2*pi*fd)*s + 1 + (2*pi*fi)/s)*(2*pi*flp)^2 / ...
          (s^2 + 2*1*(2*pi*flp )*s + (2*pi*flp)^2));   

[Gc2.num,Gc2.den] = numden(Gc2.sym);
Gc2.tf = tf(sym2poly(Gc2.num),sym2poly(Gc2.den));

Gd4.sym = subs(Gc2.sym,s,2*(z-1)/(h*(z+1)));
[Gd4.num,Gd4.den] = numden(Gd4.sym);
Gd4.tf = tf(sym2poly(Gd4.num),sym2poly(Gd4.den),h);

% Matched Pole Zero
Gd5.tf = c2d(Gc.tf,h,'matched');

% Frequency spacing
f = logspace(0,log10(5000),3000);
wc = 2*pi*f;
wd = wc(wc < 2*pi*2500);

% Plot
figure;
bode(Gc.tf,wc);
hold all;
bode(Gd1.tf,wd);
bode(Gd2.tf,wd);
bode(Gd3.tf,wd);
bode(Gd4.tf,wd);
bode(Gd5.tf,wd);
xlim([1 3500]);
grid on;
legend({'Continuous-time','Forward Euler','Backward Euler', ...
    'Tustin','Tustin with pre-warping','Matched Pole Zero'});

Control System & PID controller

Control systems, as taught at many universities around the world, regulate the behavior of a system using feedback. Systems may be mechanical, electrical, biological, economical or from any other domain. Feedback, as in many aspects of life, is crucial. A target value r for an essential variable of the system, the process value y, is set. The target value is also referred to as the setpoint or reference value, whereas the process value is referred to as the measurement value. The control system calculates the difference between the setpoint and the process value, called the error e of the control system.

From this point we will focus mainly on the structure, design and implementation of the linear feedback controller. The figure below shows the structure of the control system with a linear feedback controller. We neglect any possible source of disturbance.

The linear feedback controller is often the combination of a controller and multiple filters. The controller computes a control value u which is applied to the process and tries to regulate the error e to zero. A filter, often referred to as a compensator, tries to improve the characteristics and performance.

The most famous feedback controller is the Proportional–Integral–Derivative (PID) controller, defined by (1). Based on the error it calculates a correction to be applied to the process, composed of proportional, integral, and derivative terms. Heuristically, the terms of a PID controller can be interpreted as corresponding to time: the proportional term depends on the present error, the integral term on the accumulation of past errors, and the derivative term is a prediction of future error, based on the current rate of change.

(1)   \begin{equation*}  H(s) = K_d \cdot s + K_p + K_i \frac{1}{s} \end{equation*}

Herein, K_d = K_p / (2\pi f_d) and K_i = K_p 2 \pi f_i, for which f_d and f_i denote the frequencies at which the derivative and integral action start to become active. In addition, K_p denotes the proportional gain of the controller. An ideal derivative is not causal, as its output depends on future inputs, see (2). Therefore, the PID controller is often combined with either a first- or a second-order low-pass filter (3). This makes the PID controller causal, depending only on past and current inputs but not on future inputs.

(2)   \begin{equation*}  \frac{\mathrm{d}f(t)}{\mathrm{d}t} \triangleq \lim_{h \rightarrow 0} \frac{f(t + h) - f(t)}{h} \end{equation*}

(3)   \begin{equation*}  H(s) = \left(K_d \cdot s + K_p + K_i \frac{1}{s}\right) \frac{\omega_{\text{lp}}^2}{s^2 + 2\beta\omega_{\text{lp}}s + \omega_{\text{lp}}^2} = \frac{\left(K_d s^2 + K_p s + K_i\right) \omega_{\text{lp}}^2}{s^3 + 2\beta\omega_{\text{lp}}s^2 + \omega_{\text{lp}}^2s} \end{equation*}

Note that equation (3) results in a third-order denominator polynomial. With some algebra this can be split into a first-order and a second-order part. That is to say,

(4)   \begin{equation*} H(s) = \frac{K_i}{s} + \frac{\left(\omega^2_\text{lp} K_d - K_i\right)s + \left(\omega^2_\text{lp} K_p - 2\beta\omega_\text{lp} K_i\right)}{s^2 + 2 \beta \omega_\text{lp} s + \omega_\text{lp}^2}\end{equation*}

As such, the integral action is separated from the derivative and gain action combined with the second-order low-pass filter. With the integrator in parallel, it is possible to perform a saturation or anti-windup on the integrator signal.
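The equivalence of (3) and (4) is easy to verify numerically; the sketch below (Python, with assumed example values for the gains, the low-pass frequency and the damping) evaluates both forms at an arbitrary frequency:

```python
import math

Kp = 1.0
Kd = Kp / (2 * math.pi * 100)   # f_d = 100 Hz (assumed)
Ki = Kp * 2 * math.pi * 10      # f_i = 10 Hz (assumed)
wlp, beta = 2 * math.pi * 400, 1.0

def H3(s):  # PID with second-order low-pass, equation (3)
    return (Kd * s + Kp + Ki / s) * wlp**2 / (s**2 + 2 * beta * wlp * s + wlp**2)

def H4(s):  # decomposed form, equation (4)
    num = (wlp**2 * Kd - Ki) * s + (wlp**2 * Kp - 2 * beta * wlp * Ki)
    return Ki / s + num / (s**2 + 2 * beta * wlp * s + wlp**2)

s = 2j * math.pi * 123.0   # arbitrary test frequency
print(abs(H3(s) - H4(s)))  # ~0: both forms are identical
```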

Below you see an interactive bode diagram of a PID controller combined with a second order low-pass filter with some typical values. You clearly see the integrator action up to 10 [Hz], followed by the proportional gain action from 10 to 25 [Hz]. The derivative term is visible from 25 to 300 [Hz] and eventually the low-pass filter from 300 [Hz] and further.

Implementation of filters

The discrete-time transfer function H(z) is often obtained from its counterpart, the continuous-time transfer function H(s) via discretization. A discrete-time transfer function has the following form:

(1)   \begin{equation*}H(z) = \frac{Y(z)}{X(z)} = \frac{\sum_{i = 0}^{N} b_i z^i}{\sum_{j = 0}^{M} a_j z^j}\end{equation*}

Herein, X(z) is the input and Y(z) is the output of the system; N and M are the degrees of the numerator and denominator, respectively, with N \leq M, meaning we are dealing with a proper transfer function. While (1) is valid for any order, it is not recommended to use high-order transfer functions directly, since these can quickly introduce numerical problems. Rather, factorize the numerator and denominator of (1) into a cascade of first- and second-order polynomials.

    \begin{equation*}H(z) = K \frac{\displaystyle \prod_{i = 0}^V \left( z + b_{0i} \right)}{\displaystyle \prod_{i = 0} ^W\left( z + a_{0i} \right)} \frac{\displaystyle \prod _{i = 0}^N \left( z^2 + b_{1i} z + b_{2i} \right)}{\displaystyle\prod_{i = 0}^M \left( z^2 + a_{1i} z + a_{2i} \right)}\end{equation*}

Now let us look at the simple discrete-time transfer function of order two:

    \begin{equation*}H(z) = \frac{b_0 z^2 + b_1 z + b_2}{z^2 + a_1 z + a_2}\end{equation*}

This function is non-causal, because it depends on future inputs. Therefore, both the numerator and denominator are multiplied by the reciprocal of the highest power of z occurring in the denominator, in this case z^{-2}, to make the system causal. Hence, we obtain:

    \begin{equation*}H(z) = \frac{b_0 + b_1 z^{-1} + b_2 z^{-2}}{1 + a_1 z^{-1} + a_2 z^{-2}}\end{equation*}

Subsequently, using the linearity and time-shifting properties of the \mathcal{Z}-transform, i.e., \mathcal{Z}(a_1 x_1[n] + a_2 x_2[n]) = a_1 X_1(z) + a_2 X_2(z) and \mathcal{Z}(q^{-k}x[n]) = z^{-k}X(z), we obtain the difference equation. Remark that the shift operator q is defined by q^kx[n] = x[n+k], the forward shift operator, and q^{-k}x[n] = x[n-k], the backward shift (delay) operator. As a result we obtain

(2)   \begin{equation*}y[n] = \frac{b_0 + b_1 q^{-1} + b_2 q^{-2}}{1 + a_1 q^{-1} + a_2 q^{-2}} x[n]\end{equation*}

Rewriting (2) gives us:

    \begin{gather*}\left( 1 + a_1 q^{-1} + a_2 q^{-2} \right) y[n] = \left( b_0 + b_1 q^{-1} + b_2 q^{-2} \right) x[n] \\\Leftrightarrow y[n] + a_1 y[n-1] + a_2 y[n-2] = b_0 x[n] + b_1 x[n-1] + b_2 x[n-2] \\\Leftrightarrow y[n] = b_0 x[n] + b_1 x[n-1] + b_2 x[n-2] - a_1 y[n-1] - a_2 y[n-2]\end{gather*}

This last equation is the difference equation, which we can easily implement on our digital platform. Numerous methods exist on how to implement a filter. Four of these methods are closely related to each other. These are:

  • Direct form I
  • Direct form II
  • Transposed direct form I
  • Transposed direct form II

Direct form I

The direct form I is an FIR filter followed by an IIR filter. That is to say, it first implements the numerator (feedforward) part and then the denominator (feedback) part of H(z).

(3)   \begin{equation*}y[n] = b_0 x[n] + b_1 x[n-1] + b_2 x[n - 2] - a_1 y[n-1] - a_2 y[n-2]\end{equation*}

In an algorithm you can implement it as:

yn = b0 * xn + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
x2 = x1
x1 = xn
y2 = y1
y1 = yn

Where, x1, y1, x2 and y2 are the four state variables.

Direct form II

The direct form II is an IIR filter followed by an FIR filter, implementing the denominator (feedback) part first and then the numerator (feedforward) part.

Signal flow diagram of direct form II, where s[n] is a state variable.

(4)   \begin{align*}s[n] &= x[n] - a_1 s[n-1] - a_2 s[n-2] \\y[n] &= b_0 s[n] + b_1 s[n-1] + b_2 s[n-2]\end{align*}

In an algorithm you can implement it as:

s0 =      xn - a1 * s1 - a2 * s2
yn = b0 * s0 + b1 * s1 + b2 * s2
s2 = s1
s1 = s0

Where s1 and s2 are the state variables and s0 is an intermediate variable.

Direct form I transposed

Both direct forms can be converted to an equivalent transposed form via:

  • Reverse direction of each interconnection
  • Reverse direction of each multiplier
  • Change junctions to adders and vice-versa
  • Interchange the input and output signals
Signal flow diagram of direct form I transposed.

(5)   \begin{align*}y[n] &= b_0 v[n] + b_1 v[n-1] + b_2 v[n-2] \\v[n] &= x[n] - a_1 v[n-1] - a_2 v[n-2]\end{align*}

In an algorithm you can implement it as:

vn = xn + s2
yn = s4 + b0 * vn
s4 = s3 + b1 * vn
s3 = b2 * vn
s2 = s1 - a1 * vn
s1 = - a2 * vn

Note that this form is very inefficient. You can easily shift the delays to the center, obtaining the direct form II.

Direct form II transposed

Signal flow diagram of direct form II transposed.

(6)   \begin{align*}s_1[n] &= b_2 x[n-1] - a_2 y[n-1] \\s_2[n] &= b_1 x[n-1]  - a_1 y[n-1] + s_1[n-1] \\y[n] &= b_0 x[n] + s_2[n]\end{align*}

In an algorithm you can implement it as:

yn = s2 + b0 * xn
s2 = s1 + b1 * xn - a1 * yn
s1 = b2 * xn - a2 * yn
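All four forms compute the same difference equation, differing only in their internal states (and hence in round-off behavior and memory use). The Python sketch below, with assumed example biquad coefficients, checks that direct form I and direct form II transposed produce the same impulse response:

```python
b0, b1, b2 = 0.2, 0.3, 0.1   # assumed example coefficients
a1, a2     = -0.5, 0.25      # (stable denominator)

def df1(xs):                 # direct form I
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for xn in xs:
        yn = b0*xn + b1*x1 + b2*x2 - a1*y1 - a2*y2
        x2, x1 = x1, xn      # shift the input states
        y2, y1 = y1, yn      # shift the output states
        out.append(yn)
    return out

def df2t(xs):                # direct form II transposed
    s1 = s2 = 0.0
    out = []
    for xn in xs:
        yn = s2 + b0*xn
        s2 = s1 + b1*xn - a1*yn
        s1 = b2*xn - a2*yn
        out.append(yn)
    return out

x = [1.0] + [0.0] * 9        # unit impulse
print(max(abs(p - q) for p, q in zip(df1(x), df2t(x))))  # ~0
```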

Filters in Control Systems

In this blog I want to elaborate on the various types of filters that are being used in Control Systems. These filters are often used in combination with a PI, PD or PID controller to obtain a robust controller. The filters which we will discuss are:

  • Low-Pass filter
  • High-Pass filter
  • Lead-Lag filter
  • Notch filter

Using these four filters we can create other filter types, such as a band-stop or band-pass filter. The behavior of each filter can be captured by a continuous-time transfer function in the Laplace domain s of first and/or second order. We use the transfer function H(s) = Y(s) / U(s) to describe the filter, and as such the relation between the input U(s) and the output Y(s). Throughout this blog we will write the equations in terms of the angular frequency \omega in [rad/s], also known as the radian frequency; however, we will specify filters using their frequency f in [Hz]. Using the angular frequency notation results in a visually more compact formula. Additionally, it is possible to specify the filters in terms of their time constant \tau in [s]. The following relation holds between the angular frequency, frequency and time constant.

    \begin{equation*} \omega \triangleq 2\pi f \triangleq \frac{1}{\tau} \end{equation*}

Many forms are used within the literature; one book will use angular frequencies, another will use time constants. Finally, second-order filters, the only filters with possibly complex poles or zeros, can be written in various ways. We will specify second-order filters in terms of the damping \beta at the corresponding frequency. It is also possible to describe these formulas using the quality factor Q. Whereas \beta describes how oscillations decay in a system after a disturbance, Q describes how underdamped the system is. The following relation holds between \beta and Q

    \begin{equation*} \beta \triangleq \frac{1}{2Q} \end{equation*}

Low-pass filter

A low-pass filter is used to pass signals with a frequency lower than a certain cut-off frequency f_\text{lp}. Below, the formulas for both the first- and second-order low-pass filter are given.

    \begin{equation*} H(s) = K\cdot\frac{\omega_{\text{lp}}}{s + \omega_{\text{lp}}} \quad\quad H(s) = K\cdot\frac{\omega_{\text{lp}}^2}{s^2 + 2\beta\omega_{\text{lp}}s + \omega_{\text{lp}}^2} \end{equation*}

Herein, K denotes the gain, f_\text{lp} denotes the low-pass cut-off frequency and \beta denotes the damping. Whereas the first-order filter suppresses with 20 [dB/dec], the second-order filter suppresses with 40 [dB/dec]. At low frequencies the filter gain is K.
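The roll-off rates are easy to verify numerically; a small Python sketch with assumed values for K, f_\text{lp} and \beta:

```python
import math

K, wlp, beta = 1.0, 2 * math.pi * 400, 1.0   # assumed example values

H1 = lambda w: K * wlp / (1j * w + wlp)                                # first order
H2 = lambda w: K * wlp**2 / ((1j*w)**2 + 2*beta*wlp*(1j*w) + wlp**2)   # second order
dB = lambda x: 20 * math.log10(abs(x))

# Magnitude drop per decade, well above the cut-off frequency:
print(dB(H1(100 * wlp)) - dB(H1(10 * wlp)))  # ~ -20 dB/dec
print(dB(H2(100 * wlp)) - dB(H2(10 * wlp)))  # ~ -40 dB/dec
```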

High-pass filter

The complement of the low-pass filter is the high-pass filter. This filter is used to pass signals with a frequency higher than a certain cut-off frequency f_\text{hp}. Below, the formulas for both the first- and second-order high-pass filter are given.

    \begin{equation*} H(s) = K\cdot\frac{s}{s + \omega_{\text{hp}}} \quad\quad H(s) = K\cdot\frac{s^2}{s^2 + 2\beta\omega_{\text{hp}}s + \omega_{\text{hp}}^2} \end{equation*}

Herein, K denotes the gain, f_\text{hp} denotes the high-pass cut-off frequency and \beta denotes the damping. As with the low-pass filter, the first-order filter suppresses frequencies with 20 [dB/dec] and the second-order filter with 40 [dB/dec].

Lead-lag filter

A lead-lag filter, also known as a lead-lag compensator, is mainly used for phase compensation rather than magnitude shaping. Below the formula for a lead or lag filter is shown.

    \begin{equation*} H(s) = K\cdot\frac{\omega_\text{p}}{\omega_\text{z}}\cdot\frac{s + \omega_\text{z}}{s + \omega_\text{p}} \end{equation*}

Herein, f_\text{p} and f_\text{z} denote the frequency of the pole and zero, respectively. The filter functions as a lead filter if f_\text{p} > f_\text{z} and otherwise as a lag filter. The filter has its maximum or minimum phase at \sqrt{f_\text{p}f_\text{z}}. Finally, at f = \infty the filter has a gain of Kf_\text{p} / f_\text{z}, which is larger than K for a lead filter and smaller than K for a lag filter. Naturally, the filter can be cascaded with itself, by which the filter can be a lead and lag filter simultaneously.
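The sketch below (Python, assumed pole and zero frequencies) illustrates that a lead filter indeed has its maximum phase at the geometric mean \sqrt{f_\text{p}f_\text{z}}:

```python
import cmath, math

K = 1.0
fz, fp = 100.0, 400.0                 # f_p > f_z: a lead filter (assumed values)
wz, wp = 2 * math.pi * fz, 2 * math.pi * fp

H  = lambda f: K * (wp / wz) * (2j * math.pi * f + wz) / (2j * math.pi * f + wp)
ph = lambda f: math.degrees(cmath.phase(H(f)))

f_max = math.sqrt(fp * fz)            # 200 Hz: geometric mean of pole and zero
print(ph(f_max), ph(f_max / 2), ph(f_max * 2))  # phase peaks at f_max
```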

Notch filter

A notch filter is often used to filter undesired resonance peaks. Below the formula for a notch filter is shown.

    \begin{equation*} H(s) = K\cdot\frac{\omega_\text{p}}{\omega_\text{z}}\cdot\frac{s^2 + 2\beta_\text{z}\omega_\text{z}s + \omega_\text{z}^2}{s^2 + 2\beta_\text{p}\omega_\text{p}s + \omega_\text{p}^2} \end{equation*}

Herein, f_\text{p} and f_\text{z} denote the frequency of the pole and zero, respectively. Likewise, \beta_\text{p} and \beta_\text{z} denote the damping of the pole and zero, and K is as usual the gain. When f_\text{p} = f_\text{z} the notch will target one specific frequency. The gain at that frequency is given by K\beta_\text{z} / \beta_\text{p}. When f_\text{p} \neq f_\text{z} the notch filter is also referred to as a skewed notch, and the ratio between the gain at high and low frequencies is given by (f_\text{p} / f_\text{z})^2.
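For example (Python sketch, assumed damping values): at the notch frequency f_\text{p} = f_\text{z} = f_0 the gain reduces to K\beta_\text{z}/\beta_\text{p}, since the s^2 and \omega_0^2 terms cancel there.

```python
import math

K  = 1.0
f0 = 950.0                   # f_p = f_z = f_0 (assumed)
w0 = 2 * math.pi * f0
bz, bp = 0.01, 0.1           # zero and pole damping (assumed)

def H(f):
    s = 2j * math.pi * f
    return K * (s**2 + 2*bz*w0*s + w0**2) / (s**2 + 2*bp*w0*s + w0**2)

print(abs(H(f0)))  # ~0.1 = bz/bp, i.e. a -20 dB dip at the notch frequency
```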

Digital Control Systems 1.02

In the previous blog we introduced the various types of signals present in a digital control system and highlighted the pros and cons. In this blog we take a closer look at the digital control system.

A digital control system can only cope with digital signals, because it is inherently discrete in time and in value. This is caused by the processor's clock rate and by the finite word length of a digital system. For instance, a 32 [bit] computer which runs at 4 [GHz] has a time resolution of 0.25 [ns], and the value of an unsigned integer can only range from 0 to 2^32 − 1.
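The two numbers in this example follow directly from the clock rate and the word length (a trivial Python check):

```python
clock_hz  = 4e9      # 4 [GHz] processor clock
word_bits = 32       # word length

print(1 / clock_hz)       # 2.5e-10 s, i.e. 0.25 [ns] time resolution
print(2**word_bits - 1)   # 4294967295, the largest unsigned 32-bit integer
```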

Figure 1. Digital control system in its most elementary form. Continuous-time signals are in solid and discrete-time signals are dashed.

Figure 1 shows the digital control system in its most elementary form. Herein $\mathcal{S}$ is the sampler. The sampler converts the analog output signal of the plant $y(t)$ at time $t \in \mathbb{R}$ to a digital measurement signal $y[k]$ at discrete time $k \in \mathbb{N}$. As mentioned in the previous blog, the digital signal is obtained by sampling and quantization of the analog signal. Finally, $\mathcal{H}$ denotes the reconstructor, also known as a digital-to-analog converter (DAC); it converts the digital control signal $u[k]$ provided by the controller to an analog control signal $u(t)$.


Figure 2 shows that the sampler consists of two devices, namely: i) a sample and hold (SH) device and ii) an analog-to-digital converter (ADC).

Figure 2. The sampler and its sub systems.

The SH device has two states. The first state is called the track state, in which the analog input signal is tracked by the SH device. The track state is also referred to as the sample state; however, because of the ambiguous meaning of the word sample we will use the term track, which is also commonly used in literature. The second state is called the hold state, in which the analog input signal is kept constant for a short period of time. The hold state is activated when the hold command is given. During the hold state the ADC is able to process and digitize the signal. The hold command can be triggered by any logic device. While there are many more aspects and subtleties to be considered with an SH device, this gives a summarized overview of its functionality.

Figure 3. The track and hold of a SH. Dashed red shows the original signal, solid blue shows the SH signal.

Various strategies exist for choosing the time instants $t_k$ at which the hold command is triggered. These are called sampling strategies.

  1. Periodic sampling; the sampling instants $t_k$ are equally spaced, that is to say $t_k = kh$ with $k \in \mathbb{N}$ and $h$, given in [s], being the sampling period. See also Figure 4(a). The reciprocal of the sampling period is the sampling rate or sampling frequency $f_s = \frac{1}{h}$, given in [Hz].
  2. Multi-order sampling; a finite pattern of sampling instants is repeated periodically. See Figure 4(b).
  3. Event-based sampling; the sampling instants $t_k$ are generated based on an event in the system. For instance, when a certain measurement threshold has been crossed. See Figure 4(c), in which the signal is sampled at every $\pm0.4x \pm 0.2$ value.
  4. Random sampling; the sampling instants $t_k$ are chosen at random, see Figure 4(d).

Figure 4. Visualization of different sampling strategies. The red line resembles the original analog signal, blue the sampled signal. Matlab code.

Periodic sampling is the most common in industry, for several reasons: i) it gives the most deterministic behavior, ii) it has been researched extensively, iii) it is the easiest to model and iv) it is the easiest for obtaining key performance indicators, for instance in the time and frequency domain. Finally, proving stability for the other sampling strategies is much harder than for periodic sampling.

Digital Control Systems 1.01

Digital control systems never got enough attention during my Master System & Control, in my opinion. In the various courses I was taught, continuous-time control systems were always examined, but almost never digital control systems. For most courses this is also perfectly fine. However, you would expect a more in-depth course about digital control systems in the curriculum, especially since almost all control systems nowadays are digital.

I remember a course in which each group had to program a robot to convey pizzas from one cabinet to another. A group mentioned, during their presentation, that they used a PID controller to control the various axes of the robot. I knew all the groups had designed their control algorithms in the continuous-time domain, therefore I asked: i) which discretization method(s) they had used, ii) if they had checked the stability and iii) if they had checked the robustness (phase, gain and modulus margin) of their digital control system with respect to the continuous-time controller. Unfortunately none of my questions were answered. This disappointed me and confirmed my opinion that not enough attention is given to digital control systems, since this was one of the last courses given before one starts the internship and graduation.

For the above reason, I will write a couple of blogs about digital control systems. I will try to focus on a single topic each blog and keep it clear and easy to understand; “If you can’t explain it simply, you don’t understand it well enough.” In this first blog I will briefly go into the pros and cons of digital control systems and their analog counterpart, followed by an introduction of the various types of signals that are present in a digital control system.

Pros and cons of digital control systems

Below is a list of pros and cons for both digital and analog control systems. Remark that there might be many more and that some pros or cons are subject to opinion, experience or interpretation.

Analog control

Pros:
  • Robust; do not (often) break down
  • Continuous processing; no inherent bandwidth limits

Cons:
  • Hard to modify; time-consuming, hardware needs to be rebuilt
  • Slow development; difficult to develop accurate designs due to tolerances in components
  • Complex; difficult to create complicated controllers, also hard to modify, testing alternatives is difficult, hard to build in intelligence, difficult to do multi-input-multi-output control
  • Disruptive noise
  • High cost

Digital control

Pros:
  • Flexible; ability to create complicated controllers, easy to modify and fast to develop, less susceptible to aging and environmental variations
  • Robust; performance or safety can be monitored, errors can be dealt with appropriately
  • Diagnostics; on-the-fly testing, parameters can be adjusted, measurement signals can be stored
  • Low cost

Cons:
  • Discrete; signal might be inaccurate due to finite word length, time delays in control loop, discrete controller, stability might be an issue
  • Complex; complicated controllers, difficult to obtain a high bandwidth

While analog control systems have considerable disadvantages, the flexibility of digital control systems stands out. This flexibility made companies adopt digital control over its analog counterpart, because it opened multiple doors; for instance, the rapid development of complex controllers which incorporate optimization algorithms to maximize performance and profit, all created at a tenth of the cost of the equivalent analog control system.

Types of signals in a digital control systems

The main disadvantage of digital control systems is found in the fact that one has to deal with discrete variables instead of continuous ones. Consequently, let us define the different types of signals. A continuous-time signal is a signal for which an amplitude is defined at every point in the time domain. Similarly, a discrete-time signal is a signal for which an amplitude is defined only at a sequence of points in time. Note that these points in time do not necessarily have to be equidistant; in case they are, the signal has an associated periodic interval. Likewise to the domain, signals may also be continuous or discrete with respect to their range, i.e., amplitude.

Summarized: in general, signals can be either continuous or discrete with respect to their domain or range. Herein, the domain is in general time, in the case of control systems, and the range can be anything: Voltage, Amperes, Newtons, meters per second, et cetera. Figure 1 shows the four possible combinations.

Figure 1. The four possible combinations of signal types. Matlab code.

In Figure 1(a) an analog signal is displayed, which is both continuous in time and amplitude. Figure 1(b) shows a sampled signal: continuous in amplitude and discrete in time. Figure 1(c) shows the quantized signal, discrete in amplitude and continuous in time. Finally, Figure 1(d) shows the digital signal. This signal is both discrete in time and amplitude. Remark: the exact definitions of a quantized signal and a sampled signal are often ambiguous. Often it is implicitly assumed that a quantized signal is also sampled, i.e., discrete in time. Likewise, the sampled signal is sometimes implicitly assumed to be quantized as well. Lastly, the digital signal can be obtained by sampling and quantization of the analog signal.

We have read about the pros and cons, and defined the different signal types present in a digital control system. To conclude, I want to point you to the books in the references below; these are the best literature references in this field. I also use them as a reference for these blogs, and will try to refer to a chapter or section from a book when addressing a certain topic. In the next blog we will focus on the different system components of a digital control system.


[1] Levine, William S., ed. “The Control Handbook: Control System Fundamentals”. CRC press, 2010.
[2] Åström, Karl J., and Björn Wittenmark. “Computer-controlled systems: theory and design”. Courier Corporation, 2013.
[3] Franklin, Gene F., et al. “Feedback control of dynamic systems”. Vol. 3. Reading, MA: Addison-Wesley, 1994.
[4] Wittenmark, Björn, Karl Johan Åström, and Karl-Erik Årzén. “Computer control: An overview”. IFAC Professional Brief, 2002.
[5] Ogata, K. “Discrete-Time Control Systems”. Pearson Education, 1995.