Iteration & Refinement Board

When working in an iteration (also referred to as a sprint) within the Scrum framework, the team can use a Kanban board. “Kanban” is the Japanese word for “visual signal” and is, like Scrum, a framework within the Agile model. A Kanban board is used to visualize the work to be done as cards on a board, in different states. This allows you to easily see the “big picture” of where the project currently stands, as well as identify potential bottlenecks that could affect productivity. When you use Scrum and Kanban in combination, this is also referred to as Scrumban.

Kanban board elements
A Kanban board with its key elements. Cards are meant to move from the left to the right column, which denotes that progress has been made. The columns denote the different states the cards are in. Work-In-Progress (WIP) limits can be set per column. Swimlanes can be used to separate different types of activities, e.g., per team, person or feature. Additional markers, visualizations and colors can be used on cards to further identify the work, for instance a marker to indicate how much work the task involves.

One can choose to use a physical board or a digital board, or even both. Both have pros and cons. One of the benefits of a digital board is that most of them automatically add all kinds of details to the cards on the board, such as the person working on the card, the amount of story points, the priority, the status and more. In addition, digital boards are flexible: you can filter, search or change them very quickly depending on your needs at that moment.

I want to share with you how we set up our Kanban board for the iteration execution and the backlog refinement, keeping in mind that we use the Scaled Agile Framework. Within our work environment we use JIRA as the issue-tracking and project-management software; as such, we have a digital board. There are many alternatives to JIRA, such as Microsoft Planner, Trello, et cetera.

Iteration Board

During the daily stand-up the iteration board (Scrum board, storyboard, Kanban board) is used to check on the progress of the iteration. The progress is defined using four separate columns and different statuses. Furthermore, we use swimlanes to categorize the work per feature.

Column | Status | What
To Do | n.a. | Iteration backlog: issues which were planned during the iteration planning event, currently waiting to be picked up by the team.
In Progress | In progress; Testing; Analysis | Issues which are in progress. Testing and analysis are added as statuses to give a little extra context.
Waiting | Review; Blocked | Issues which are waiting, such as for a review, or blocked for any reason, e.g., due to a dependency on a different team or supplier.
Done | n.a. | Issues which are resolved.

Additionally, one could add an extra column called “Verify”. Team members move a card to “Verify” when finished. The Product Owner of the team then assesses the card to determine if it is really done; if so, he moves it to “Done”. This makes the handshake between the Product Owner and the team more vivid.

Backlog Refinement

The backlog refinement board was created to make the refinement of the backlog more visual and explicit. During the iteration the board is checked by the team on an ad-hoc basis. Like the iteration board, it has different columns to distinguish the status of refinement. Additionally, “exit” criteria are defined, which elaborate on when a card is allowed to be moved to the next column.

Column | What | Exit criteria
Funnel | Issues which are created but still need further refinement. Anything can be put here. | Assigned to Program Increment; feature is known.
Refinement | Issues which need to be refined further. | Title gives context; assigned to a person; acceptance criteria known; story points known; team definition of ready is applied; assigned to iteration.
Review | Issues which are refined by the team and need to be reviewed by the Product Owner. | Reviewed by Product Owner; handshake between Product Owner and team.
Backlog | Issues which are agreed upon and are ready to be picked up in an iteration. | No exit criteria; the issue is now assigned to an iteration and can be picked up by the team.
Rejected | Issues which are rejected by the Product Owner and the team. | Not applicable.

My Movie & TV Series Viewing Behavior Analyzed

Somewhere in 2008 I made it a habit to add the movies and TV series I watched to my “watchlist” on IMDB. The watchlist is a simple way to keep track of what you have watched and what not. Currently there are 1391 titles on this list, click here to download it. Now I was quite curious how much time it would cost to watch all titles in one go.

This accumulated to the astonishing number of 386,156 minutes, which is equivalent to watching 268 days, 3 hours and 56 minutes non-stop. Of this, 103 days, 4 hours and 26 minutes are movies and 164 days, 23 hours and 30 minutes are TV series.
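As a sanity check, the split of the total into days, hours and minutes can be reproduced with a few lines of Python (a quick sketch using the total from above):

```python
# Total runtime of all titles on the watchlist, in minutes
total_minutes = 386156

# Split into days, then into hours and minutes
days, rem = divmod(total_minutes, 24 * 60)
hours, minutes = divmod(rem, 60)

print(f"{days} days {hours} hours and {minutes} minutes")
# prints "268 days 3 hours and 56 minutes"
```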

Because I was quite flabbergasted by that number I started doing some more research, just out of curiosity. Which genres did I watch the most? From which year did most of the movies and TV series originate? And since I have a Netflix subscription, I could also check when I watched what. Below are the results of the research, mainly for your enjoyment 😉. In addition, at the end of this blog is the code I used to extract the total amount of run-time from TheMovieDB. Note that the watchlist from IMDB also contains the run-time, but for a TV series it only displays the run-time per episode and not the total amount. Therefore, I used the database of TheMovieDB to obtain the correct value.

Finally, I was also wondering how much money was spent on Netflix. This accumulated to euro. Quite a number, but luckily we share our account with multiple people.

Number of entries based on the release year of the movie / tv series.
Number of entries based on genre.
Number of entries accumulated per month based on the data of Netflix
<script src="//code.jquery.com/jquery-1.12.4.min.js"></script>
<script>
// IMDB IDs, e.g., tt0111161 for The Shawshank Redemption https://www.imdb.com/title/tt0111161/
// Note: these can also be IDs from other movie websites, see: https://developers.themoviedb.org/3/find/find-by-id
var imdbIDs = ["tt0111161"];
// API key from https://www.themoviedb.org/documentation/api
var apikey = "";
var moviesRunTime = [];
var seriesRunTime = [];

// Look up the TheMovieDB identifier for an IMDB ID and fetch the runtime
function getMovieDBIdentifier(imdbID) {
  var url = "https://api.themoviedb.org/3/find/" + imdbID + "?api_key=" + apikey + "&language=en-US&external_source=imdb_id";
  return $.getJSON(url).then(function(json) {
    if (json.movie_results.length > 0) {
      return getMovieDBRunTimeMovies(json.movie_results[0].id);
    } else if (json.tv_results.length > 0) {
      return getMovieDBRunTimeSeries(json.tv_results[0].id);
    } else {
      console.log("No movie or TV result for " + imdbID);
    }
  });
}

// Find the runtime of a movie
function getMovieDBRunTimeMovies(id) {
  var url = "https://api.themoviedb.org/3/movie/" + id + "?api_key=" + apikey + "&language=en-US";
  return $.getJSON(url).then(function(json) {
    var runtime = json.runtime;
    if (!isNaN(runtime))
      moviesRunTime.push(runtime);
  });
}

// Find the total runtime of a TV series (runtime per episode times the number of episodes)
function getMovieDBRunTimeSeries(id) {
  var url = "https://api.themoviedb.org/3/tv/" + id + "?api_key=" + apikey + "&language=en-US";
  return $.getJSON(url).then(function(json) {
    var runtime = json.episode_run_time[0] * json.number_of_episodes;
    if (!isNaN(runtime))
      seriesRunTime.push(runtime);
    else
      console.log("Could not calculate runtime");
  });
}

// The requests are asynchronous, so only sum the runtimes after all of them have completed
Promise.all(imdbIDs.map(getMovieDBIdentifier)).then(function() {
  console.log("Sum movies runtime: " + moviesRunTime.reduce((a, b) => a + b, 0) + " minutes");
  console.log("Sum series runtime: " + seriesRunTime.reduce((a, b) => a + b, 0) + " minutes");
});
</script>

Discretization Techniques

In Digital Filter-Implementation we showed how digital filters are actually implemented using difference equations. In the blogs Filters and Controllers in control systems, we showed a couple of basic filters and the PID controller in the continuous-time domain. Finally, in Digital Control Systems 1.01 and 1.02 we showed what comes along with digital systems.

In this article we show how to transform transfer functions defined in the continuous-time domain to the discrete-time domain: discretization. There are several techniques to accomplish this: i) modeling the sampled system or ii) using Z-transform mappings.

  1. Modeling the sampled system
    • Zero-order hold
    • First-order hold
    • Other higher order hold circuits
  2. Z-transform mappings
    • Forward Euler
    • Backward Euler
    • Trapezoid / Tustin
    • Tustin with pre-warp
    • Matched pole zero
    • Modified matched pole zero

In this article we only discuss the Z-transform mapping techniques in detail.

Finite difference methods

The first three methods actually use finite differences to approximate the derivative and obtain a mapping from the continuous (s = j\omega_c) to the discrete (z = e^{j\omega_dh}) time domain. Herein, the forward and backward Euler methods use the following approximations

    \[\dot{x}(kh) \approx \frac{x(kh + h) - x(kh)}{h} \rightarrow s = \frac{z - 1}{h}\]

    \[\dot{x}(kh) \approx \frac{x(kh) - x(kh - h)}{h} \rightarrow s = \frac{1 - z^{-1}}{h}\]

Whereas the trapezoid method, also referred to as Tustin’s method, uses the approximation

    \[\frac{\dot{x}(kh) + \dot{x}(kh + h)}{2} \approx \frac{x(kh + h) - x(kh)}{h} \rightarrow s = \frac{2}{h}\frac{z-1}{z+1}\]

Note that with the forward Euler method, stable poles in s may be mapped to unstable poles in z. For both the backward Euler and Tustin’s method, stable poles in s imply stable poles in z.
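To make the stability remark concrete, here is a small Python sketch (illustrative only; the function name and the choice of a first-order low-pass H(s) = a/(s + a) are mine) that applies the three mappings by hand and returns the resulting discrete-time pole:

```python
import math

def discretize_first_order_lowpass(a, h):
    """Discrete pole of H(s) = a/(s + a) under the three Z-transform mappings."""
    return {
        # forward Euler, s = (z - 1)/h: pole 1 - a*h may leave the unit circle
        "forward": 1.0 - a * h,
        # backward Euler, s = (1 - z^-1)/h: pole 1/(1 + a*h) always in (0, 1)
        "backward": 1.0 / (1.0 + a * h),
        # Tustin, s = (2/h)(z - 1)/(z + 1): pole (2 - a*h)/(2 + a*h) always in (-1, 1)
        "tustin": (2.0 - a * h) / (2.0 + a * h),
    }

# A 400 Hz low-pass sampled at 5 kHz: all three mappings give a stable pole
poles = discretize_first_order_lowpass(2 * math.pi * 400, 1 / 5000)
assert all(abs(p) < 1 for p in poles.values())

# Sampling far too slowly (a*h > 2): only the forward Euler pole becomes unstable
poles = discretize_first_order_lowpass(2 * math.pi * 400, 1 / 1000)
```

With a*h > 2 the forward Euler pole lies outside the unit circle while the backward Euler and Tustin poles remain inside, illustrating the warning above.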

Tustin with pre-warping

When evaluating the discrete-time frequency response of the Tustin rule, H_d(z) with z = e^{j\omega_dh}, against the continuous-time transfer function H_c(s), a relation between the continuous- and discrete-time domains is revealed: every frequency point in the continuous-time domain is mapped to the discrete-time domain via \omega _{c}={\frac{2}{h}}\text{tan}\left(\omega_dh/2\right) \Leftrightarrow \omega_d = \frac{2}{h}\text{arctan}\left(\omega_ch/2\right).

    \begin{gather*}H_d(z) = H_c(s)|_{s = \frac{2}{h}\frac{z-1}{z+1}} \Leftrightarrow H_d(z) = H_c\left(\frac{2}{h}\frac{z-1}{z+1}\right) \\ \Leftrightarrow H_d(e^{j\omega_dh}) = H_c\left(\frac{2}{h}\frac{e^{j\omega_dh}-1}{e^{j\omega_dh}+1}\right) \Leftrightarrow H_d(e^{j\omega_dh}) = H_c\left(j\frac{2}{h}\text{tan}\left(\frac{\omega_dh}{2}\right)\right)\end{gather*}

The tangent introduces a non-linear mapping; the distortion is small when \omega_d h is small. We can compensate for this by setting \omega_c =\frac{2}{h}\text{tan}\left(\omega_dh / 2\right) for every frequency we have control over. That is to say, first replace the critical frequencies in the continuous-time transfer function via

    \begin{gather*}(s + \omega) \rightarrow (s + \hat{\omega}) \text{ with } \hat{\omega} = \frac{2}{h}\text{tan}\left(\omega h / 2\right) \\(s^2 + 2\beta\omega s + \omega^2) \rightarrow (s^2 + 2\beta\hat{\omega} s + \hat{\omega}^2) \text{ with } \hat{\omega} = \frac{2}{h}\text{tan}\left(\omega h / 2\right)\end{gather*}

Secondly, apply the normal Tustin transformation s = \frac{2}{h}\frac{z-1}{z+1}. Finally, scale the static gain of the discrete-time transfer function to match the gain of the continuous-time transfer function, e.g., by setting H_c(j\omega_0) = k \cdot H_d\left(e^{j\omega_0 h}\right) with \omega_0 = 0 and solving for k.

Remark that most programs, such as Matlab, only offer to match one critical frequency via their discretization (c2d) function. They apply the transformation

    \[s = \frac{\omega_0}{\text{tan}\left(\omega_0 h / 2\right)} \frac{z - 1}{z + 1}\]

in which \omega_0 is the frequency, chosen by the user, to be matched in both the continuous- and discrete-time domains.
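The warping effect, and how pre-warping cancels it, can be verified numerically. The following Python sketch (variable names are mine) takes a critical frequency of 950 Hz at a sampling frequency of 5000 Hz as an example:

```python
import math

fs = 5000.0          # sampling frequency [Hz]
h = 1.0 / fs
f_c = 950.0          # continuous-time critical frequency [Hz]
w_c = 2.0 * math.pi * f_c

# Plain Tustin maps the feature at w_c to w_d = (2/h)*arctan(w_c*h/2):
w_d = (2.0 / h) * math.atan(w_c * h / 2.0)
f_d = w_d / (2.0 * math.pi)   # lands noticeably below 950 Hz due to warping

# Pre-warping: substitute w_hat = (2/h)*tan(w_c*h/2) before applying Tustin,
# so that the warped feature comes out exactly at w_c:
w_hat = (2.0 / h) * math.tan(w_c * h / 2.0)
w_back = (2.0 / h) * math.atan(w_hat * h / 2.0)
assert abs(w_back - w_c) < 1e-9
```

Without pre-warping the 950 Hz feature ends up well below 950 Hz in the discrete-time response; with the pre-warped frequency it is mapped back exactly onto 950 Hz.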

Matched Pole Zero

This method maps the poles and zeros according to the relation z = e^{sh}. That is to say,

    \begin{gather*}(s + \omega) \rightarrow z - e^{-\omega h} \\(s + a)^2 + b^2 \rightarrow z^2 - 2\left(e^{-ah}\text{cos}\left( b h \right)\right)z + e^{-2ah}\end{gather*}

If the numerator (zeros) is of lower order than the denominator (poles), add powers of (z + 1) to the numerator, until numerator and denominator are of equal order. This causes an averaging of the current and past input values, as in Tustin’s method.

However, it may be desirable to have the output depend only on past values, for instance to save computing power. In that case, only add powers of (z + 1) until the numerator is of order one lower than the denominator. This method is then referred to as the modified matched pole zero.

As with the Tustin transform with pre-warping, scale the static gain of the discrete-time transfer function. Note that this method preserves stability and gives a smaller high-frequency response error.
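As an illustration, here is a minimal Python sketch of the matched pole zero method for a first-order lead/lag H(s) = (s + a)/(s + b) (the function and its name are mine; the numerator and denominator already have equal order, so no (z + 1) terms are needed):

```python
import math

def mpz_first_order(a, b, h):
    """Matched pole zero discretization of H(s) = (s + a)/(s + b):
    map the zero and the pole via z = exp(s*h), then choose the gain k
    such that the static (DC) gains match: H_c(0) = H_d(1)."""
    zero = math.exp(-a * h)   # s = -a  ->  z = e^{-a h}
    pole = math.exp(-b * h)   # s = -b  ->  z = e^{-b h}
    k = (a / b) * (1.0 - pole) / (1.0 - zero)
    return k, zero, pole      # H_d(z) = k * (z - zero) / (z - pole)

# Example: a = 100, b = 10, sampled at 1 kHz; the discrete DC gain
# k*(1 - zero)/(1 - pole) equals the continuous DC gain a/b = 10.
k, zero, pole = mpz_first_order(100.0, 10.0, 1e-3)
```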

Frequency response example

The figure below shows the frequency response of a continuous-time transfer function as well as the discretized transfer functions using the various methods discussed. The continuous-time transfer function is a PID controller with a second-order low-pass filter and two notches located at 950 and 1150 Hz. A sampling frequency of 5000 Hz was used.

The forward and backward Euler methods cannot track the controller correctly at around 300 Hz. Furthermore, they are unable to replicate the fast dynamical behavior of the two notches. As such, this is a clear example of why one should not use these methods. Tustin’s method shows the same dynamics as the continuous-time transfer function, but due to the distortion in the frequency domain the notches are not located at the correct frequency. Tustin’s method combined with pre-warping is able to track the notches, but loses magnitude when approaching the Nyquist frequency; in addition, there is a phase offset. The matched pole zero method matches the magnitude response very well, but has a large difference in phase which, depending on the performance criteria, might become problematic.

The Matlab code below was used to generate the plots.

clear all;
close all;
clc;

set(cstprefs.tbxprefs,'FrequencyUnits','Hz')

% Init variables and define continuous-time system
s = sym('s');
z = sym('z');
fs = 5e3;
h  = 1/fs;

Kp    = 1;
fd    = 100;
fi    = 10;
flp   = 400;
wlp   = 2*pi*flp;
blp   = 1;
 
fp{1} = 950;
fz{1} = 950;
bp{1} = 0.1;
bz{1} = 0.01;
wp{1} = 2*pi*fp{1};
wz{1} = 2*pi*fz{1};
 
fp{2} = 1150;
fz{2} = 1150;
bp{2} = 0.1;
bz{2} = 0.005;
wp{2} = 2*pi*fp{2};
wz{2} = 2*pi*fz{2};
 
Gc.sym = (wp{1}^2)/(wz{1}^2)*(s^2 + 2*bz{1}*wz{1}*s + wz{1}^2) / ...
                             (s^2 + 2*bp{1}*wp{1}*s + wp{1}^2) * ...
         (wp{2}^2)/(wz{2}^2)*(s^2 + 2*bz{2}*wz{2}*s + wz{2}^2) / ...
                             (s^2 + 2*bp{2}*wp{2}*s + wp{2}^2) * ...                     
         (Kp*(1/(2*pi*fd)*s + 1 + (2*pi*fi)/s)*(2*pi*flp)^2 / ...
         (s^2 + 2*blp*(2*pi*flp)*s + (2*pi*flp)^2));

[Gc.num,Gc.den] = numden(Gc.sym);
Gc.tf = tf(sym2poly(Gc.num),sym2poly(Gc.den));

% Forward Euler
Gd1.sym = subs(Gc.sym,s,(z-1)/h);
[Gd1.num,Gd1.den] = numden(Gd1.sym);
Gd1.tf = tf(sym2poly(Gd1.num),sym2poly(Gd1.den),h);

% Backward Euler
Gd2.sym = subs(Gc.sym,s,(z-1)/(h*z));
[Gd2.num,Gd2.den] = numden(Gd2.sym);
Gd2.tf = tf(sym2poly(Gd2.num),sym2poly(Gd2.den),h);

% Tustin
Gd3.sym = subs(Gc.sym,s,2*(z-1)/(h*(z+1)));
[Gd3.num,Gd3.den] = numden(Gd3.sym);
Gd3.tf = tf(sym2poly(Gd3.num),sym2poly(Gd3.den),h);

% Tustin with pre-warping
wlp   = 2/h*tan(wlp*h/2);
flp   = wlp/(2*pi);
wp{1} = 2/h*tan(wp{1}*h/2);
wz{1} = 2/h*tan(wz{1}*h/2);
wp{2} = 2/h*tan(wp{2}*h/2);
wz{2} = 2/h*tan(wz{2}*h/2);
      
Gc2.sym = (wp{1}^2)/(wz{1}^2)*(s^2 + 2*bz{1}*wz{1}*s + wz{1}^2) / ...
                              (s^2 + 2*bp{1}*wp{1}*s + wp{1}^2) * ...
          (wp{2}^2)/(wz{2}^2)*(s^2 + 2*bz{2}*wz{2}*s + wz{2}^2) / ...
                              (s^2 + 2*bp{2}*wp{2}*s + wp{2}^2) * ...                     
          (Kp*(1/(2*pi*fd)*s + 1 + (2*pi*fi)/s)*(2*pi*flp)^2 / ...
          (s^2 + 2*1*(2*pi*flp )*s + (2*pi*flp)^2));   

[Gc2.num,Gc2.den] = numden(Gc2.sym);
Gc2.tf = tf(sym2poly(Gc2.num),sym2poly(Gc2.den));

Gd4.sym = subs(Gc2.sym,s,2*(z-1)/(h*(z+1)));
[Gd4.num,Gd4.den] = numden(Gd4.sym);
Gd4.tf = tf(sym2poly(Gd4.num),sym2poly(Gd4.den),h);

% Matched Pole Zero
Gd5.tf = c2d(Gc.tf,h,'matched');

% Frequency spacing
f = logspace(0,log10(5000),3000);
wc = 2*pi*f;
wd = wc(wc < 2*pi*2500);

% Plot
figure;
hold all;
bode(Gc.tf,wc);
bode(Gd1.tf,wd);
bode(Gd2.tf,wd);
bode(Gd3.tf,wd);
bode(Gd4.tf,wd);
bode(Gd5.tf,wd);
xlim([1 3500]);
grid on;
legend({'Continuous-time','Forward Euler','Backward Euler', ...
    'Tustin','Tustin with pre-warping','Matched Pole Zero'});

Implementation of filters

The discrete-time transfer function H(z) is often obtained from its counterpart, the continuous-time transfer function H(s) via discretization. A discrete-time transfer function has the following form:

(1)   \begin{equation*}H(z) = \frac{Y(z)}{X(z)} = \frac{\sum_{i = 0}^{N} b_i z^i}{\sum_{j = 0}^{M} a_j z^j}\end{equation*}

Herein, X(z) is the input and Y(z) is the output of the system; N and M are the degrees of the numerator and denominator, respectively, where N \leq M, meaning we are dealing with a proper transfer function. While (1) is valid for any order, it is not recommended to directly implement transfer functions of high order, since these can introduce numerical problems very quickly. Rather, factorize the numerator and denominator of (1) into a cascade of first- and second-order polynomials.

    \begin{equation*}H(z) = K \frac{\displaystyle \prod_{i = 0}^V \left( z + b_{0i} \right)}{\displaystyle \prod_{i = 0} ^W\left( z + a_{0i} \right)} \frac{\displaystyle \prod _{i = 0}^N \left( z^2 + b_{1i} z + b_{2i} \right)}{\displaystyle\prod_{i = 0}^M \left( z^2 + a_{1i} z + a_{2i} \right)}\end{equation*}

Now let us look at a simple discrete-time transfer function of order two:

    \begin{equation*}H(z) = \frac{b_0 z^2 + b_1 z + b_2}{z^2 + a_1 z + a_2}\end{equation*}

This function is non-causal, because it depends on future inputs. Therefore, both the numerator and denominator are multiplied by the reciprocal of the highest power of z occurring in the denominator, in this case z^{-2}, to make the system causal. Hence, we obtain:

    \begin{equation*}H(z) = \frac{b_0 + b_1 z^{-1} + b_2 z^{-2}}{1 + a_1 z^{-1} + a_2 z^{-2}}\end{equation*}

Next, using the linearity and time-shifting properties of the \mathcal{Z}-transform, i.e., \mathcal{Z}(a_1 x_1[n] + a_2 x_2[n]) = a_1 X_1(z) + a_2 X_2(z) and \mathcal{Z}(q^{-k}x[n]) = z^{-k}X(z), we obtain the difference equation. Remark that the shift operator q is defined by q^kx[n] = x[n+k], the forward shift operator, and q^{-k}x[n] = x[n-k], the backward shift (delay) operator. As a result we obtain,

(2)   \begin{equation*}y[n] = \frac{b_0 + b_1 q^{-1} + b_2 q^{-2}}{1 + a_1 q^{-1} + a_2 q^{-2}} x[n]\end{equation*}

Rewriting (2) gives us:

    \begin{gather*}\left( 1 + a_1 q^{-1} + a_2 q^{-2} \right) y[n] = \left( b_0 + b_1 q^{-1} + b_2 q^{-2} \right) x[n] \\\Leftrightarrow y[n] + a_1 y[n-1] + a_2 y[n-2] = b_0 x[n] + b_1 x[n-1] + b_2 x[n-2] \\\Leftrightarrow y[n] = b_0 x[n] + b_1 x[n-1] + b_2 x[n-2] - a_1 y[n-1] - a_2 y[n-2]\end{gather*}

This last equation is the difference equation, which we can easily implement on our digital platform. Numerous methods exist for implementing a filter. Four of these methods are closely related to each other:

  • Direct form I
  • Direct form II
  • Transposed direct form I
  • Transposed direct form II

Direct form I

The direct form I is an FIR filter followed by an IIR filter. That is to say, it first implements the zeros (the b coefficients) and then the poles (the a coefficients):

(3)   \begin{equation*}y[n] = b_0 x[n] + b_1 x[n-1] + b_2 x[n - 2] - a_1 y[n-1] - a_2 y[n-2]\end{equation*}

In an algorithm you can implement it as:

yn = b0 * xn + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
x2 = x1
x1 = xn
y2 = y1
y1 = yn

where x1, x2, y1 and y2 are the four state variables.

Direct form II

The direct form II is an IIR filter followed by an FIR filter, implementing the poles (the a coefficients) first and then the zeros (the b coefficients).

Signal flow diagram of direct form II, where s[n] is a state variable.

(4)   \begin{align*}s[n] &= x[n] - a_1 s[n-1] - a_2 s[n-2] \\y[n] &= b_0 s[n] + b_1 s[n-1] + b_2 s[n-2]\end{align*}

In an algorithm you can implement it as:

s0 =      xn - a1 * s1 - a2 * s2
yn = b0 * s0 + b1 * s1 + b2 * s2
s2 = s1
s1 = s0

where s1 and s2 are the state variables and s0 is the intermediate value computed each step.

Direct form I transposed

Both direct forms can be converted to an equivalent transposed form via:

  • Reverse direction of each interconnection
  • Reverse direction of each multiplier
  • Change junctions to adders and vice-versa
  • Interchange the input and output signals
Signal flow diagram of direct form I transposed.

(5)   \begin{align*}y[n] &= b_0 v[n] + b_1 v[n-1] + b_2 v[n-2] \\v[n] &= x[n] - a_1 v[n-1] - a_2 v[n-2] \\\end{align*}



In an algorithm you can implement it as:

vn = xn + s2
yn = s4 + b0 * vn
s4 = s3 + b1 * vn
s3 = b2 * vn
s2 = s1 - a1 * vn
s1 = - a2 * vn

Note that this form is very inefficient. You can easily shift the delays to the center, obtaining the direct form II.

Direct form II transposed

Signal flow diagram of direct form II transposed.

(6)   \begin{align*}s_1[n] &= b_2 x[n-1] - a_2 y[n-1] \\s_2[n] &= b_1 x[n-1]  - a_1 y[n-1] + s_1[n-1] \\y[n] &= b_0 x[n] + s_2[n]\end{align*}

In an algorithm you can implement it as:

yn = s2 + b0 * xn
s2 = s1 + b1 * xn - a1 * yn
s1 = b2 * xn - a2 * yn
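The algorithm listings above translate to running code almost verbatim. As a check, here is a Python sketch of the direct form II transposed biquad (the class and function names are mine), together with a direct evaluation of the difference equation (3) for comparison:

```python
class BiquadDF2T:
    """Second-order section in direct form II transposed."""
    def __init__(self, b0, b1, b2, a1, a2):
        self.b0, self.b1, self.b2 = b0, b1, b2
        self.a1, self.a2 = a1, a2
        self.s1 = self.s2 = 0.0   # the two state variables

    def step(self, xn):
        yn = self.s2 + self.b0 * xn
        self.s2 = self.s1 + self.b1 * xn - self.a1 * yn
        self.s1 = self.b2 * xn - self.a2 * yn
        return yn

def direct_difference_equation(b, a, x):
    """Evaluate y[n] = b0 x[n] + b1 x[n-1] + b2 x[n-2] - a1 y[n-1] - a2 y[n-2]."""
    y = []
    for n in range(len(x)):
        xm1 = x[n - 1] if n >= 1 else 0.0
        xm2 = x[n - 2] if n >= 2 else 0.0
        ym1 = y[n - 1] if n >= 1 else 0.0
        ym2 = y[n - 2] if n >= 2 else 0.0
        y.append(b[0] * x[n] + b[1] * xm1 + b[2] * xm2 - a[0] * ym1 - a[1] * ym2)
    return y

# Both implementations produce identical outputs for arbitrary coefficients:
biq = BiquadDF2T(1.0, 0.5, 0.25, -0.5, 0.1)
x = [1.0] + [0.0] * 9                         # impulse input
y1 = [biq.step(v) for v in x]
y2 = direct_difference_equation([1.0, 0.5, 0.25], [-0.5, 0.1], x)
```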

Which tripod to buy?

In August I went on holiday to Sri Lanka. During this trip I wanted to do some landscape and long-exposure photography. A tripod is really necessary for the latter. Furthermore, it was also a good excuse to justify buying a tripod. However, as with any product, there are many options on the market and it is sometimes difficult to find the best buy. Each of them has different features and specifications: expensive versus cheap, light versus heavy, small versus big, et cetera. As with most products I buy, I tried to minimize the price while maximizing the features. The two most important requirements which I wanted to optimize, besides the cost, were

  1. Minimize the weight of the tripod while ensuring its stability and endurance
  2. Maximize the unfolded height of the tripod but minimize the folded height

Based on those criteria you are quickly directed to the tripods which are marketed as travel tripods. Below is a table with the tripods which I took into consideration.

Tripod | Manfrotto BeFree | MeFoto Roadtrip Travel | Sirui T-025X | Oben CT-3535
Minimal height [cm] | 34 | 0 | 10.5 | 21.8
Maximal height [cm] | 144 | 162.5 | 147.5 | 131
Folded height [cm] | 40 | 39 | 31 | 30.5
Weight [kg] | 1.4 | 1.6 | 0.9 | 1.1
Maximum load [kg] | 4 | 8 | 6 | 4
Material | Aluminium | Aluminium | Carbon | Carbon
Head mount? | Y | Y | Y | Y
Quick release? | Y | Y | Y | Y
Monopod? | N | Y | N | N

After balancing the advantages and disadvantages I decided to go for the MeFoto Roadtrip, for several reasons:

  1. It had good reviews on the internet; one which I particularly liked was the review by Jason in the YouTube video below, which I also recommend you watch.
  2. The tripod was not too expensive compared to the other tripods.
  3. It had the highest unfolded height.
  4. Of all the tripods in the table above it is the heaviest; however, I still considered 1.6 kg lightweight.
  5. Because I wasn’t planning to use the tripod on a daily basis, I decided that an aluminium tripod, instead of a carbon fiber one, would be durable enough.
  6. The center column to which the camera is attached has a hook at the bottom to which extra weight can be added to increase the stability of the tripod. In addition, the column can be mounted upside down, which makes it possible to take photos directly at ground level. These were two features which I particularly liked.

Apart from the reasons above, the MeFoto Roadtrip also had most of the features which the other tripods offered, so the decision was made.

If you are ever looking for a reason to buy a tripod, consider the following:

  1. A tripod helps when using long exposures; for instance, you will need one when doing astrophotography.
  2. Taking time lapses, panoramas, panning shots and videos becomes a lot easier since you have great support for your camera.
  3. When using a tripod you actually think a lot more about the framing of your shot, because everything takes a bit longer: setting up the tripod, adjusting, et cetera.
  4. Self portraits are of a higher quality than when using a selfie stick.

Maintaining an overview of your financial expenses

At some point you will want to keep track of your expenses to gain insight into where you can save money. There are many websites and programs which offer insight into your expenses, for instance YouNeedABudget, MoneyDashboard and GNUCash. However, for the following three main reasons I decided to simply create my own expenses spreadsheet using Microsoft Excel.

  1. The programs often have a learning curve and you have to adapt to their way of working.
  2. I wanted to use a tool of which I can be almost certain that it will still exist in 20+ years.
  3. Sometimes you will find yourself wanting insight into details which are not (directly) provided by the tool. Therefore, I wanted to be able to program such things myself, and this should be fairly simple to do.

Now let me tell you what my spreadsheet provides. Since I wanted to categorize my expenses, I created one sheet with categories and subcategories. To keep track of all the transactions, a table was created in which all the main transaction details are logged. The table has the following fields: bank account, type of account (saving or payment?), date, type (income or expense?), amount, currency, kind (variable or fixed?), category, subcategory and remarks. While there is a currency field, it is not used yet; the main reason is that, fortunately, I do not have to deal with multiple currencies.

To gain insight into the data from the table, three pivot tables and charts were created. Using the pivot tables and charts, which are semi-interactive, one is able to, for instance, zoom in on a particular category or time span.

Finally, I created a kind of dashboard in which all income and expense data is summarized per month. First, details are displayed about the balance of my payment and savings accounts, followed by all the expense details, sorted by category. Using the outline group functionality one is able to open a category and list the subcategories for that month. For each category and subcategory I show its portion (as a percentage) of all expenses, the median per month and the average per month. Using conditional formatting this data is also visualized, such that you are able to identify the ratios and relationships quickly. Sparklines are used to show the trend of the expenses throughout the year. Concluding, the dashboard sheet makes sure that all the data and details are visible at a glance, while still making it possible to dive into the details.

You can download an example with dummy data of the Excel sheet over here: Financien (example)

(Modified) PERT distribution

In the 1950s the United States Navy developed the Program Evaluation Research Task (PERT). It was designed to analyze the duration of a project and of the tasks within the project. Each task in the project is given the following properties:

  • Name of the task
  • Predecessors, the list of tasks that have to be completed before the task can start.
  • The amount of time it will take for a task to be finished.

The duration of a task is often not fixed; it can vary between a minimum and a maximum. The PERT distribution was created to give a good estimate of the probability distribution of a task’s duration. It uses the same three parameters as the triangular distribution, namely the minimum (\text{min}), the most likely value (\text{mode}) and the maximum (\text{max}). The probability density function (PDF) is given by

    \begin{equation*} f(x) = \frac{1}{B(\alpha_1,\alpha_2)}\frac{(x - \text{min})^{\alpha_1 - 1} (\text{max} - x)^{\alpha_2 - 1}}{(\text{max} - \text{min})^{\alpha_1 + \alpha_2 - 1}} \end{equation*}

where

    \begin{equation*} \alpha_1 = 6 \left( \frac{\mu - \text{min}}{\text{max} - \text{min}} \right), \quad \alpha_2 = 6 \left( \frac{\text{max} - \mu}{\text{max} - \text{min}} \right) \end{equation*}

with

    \begin{equation*} \mu = \frac{\text{min} + 4\text{mode} + \text{max}}{6} \end{equation*}

being the mean.

An additional shape parameter \gamma may be added, in which case we are dealing with the modified PERT (MPERT) distribution. The parameter influences the peakedness of the distribution. The only difference with PERT is the definition of \alpha_1 and \alpha_2, which are now defined as:

    \begin{equation*} \alpha_1 = 1 + \gamma \left( \frac{\mu - \text{min}}{\text{max} - \text{min}} \right), \quad \alpha_2 = 1 + \gamma \left( \frac{\text{max} - \mu}{\text{max} - \text{min}} \right) \end{equation*}

I created some Matlab code which includes functions for the PDF and the (inverse) cumulative distribution function (CDF). The inverse CDF is important when you want to generate random numbers. The code can be downloaded from here.
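For readers not using Matlab, the PDF above also translates directly to Python (a sketch of the formulas in this post; the function name is mine, and the Beta function B(\alpha_1, \alpha_2) is computed via the Gamma function):

```python
import math

def pert_pdf(x, mn, mode, mx):
    """PDF of the PERT distribution with parameters min, mode and max."""
    mu = (mn + 4.0 * mode + mx) / 6.0      # the mean
    a1 = 6.0 * (mu - mn) / (mx - mn)       # shape parameter alpha_1
    a2 = 6.0 * (mx - mu) / (mx - mn)       # shape parameter alpha_2
    beta = math.gamma(a1) * math.gamma(a2) / math.gamma(a1 + a2)  # B(a1, a2)
    return ((x - mn) ** (a1 - 1.0) * (mx - x) ** (a2 - 1.0)
            / (beta * (mx - mn) ** (a1 + a2 - 1.0)))
```

A quick check: numerically integrating the PDF over [min, max] should give 1, and the weighted integral should reproduce the mean (min + 4 mode + max)/6.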

Finally, below is an interactive example of the PDF of the PERT (blue) and MPERT (red) distributions.