The tutorial describes a top-down merge sort that recursively divides a data set in half. Assuming a non-hybrid top-down merge sort, no merging takes place until the recursion first produces two sub data sets of size 1. The merging process then follows the recursion path, depth first, left first.
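As a sketch of the top-down scheme described above (function names are my own, not from the tutorial), splitting continues until a run of size 1 is reached, and merges happen as the recursion unwinds, depth first, left first:

```python
def merge(left, right):
    # Merge two sorted lists into one sorted list.
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    result.extend(left[i:])
    result.extend(right[j:])
    return result

def merge_sort_top_down(data):
    # A run of size 0 or 1 is already sorted; this is the recursion base case.
    if len(data) <= 1:
        return list(data)
    mid = len(data) // 2
    # Sort each half recursively (left first), then merge the results.
    return merge(merge_sort_top_down(data[:mid]),
                 merge_sort_top_down(data[mid:]))
```

Copying slices on each call keeps the sketch short; a production version would merge in place between the array and a work buffer instead.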
Not mentioned in the tutorial is bottom-up (iterative) merge sort, which skips the recursion entirely: it simply treats a data set of n elements as n runs of size 1 and merges runs in passes until a single sorted run remains, which is the sorted data set. Normally the merging is done left to right, doubling the run size on each pass. Most library versions of merge sort are variations of the bottom-up approach (timsort, for example, is a hybrid that starts from naturally occurring runs).
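A minimal sketch of that bottom-up scheme (the function name is mine; the standard-library `heapq.merge` stands in for a hand-written merge step): each pass walks the array left to right merging adjacent runs of the current width, and the width doubles until it covers the whole array:

```python
import heapq

def merge_sort_bottom_up(data):
    items = list(data)
    n = len(items)
    width = 1  # start with n runs of size 1
    while width < n:
        # One left-to-right pass: merge each adjacent pair of runs.
        for lo in range(0, n, 2 * width):
            mid = min(lo + width, n)      # end of the left run
            hi = min(lo + 2 * width, n)   # end of the right run
            # heapq.merge combines two sorted iterables into one sorted stream.
            items[lo:hi] = heapq.merge(items[lo:mid], items[mid:hi])
        width *= 2  # runs are twice as long after each pass
    return items
```

Note the `min` clamps: when n is not a power of two, the last run in a pass may be short or may have no partner, and clamping to n handles both cases (a run with no partner is "merged" with an empty slice and passes through unchanged).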
Despite the favorable cache locality that recursion provides for the first few levels, bottom-up merge sort tends to be a bit faster than top-down, but not by much: most of the time is spent merging rather than generating indices (or pointers) via recursion or iteration, and the merge step itself can be identical for both variants.
Legacy versions of merge sort, dating back to the days of tape drives, are iterative merge sort or a variation called polyphase merge sort. I'm not sure when or why the recursive top-down variant became popular.
Wikipedia's article on merge sort includes simple (non-optimized) examples of recursive and iterative merge sort for both arrays and linked lists.