# Recursion Tree Method to Solve Recurrences

Hi, I am Saurav. Today I'm writing about how to solve recurrence relations using the recursion tree method.

In this method, we convert the recurrence into a tree and then sum the costs of all the levels of the tree. This will become clear from the examples given below.

Although this method uses the term “tree”, you’ll still be able to understand this chapter even without the knowledge of trees.

Let's take an algorithm and its recurrence equation:

```
FOO1(A, left, right)
    if left < right
        mid = floor((left + right) / 2)
        FOO1(A, left, mid)
        FOO1(A, mid+1, right)
        FOO2(A, left, mid, right)
```

T(n) = cn + 2T(n/2)

Here, T(n) is the running time of the entire algorithm, and it is broken into two terms: cn and 2T(n/2). The statements checking the condition (`if left < right`) and calculating the middle element (`mid = floor((left+right)/2)`) take constant time. However, the time taken by the `FOO2(A, left, mid, right)` function is linear, i.e., Θ(n) (as stated in the previous chapter), and is thus represented by cn, where c is a constant and n is the size of the array A. The problem is then further broken down into two subproblems of cost T(n/2) each, i.e., `FOO1(A, left, mid)` and `FOO1(A, mid+1, right)`.

Here, we have represented T(n) as two subproblems plus the cost cn, which is the total cost apart from these two subproblems.
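The pseudocode above can be sketched as runnable Python. Here `foo2` is assumed to be a linear-time merge of two sorted halves (the cn term, matching what the previous chapter states); the function names are just the document's own placeholders:

```python
def foo2(A, left, mid, right):
    # Merge the sorted halves A[left..mid] and A[mid+1..right].
    # This takes time linear in the number of elements merged (the cn term).
    merged = []
    i, j = left, mid + 1
    while i <= mid and j <= right:
        if A[i] <= A[j]:
            merged.append(A[i])
            i += 1
        else:
            merged.append(A[j])
            j += 1
    merged.extend(A[i:mid + 1])
    merged.extend(A[j:right + 1])
    A[left:right + 1] = merged

def foo1(A, left, right):
    # T(n) = cn + 2T(n/2): two recursive calls plus a linear merge.
    if left < right:
        mid = (left + right) // 2
        foo1(A, left, mid)
        foo1(A, mid + 1, right)
        foo2(A, left, mid, right)

A = [5, 2, 9, 1, 7]
foo1(A, 0, len(A) - 1)
print(A)  # [1, 2, 5, 7, 9]
```

Each call does constant work itself, delegates the linear work to `foo2`, and recurses on two halves, which is exactly the shape of the recurrence.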

Similarly, we can further break T(n/2) as shown in the picture given below.

From the above picture, we can see that the cost of the top level is cn. The next level also has a cost of cn (cn/2 + cn/2). The same holds for the level after that: its cost is (cn/4 + cn/4 + cn/4 + cn/4), i.e., cn.

To generalize, we can say that at level i, there are 2^i nodes and each of them contributes cn/2^i to the cost, for a level total of (2^i) × (cn/2^i) = cn.

Now, we know that each level is taking cn time and if we calculate the total number of levels then we can easily know the total time taken by the algorithm.
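The claim that every level costs cn can be checked by listing the subproblem sizes level by level. A minimal sketch (assuming n is a power of 2 so the halves stay exact):

```python
def level_sizes(n):
    # Subproblem sizes at each level of the tree for T(n) = cn + 2T(n/2).
    # At level i there are 2^i subproblems of size n/2^i, so the sizes at
    # every level sum to n -- i.e., every level costs c*n.
    levels = [[n]]
    while levels[-1][0] > 1:
        levels.append([s // 2 for s in levels[-1] for _ in range(2)])
    return levels

for lvl in level_sizes(8):
    print(len(lvl), sum(lvl))  # 1 8 / 2 8 / 4 8 / 8 8
```

For n = 8, every level's sizes sum to 8, and the number of levels printed is 1 + lg(8) = 4.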

From the previous chapter, we know that our base case is when `left` is equal to `right`. It means that we stop breaking the problem into subproblems when the size becomes 1, so there will be n (the size of the problem) such subproblems which won't be broken down further.

Since the size of our problem is n, there will be n nodes that are not broken down further (the leaves). If the last level of the tree is level i, then

2^i = n

=> i = lg(n)

Thus, the total number of levels is 1 + lg(n) (counting of the levels starts from 0).

Now, we know that there are a total of 1 + lg(n) levels and each level contributes a cost of cn, so the total time should be

(1 + lg(n)) × cn = cn·lg(n) + cn

By ignoring the lower-order terms and the constants, we get the order of growth to be **Θ(n·lg(n))**.

Let’s take a look at one more example.

T(n) = 3T(n/4) + cn²

Here, T(n) is made up of two terms: cn² and 3T(n/4). Similar to the previous example, cn² simply represents the non-recursive cost, and 3T(n/4) can be viewed as the breaking of the main problem into 3 subproblems, each of size n/4.

We can further break it as

By looking at the pattern, we can say that any level i has 3^i nodes, and each node contributes c(n/4^i)², giving a total level cost of

3^i × c(n/4^i)² = (3/16)^i · cn²
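These per-level costs can be checked numerically for the recurrence T(n) = 3T(n/4) + cn². A small sketch, taking c = 1 for illustration:

```python
def level_cost(n, i, c=1.0):
    # Level i of the tree for T(n) = 3T(n/4) + c*n^2 has 3^i nodes,
    # each costing c*(n/4^i)^2, so the level total is (3/16)^i * c * n^2.
    return 3**i * c * (n / 4**i) ** 2

costs = [level_cost(16, i) for i in range(3)]
print(costs)  # [256.0, 48.0, 9.0] -- each level is 3/16 of the one above
```

Unlike the first example, the level costs here shrink geometrically, which is why the top level ends up dominating the total.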

Now, we know the contribution of each level, so the calculation of the total number of levels will lead us to the total cost of the algorithm.

We know that there won’t be any division of the problem at the last level, so the size of each subproblem at the last level will be 1.

The size of a subproblem at level i is n/4^i, which equals 1 when i = log_4(n), so the last level is level log_4(n). Since the number of nodes at any level i is 3^i, the number of nodes at the last level can be written as 3^(log_4 n) = n^(log_4 3).

So at the last level, there are n^(log_4 3) nodes and each contributes T(1). Thus, the total contribution of the last level is n^(log_4 3) × T(1) = Θ(n^(log_4 3)).

So, the total running time of the algorithm T(n) can be written as

T(n) = cn² + (3/16)·cn² + (3/16)²·cn² + … + (3/16)^(log_4(n) − 1)·cn² + Θ(n^(log_4 3))

The sum above (excluding the last term) is a GP (geometric progression) with the first term (a) = cn² and the common ratio (r) = 3/16.

One thing we can do here is to extend this finite GP to an infinite one. We have converted our problem from a finite GP to an infinite one, and we can still get the upper bound (O), if not the tight bound (Θ). Using the summation formula of an infinite GP,

a/(1 − r) = cn²/(1 − 3/16) = (16/13)·cn²

so T(n) = O(n²).
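The infinite-GP bound can be sketched as a quick numerical check, again with c = 1 for illustration: the finite sum of level costs never exceeds cn²/(1 − 3/16) = (16/13)·cn².

```python
def finite_sum(n, levels, c=1.0):
    # Sum of the per-level costs (3/16)^i * c*n^2 for i = 0 .. levels-1.
    return sum((3 / 16) ** i * c * n**2 for i in range(levels))

def infinite_bound(n, c=1.0):
    # Infinite GP: a/(1 - r) with a = c*n^2 and r = 3/16, i.e. (16/13)*c*n^2.
    return c * n**2 / (1 - 3 / 16)

print(finite_sum(100, 10) <= infinite_bound(100))  # True: the O(n^2) bound holds
```

Dropping the upper limit of the sum only adds positive terms, which is exactly why the infinite sum is still a valid upper bound.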

Let's look at one more example:

T(n) = T(n/3) + T(2n/3) + cn

Here, the problem is divided into two subproblems of size n/3 and 2n/3. Since the subproblem on the left side is smaller than that on the right side, the breaking of the left side into subproblems will finish earlier than that of the right one. Thus, we can focus on the rightmost subproblem at each level to get an upper bound on the algorithm, because these subproblems last the longest.

One thing to notice from the above picture is that the total cost of each level is initially cn, but as the subproblems on the left side start finishing, it becomes less than cn. So, if we do our calculations using cn for every level, the real time taken by the algorithm will of course be less, but we will get an upper bound for our algorithm.

Thus, we have decided that we are going to calculate the upper bound by considering the cost of each level to be cn and considering only the longest path i.e., the number of levels (or nodes) in the longest path.

Looking at the longest path, we can see that the size of the subproblems is decreasing as

n, (2/3)·n, (2/3)²·n, …

So, at any level i, the size of the subproblem can be written as (2/3)^i × n. At the last level this size is 1, so (2/3)^i × n = 1, i.e., i = log_{3/2}(n).

Now, we know that there are 1 + log_{3/2}(n) levels in the longest path and each level will take at most cn time, i.e.,

T(n) ≤ cn·(1 + log_{3/2}(n))
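The depth of the longest (rightmost) path can be sketched by repeatedly shrinking n by a factor of 2/3 until the subproblem size reaches 1; the step count comes out to roughly log_{3/2}(n):

```python
import math

def longest_path_levels(n):
    # Follow the rightmost branch of T(n) = T(n/3) + T(2n/3) + cn:
    # the subproblem size shrinks as (2/3)^i * n, reaching 1 after
    # roughly log_{3/2}(n) steps.
    steps = 0
    size = float(n)
    while size > 1:
        size = size * 2 / 3
        steps += 1
    return steps

print(longest_path_levels(729), math.log(729, 1.5))  # 17 vs 16.25...
```

The simulated depth is the ceiling of log_{3/2}(n), consistent with the level count used in the bound above.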

Ignoring the lower order and constant terms, T(n)=O(nlogn)

Till now, we have studied two methods to solve a recurrence equation. The third and last method which we are going to learn is the Master’s Method. This makes the analysis of an algorithm much easier and directly gives us the result for the 3 most common cases of recurrence equations. So, let’s visit the next chapter and learn about the Master’s Method.

Don’t watch the clock; do what it does. Keep going. — Sam Levenson