Solving problems recursively
Recursion is a natural way to implement an exhaustive search: one step at a time, the search considers all possibilities for arriving at a solution. This makes recursion a standard tool for solving search problems.
Here we introduce the basic concepts of recursive search by pursuing one case study. Search problems of this type are studied in more detail in, for example, the course CS-E3200, Discrete Models and Search.
The subset sum problem
As our case study, let us consider the following mathematical problem:
Note
Subset Sum. Given as input a set \(W = \{w_1,w_2,\ldots,w_n\}\) of \(n\) integers (the steps) and an integer \(t\) (the target), is there a subset \(S \subseteq W\) with \(\sum_{z\in S}z = t\)?
Example 1. Suppose the steps are \(W = \{-9,-3,7,12\}\) and the target is \(t = 10\). Then, we have a solution \(S=\{-9,7,12\}\). Indeed, \(-9+7+12=10\).
Example 2. If the steps are \(W = \{-9,-3,7,12\}\) and the target is \(t = 6\), then there is no solution.
The examples above were of course very small and easily solvable even by hand calculation. So let us look at a somewhat larger example:
Example 3. Suppose that \(W = \{4,-5,9,17,20,11,-17,63,255,-97,66,42,-33\}\) and \(t = 27\).
This already looks like something that will take some effort to solve manually, so let us rather automate the process of searching for a solution.
Clearly, to find a solution to a given instance \((W,t)\), it suffices to exhaustively search through all possible subsets of \(W\) for a subset \(S\subseteq W\) with \(\sum_{z\in S}z = t\). It turns out that recursion is a very natural programming strategy to implement such an exhaustive search.
Recursive search for a solution
Suppose we are searching for a solution to an instance consisting of the steps \(W=\{w_1,w_2,\ldots,w_n\}\) and the target \(t\). For \(n=0\), there is a solution (the empty set) if \(t=0\); otherwise there is no solution. For \(n\geq 1\), take any step, say \(w_n\in W\). We now observe that if there is a solution \(S\), then either
\(w_n\) does not occur in \(S\) and the set \(W \setminus \{w_n\} = \{w_1,w_2,\ldots,w_{n-1}\}\) has a subset that sums to \(t\), or
\(w_n\) does occur in \(S\) and the set \(W \setminus \{w_n\} = \{w_1,w_2,\ldots,w_{n-1}\}\) has a subset that sums to \(t - w_n\).
That is, we can split the original problem instance (of size \(n\geq 1\)) into two new instances (of size \(n-1\)). The original instance has a solution if either (or both) of the new ones has a solution; if neither has a solution, then the original does not have one either.
Since we do not know in advance which of the new instances (if any) has a solution, we exhaustively search both. Each new instance is then solved, in turn, recursively, in the very same way; for example, \(W \setminus \{w_n\} = \{w_1,w_2,\ldots,w_{n-1}\}\) has a subset \(S\) that sums to \(t - w_n\) if either
\(w_{n-1}\) does not occur in \(S\) and the set \(W \setminus \{w_{n-1},w_n\} = \{w_1,w_2,\ldots,w_{n-2}\}\) has a subset that sums to \(t-w_n\), or
\(w_{n-1}\) does occur in \(S\) and the set \(W \setminus \{w_{n-1},w_n\} = \{w_1,w_2,\ldots,w_{n-2}\}\) has a subset that sums to \(t - w_{n-1} - w_n\).
And so on, recursively, until we are at a base case with \(n=0\), and it is trivial to decide whether a solution exists.
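The same reasoning can be summarized as a compact recurrence; the notation \(\mathrm{SS}\) below is ours, introduced only for illustration. Writing \(\mathrm{SS}(W,t)\) for the answer to the question “does some subset of \(W\) sum to \(t\)?”, we have
\[
\mathrm{SS}(\{\},\,t) = \text{“yes” if and only if } t = 0, \quad\text{and}
\]
\[
\mathrm{SS}(\{w_1,\ldots,w_n\},\,t) = \mathrm{SS}(\{w_1,\ldots,w_{n-1}\},\,t) \,\text{ or }\, \mathrm{SS}(\{w_1,\ldots,w_{n-1}\},\,t-w_n) \quad\text{for } n \geq 1.
\]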
An example search
Let us illustrate this recursive search on our small example instance \(W = \{-9,-3,7,12\}\) and \(t = 10\). We traverse the instances, one by one, exhaustively covering all possibilities; the indentation below indicates the depth of the recursion:
Can a subset of \(\{-9,-3,7,12\}\) sum to \(10\)?
  Can a subset of \(\{-3,7,12\}\) sum to \(10\)?
    Can a subset of \(\{7,12\}\) sum to \(10\)?
      Can a subset of \(\{12\}\) sum to \(10\)?
        Can a subset of \(\{\}\) sum to \(10\)? No
        Can a subset of \(\{\}\) sum to \(10-12=-2\)? No
        Neither branch (i.e., new instance) has a solution \(\to\) No
      Can a subset of \(\{12\}\) sum to \(10-7 = 3\)?
        Can a subset of \(\{\}\) sum to \(3\)? No
        Can a subset of \(\{\}\) sum to \(3-12=-9\)? No
        Neither branch (i.e., new instance) has a solution \(\to\) No
      Neither branch (i.e., new instance) has a solution \(\to\) No
    Can a subset of \(\{7,12\}\) sum to \(10-(-3) = 13\)?
      [Many lines omitted …]
    Neither branch (i.e., new instance) has a solution \(\to\) No
  Can a subset of \(\{-3,7,12\}\) sum to \(10-(-9) = 19\)?
    Can a subset of \(\{7,12\}\) sum to \(19\)?
      Can a subset of \(\{12\}\) sum to \(19\)?
        [Few lines omitted …]
        Neither branch (i.e., new instance) has a solution \(\to\) No
      Can a subset of \(\{12\}\) sum to \(19-7=12\)?
        Can a subset of \(\{\}\) sum to \(12\)? No
        Can a subset of \(\{\}\) sum to \(12-12=0\)? Yes
        The branch in which we included \(12\) has a solution \(\to\) Yes, \(\{12\}\)
      The branch in which we included \(7\) has a solution \(\to\) Yes, \(\{7,12\}\)
    The branch in which we omitted \(-3\) has a solution \(\to\) Yes, \(\{7,12\}\)
  The branch in which we included \(-9\) has a solution \(\to\) Yes, \(\{-9,7,12\}\)
Yes, a subset that sums to \(10\) is \(\{-9,7,12\}\)
Simple, but tedious. Therefore, clearly something that should be executed by a computer.
Implementation in Scala
Let us implement this procedure as follows. Again, we use an inner function so that the elementSelector function, which selects the element of the current set on which we branch next, does not need to be passed as an argument in every recursive call.
/** Select an arbitrary element in w */
def selectElementSimple(w: Set[Int], t: Int) = {
  require(!w.isEmpty)
  w.head
}

/**
 * Solve the subset sum problem with recursion.
 * The argument elementSelector is a function that, when called with
 * a non-empty set w and a value t, returns an element in w.
 */
def solve(steps: Set[Int], target: Int, elementSelector: (Set[Int], Int) => Int = selectElementSimple): Option[Set[Int]] = {
  def inner(w: Set[Int], t: Int): Option[Set[Int]] = {
    if (t == 0)
      // The empty set sums up to t when t = 0
      return Some(Set[Int]())
    else if (w.isEmpty)
      // An empty set cannot sum up to t when t != 0
      return None
    // Select one element in the set
    val e = elementSelector(w, t)
    val rest = w - e
    // Search for a solution without e
    val solNotIn = inner(rest, t)
    if (solNotIn.nonEmpty) return solNotIn
    // Search for a solution with e
    val solIn = inner(rest, t - e)
    if (solIn.nonEmpty) return Some(solIn.get + e)
    // No solution found here, backtrack
    return None
  }
  inner(steps, target)
}
Now we have, as expected, the following:
scala> solve(Set(-9,-3,7,12), 10)
res: Option[Set[Int]] = Some(Set(12, 7, -9))
scala> solve(Set(-9,-3,7,12), 6)
res: Option[Set[Int]] = None
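We can also let the program settle Example 3. Since a solution exists (for instance \(\{4,9,11,-17,20\}\), which the reader can verify: \(4+9+11-17+20=27\)), the call is guaranteed to return a solution wrapped in Some; we elide the printed subset below because the particular subset found may depend on the set's internal iteration order:
scala> solve(Set(4,-5,9,17,20,11,-17,63,255,-97,66,42,-33), 27)
res: Option[Set[Int]] = Some(...)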
Here is an illustration of the tree of recursive function calls for the case \(W = \{-9,-3,7,12\}\) and \(t = 10\). Observe that the calls proceed from top to bottom, left branch first. The red nodes indicate that it was detected that the current set \(W\) cannot sum to the current target \(t\). The green color indicates that the current target value \(t\) is 0 and thus a solution was found. The nodes in gray depict calls that we never made because a solution was found earlier.
When we consider the same set of steps \(W = \{-9,-3,7,12\}\) but with a different target \(t = 6\), there is no solution and the algorithm exhaustively searches through all the possible subsets:
We observe that when an instance has no solution, this algorithm makes \(2^{|W|+1}-1\) calls to the function inner: every call with a non-empty set and a non-zero target makes two further recursive calls, so the calls form a full binary tree with \(2^{|W|}\) leaves and \(2^{|W|+1}-1\) nodes in total. Thus, for such instances, the algorithm runs in time \(\Omega(2^n)\), where \(n\) is the number of steps in \(W\). (For instances that do have a solution, the algorithm can of course be lucky and find a solution after only a few calls.)
Pruning the recursion tree
We now illustrate how early detection that the current instance does not have a solution can help in pruning the number of recursive calls the algorithm makes.
Let us take a closer look at the large search tree for the instance \(W = \{-9,-3,7,12\}\) and \(t = 6\) above. In some nodes it is easy to see that the current instance has no solution. For example, when \(W'=\{12\}\) and \(t'=-1\), the set contains no negative numbers, so there is no way to reach the negative target \(-1\). Similarly, when \(W'=\{12\}\) and \(t'=15\), the positive number(s) in \(W'\) cannot sum to \(15\) even if we take all of them.
To translate these observations into general rules: if taking all the positive numbers does not suffice to reach the target (that is, \(\sum_{z \in W,\, z>0} z < t\)), there is no solution; similarly, if taking all the negative numbers does not suffice to reach the target (that is, \(\sum_{z \in W,\, z<0} z > t\)), there is no solution.
We can implement these rules in our method solve by adding the following case right after the case “else if (w.isEmpty) ...”:
else if (w.filter(_ > 0).sum < t || w.filter(_ < 0).sum > t)
  // The positive (negative) numbers cannot add up (down) to t
  return None
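For concreteness, here is a sketch of what the inner function looks like with the pruning rule in place (the surrounding solve, including elementSelector, is unchanged from the listing above):
def inner(w: Set[Int], t: Int): Option[Set[Int]] = {
  if (t == 0)
    // The empty set sums up to t when t = 0
    return Some(Set[Int]())
  else if (w.isEmpty)
    // An empty set cannot sum up to t when t != 0
    return None
  else if (w.filter(_ > 0).sum < t || w.filter(_ < 0).sum > t)
    // Prune: the positive (negative) numbers cannot add up (down) to t
    return None
  // Select one element in the set and branch as before
  val e = elementSelector(w, t)
  val rest = w - e
  val solNotIn = inner(rest, t)
  if (solNotIn.nonEmpty) return solNotIn
  val solIn = inner(rest, t - e)
  if (solIn.nonEmpty) return Some(solIn.get + e)
  return None
}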
We rerun on the same instance, and much to our satisfaction get a smaller search tree:
Of course, these additional rules can also help when a solution exists. Indeed, here is the new smaller search tree for the case we considered earlier:
Note
As a general principle, such pruning rules can be used to reduce the number of recursive calls that a search algorithm makes.
However, the rules do not come for free: it takes time to evaluate them, and if this investment in time does not lead to savings elsewhere (less time spent searching), we end up with an algorithm that, as a whole, runs slower than the algorithm without pruning rules.
A good strategy is to use expensive rules near the top of the tree (where savings can be substantial if large subtrees are eliminated) and to use only cheap rules, or no pruning at all, in the lower parts of the tree (where speed is of the essence, since there are typically far more nodes at the lower levels than at the higher levels).
Branching heuristics
Another common design consideration with recursive search is how to branch. For example, with Subset Sum, we can take any element from \(W\) and branch by (i) not including, and (ii) including that element. Since we have this freedom to choose any element we like, it is a good idea to put a little thought into how we choose our element and how this choice affects performance.
Indeed, rather than choosing an arbitrary element, perhaps we could choose more wisely. For example, we could choose an element that moves greedily in the direction of the target value \(t\): if \(t\) is positive, choose the maximum value in \(W\); otherwise, choose the minimum.
In particular, this greedy choice appears compatible with our earlier pruning rules; perhaps more pruning results if we greedily choose the element on which we branch?
Here is an implementation for greedy selection:
/** Select an element in w in a greedy way */
def selectElementGreedy(w: Set[Int], t: Int) = {
  require(!w.isEmpty)
  if (t > 0) w.max
  else w.min
}
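Because solve takes the element selector as its (optional) third argument, we can plug in the greedy selector without modifying the search itself. For instance, on the unsolvable instance of Example 2 we again get:
scala> solve(Set(-9,-3,7,12), 6, selectElementGreedy)
res: Option[Set[Int]] = None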
Let us run the new version of the algorithm on our old example with \(t=6\) (no solution); the resulting search tree is shown below:
In this particular case it appears we were lucky and the greedy choice does in fact help. To explore systematically whether (or when) one selection strategy is better than another, however, one should conduct a far more extensive set of experiments on a large number of instances. Furthermore, one should try to identify (by reasoning and mathematical analysis) what the worst-case instances for a particular selection strategy are. For example, maybe being greedy is, for some instances, a horribly bad idea?
Note
Observe that we did not provide any sound mathematical reasoning to justify that greedy selection would always be good or optimal. More generally, functions that decide how to branch a recursive search are called branching heuristics (“rules of thumb”) precisely because their general performance is difficult to analyse. (Indeed, if we completely understood how the search would behave before executing it, then we would not need to search in the first place; we would just directly aim for the solution!) In fact, in many cases the task of finding the provably best element to branch on is as difficult a problem as the whole original search problem.
Too complicated, and for no reason?
Observe that the scala.collection.immutable.Set class has the subsets method, which returns an iterator that can be used to enumerate all the subsets of a set and to test whether one of them sums up to \(t\). Using this method, we can write a very concise program for solving subset sum:
def solve(s: Set[Int], target: Int): Option[Set[Int]] =
  s.subsets.find(subset => subset.sum == target)
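This concise version can be exercised just like the recursive one; for instance, on the unsolvable instance of Example 2 we again get:
scala> solve(Set(-9,-3,7,12), 6)
res: Option[Set[Int]] = None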
So why did we go through all the pain with recursion, given that such a wonderfully concise solution is available? (Apart, of course, from the fact that we are studying recursion and its applications.)
Observe that the find method of the iterator returned by the subsets method goes through all the subsets blindly until a solution (if one exists) is found. In contrast, as our latest search tree above shows, already our rather rudimentary pruning rule allows the recursive method to skip many subsets from consideration:
else if (w.filter(_ > 0).sum < t || w.filter(_ < 0).sum > t)
  // The positive (negative) numbers cannot add up (down) to t
  return None
Furthermore, recursive search is a programming pattern that, once mastered, enables you to attack many other problems far more efficiently than merely iterating over all subsets, tuples, or permutations, and for each checking whether you have accidentally stumbled on a solution.
A word on hard problems and NP-completeness
Although the pruning technique and the branching heuristics described above allow the recursive method to avoid some candidates for a solution, they do not guarantee that the run-time of the algorithm would be small (or even reasonable) for all instances.
When we increase the instance size, the algorithm takes longer and longer to run. For example, you may want to try out what happens on instances with a few dozen steps whose target is hard, or impossible, to reach. (In case you get tired of waiting, remember that you can interrupt the current execution in the console with Ctrl+C, or simply shut it down.)
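The concrete stress-test instances are omitted here, but a sketch along the following lines produces a comparable one; the seed, the value range, and the number of steps are arbitrary choices of ours, not from the original material:
// Build a pseudo-random instance with (up to) 30 steps; duplicates,
// if any, are collapsed by the set. On such an instance the recursive
// search, even with pruning, may run for a very long time.
val rnd = new scala.util.Random(2105)   // arbitrary seed
val bigW = Set.fill(30)(rnd.nextInt(2000000) - 1000000)
solve(bigW, 1)                          // interrupt with Ctrl+C if it takes too long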
Of course, our small program is not completely bad: it can solve some very large instances very fast. For example, when \(t > \sum_{z \in W,\, z>0} z\), that is, when the target exceeds even the sum of all the positive steps, the pruning technique discovers immediately that there is no solution.
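For instance, assuming the pruning rule is in place, a call like the following (our own illustrative instance) returns immediately, since \(1+2+\cdots+100 = 5050 < 10000\):
scala> solve((1 to 100).toSet, 10000)
res: Option[Set[Int]] = None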
So what gives? Why did we put all this effort into studying a bad algorithm? Surely a better, more clever algorithm could solve any instance of subset sum that you can, say, comfortably print on a few pieces of paper, faster than you can blink your eye?
Currently, no one knows whether this is possible. No such algorithm is known.
NP-completeness. As innocent as the problem statement may appear, the subset sum problem is (when we allow an arbitrary number of arbitrarily large integers in the instances) one of the so-called NP-complete problems. These problems have in common the following features:
given a solution \(S\), it is easy to verify that \(S\) really is a solution,
we do not know of any method that would always (for an arbitrary instance given as input) solve the problem efficiently, which here means “in time that grows polynomially in the size of the instance, measured in bits under some very reasonable assumptions”, and
if we could solve any one of these problems efficiently, we could solve all of them efficiently.
A large number of NP-complete problems have been identified and extensively studied, yet no efficient algorithm (for any of the problems) has been found, and neither has a mathematical proof been found that would show that NP-complete problems do not admit efficient algorithms. Bluntly, we simply do not know, despite decades of effort, and despite the fact that a great number of problems of substantial practical relevance are NP-complete.
Hardness and undecidability. Beyond NP-completeness, there are problems that are even more difficult to solve by computer (which can be established by mathematical proof!) and also many problems that provably cannot be solved by computers! (More on these topics in the courses CS-C2160 Theory of Computation and CS-E4530 Computational Complexity Theory.)