Recursion is used when it makes the solution clearer. There’s no performance benefit to using recursion; in fact, loops are sometimes better for performance.

It is easy to accidentally write an endless loop if the base case is missing or is never reached

every recursive function has two parts: a base case and a recursive case
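A minimal sketch of the two parts (in Python, for illustration):

```python
def sum_to(n):
    """Sum the integers 1..n recursively."""
    if n == 0:                 # base case: nothing left to add
        return 0
    return n + sum_to(n - 1)   # recursive case: a strictly smaller problem
```

Each recursive call moves the argument closer to the base case, which is what guarantees termination.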

https://www.quora.com/What-is-the-difference-between-dynamic-programming-and-divide-and-conquer

find the base case first, then find a way to reduce the problem down to it

in divide and conquer, the subproblems are independent — the sub-answers don't depend on each other

Merge Sort, Binary Search, etc
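Merge sort is the standard example of independent subproblems: each half can be sorted without consulting the other. A Python sketch:

```python
def merge_sort(xs):
    # Base case: lists of length 0 or 1 are already sorted.
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2
    # Divide: the two halves are independent subproblems.
    left = merge_sort(xs[:mid])
    right = merge_sort(xs[mid:])
    # Conquer: merge the two independently sorted halves.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]
```

Nothing computed for the left half is ever needed while sorting the right half — that independence is what separates divide and conquer from dynamic programming.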

inductive proof?

Recursion is like asking the person in the row in front of you what row they are in and just adding one. He asks the person in front of him and adds one, then tells you.

A recursive sub-function never solves the exact same problem as the original function, but always a simpler version of the original problem. At some point, this version becomes so simple that it can be solved easily, and that is when the recursion ends. This leads us to the most important use case for recursion: use recursion to reduce the complexity of a problem at hand. The trick is to identify and solve the simpler problem, then express the problem in terms of that simpler case, then apply recursion until that case is reached and solved. An unknown number of nested loops is a common characteristic of all problems that are recursive in nature, and should give you a hint that a recursive solution is required. ^{1}
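Flattening an arbitrarily nested list is a concrete case of the "unknown number of nested loops" hint — since the nesting depth isn't known in advance, no fixed number of loops can handle it. A Python sketch:

```python
def flatten(xs):
    """Flatten arbitrarily nested lists into a single flat list."""
    out = []
    for x in xs:
        if isinstance(x, list):
            out.extend(flatten(x))  # simpler problem: one level of nesting less
        else:
            out.append(x)
    return out
```

The base case is an element that is not a list; each recursive call works on a strictly shallower structure.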

Also, in Scheme and other functional languages, function calls in the tail position are compiled or otherwise treated as jumps. So in Scheme this will actually produce an infinite loop with no stack overflow:

```
(define (loop)
  (loop))
(loop)
```

Most programming languages do not have this feature (tail-call optimization).
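Python is one such language: the same self-call that loops forever in Scheme blows the stack in Python, and the standard workaround is to rewrite the tail call as a loop. A sketch (the function names here are made up for illustration):

```python
import sys

def loop_recursive(n):
    if n == 0:
        return "done"
    return loop_recursive(n - 1)   # a tail call, but Python does not optimize it

# Deep enough recursion exceeds Python's stack:
try:
    loop_recursive(10 * sys.getrecursionlimit())
except RecursionError:
    overflowed = True

# The workaround: express the same tail call as a loop.
def loop_iterative(n):
    while n != 0:
        n -= 1
    return "done"
```

In Scheme the recursive version and the loop compile to the same jump; in Python only the loop is safe at depth.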

Pass "who to return the result to" as an argument (continuation-passing style)
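This note describes continuation-passing style: instead of returning, each call is handed a function — the "who to return the result to" — and passes its answer along to it. A hypothetical Python sketch of factorial in CPS:

```python
def fact_cps(n, k):
    """Factorial in continuation-passing style: k is 'who to return to'."""
    if n == 0:
        k(1)                                  # hand the base answer to the continuation
    else:
        fact_cps(n - 1, lambda r: k(n * r))   # extend the continuation with this frame's work
```

To use it, supply a continuation that captures the result, e.g. `fact_cps(5, results.append)` with `results = []`.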

From Here

“To iterate is human, to recurse divine.” -Peter Deutsch

Recursion is a natural idea. When humans perform repetitive tasks, we don’t assign state variables, and we generally don’t keep counters. We just keep doing the same thing over and over until we arrive at some kind of terminating condition. This “pattern” forms the basis of recursive programming.

In “The Little Schemer” Friedman and Felleisen lay out the following basic pattern for writing recursive functions.

Establish a terminating condition. Accumulate the results of the recursion. Change some argument in the recursive call to move closer to the terminating condition. This is a very natural, very basic way of going about writing recursive functions. Take for example a function that performs exponentiation. Here are the questions we might ask ourselves if we were to perform the computation.

Is the exponent 1? Then obviously the answer is the number itself.

Otherwise, the answer is the number times the number to the original power minus one.

In code, we have

```
(define (expt n p)
  (if (= p 1)
      n
      (* n (expt n (- p 1)))))
```
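One caveat: with 1 as the terminating condition, the function never terminates for an exponent of 0. A Python sketch of the same procedure with that case added:

```python
def expt(n, p):
    """n raised to the non-negative integer power p."""
    if p == 0:                  # extra base case: n^0 = 1
        return 1
    return n * expt(n, p - 1)   # recursive case, as in the Scheme version
```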

Obviously this is a basic example, but the concept isn’t difficult to grasp. Why then do students have such a hard time?

In my experience, effective recursive thinking requires that two important mental processes come naturally. The first is understanding the process that recursive procedures generate. Continuing the example above, we need to be able to quickly understand that that procedure will generate a process that looks like:

```
(* n (expt n (- p 1)))
(* n (* n (expt n (- p 2)))) ;; etc...
;;; while an iterative process just updates its state at each step:
;;; ((* acc n) (- p 1))
```

at every step in its computation. This can be hard for students, as an iterative process is comparatively much easier to understand at every “step” in a given computation. Recursive procedures in some sense need to be understood as a gestalt; thinking about them as a series of computations can be somewhat frustrating. This brings us to the next important shift in mental processes.
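The contrast between the two process shapes can be made concrete (Python, with a hypothetical accumulator version alongside the article's recursive one):

```python
def expt_rec(n, p):
    # Recursive process: the multiplications are deferred, so the
    # amount of pending work grows with p.
    if p == 1:
        return n
    return n * expt_rec(n, p - 1)

def expt_iter(n, p, acc=1):
    # Iterative process: all state lives in (p, acc); every step has
    # the same shape, only the argument values change.
    if p == 0:
        return acc
    return expt_iter(n, p - 1, acc * n)
```

The first version must be grasped as a whole expression that unfolds and then collapses; the second can be read one step at a time.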

Thinking recursively requires a shift from thinking imperatively to thinking declaratively (albeit not a very big one). We become concerned with the structure of the problem. Recursion forces this on us by forcing us to define the problem in terms of itself. This can be somewhat “shocking” at first, but becomes routine with practice.

Now that we’ve looked at why recursion might be difficult, we’re going to look at why none of these reasons is a reason to call recursion “hard.” IMHO, recursion is much more natural than iteration and ought to be taught first. While practicing programmers may not use recursion very often, understanding recursion is critical to understanding the nature of computation.

The main barrier to understanding recursion is understanding that most recursive functions follow a basic pattern (mentioned above.) I think that the lack of understanding in the student body concerning recursion is a pedagogical issue rather than a conceptual one.