What is recursive programming and how does it work? The math behind the technology

What is recursion in programming or software development?

If you've ever seen a Russian doll, you've experienced a visual example of recursion: an object that contains smaller versions of itself. In programming, recursion is a powerful technique that allows a function to call itself to solve complex problems in simple ways.

Recursive programming is an algorithmic technique used in software engineering to solve problems whose order of complexity depends on the size of the input. It is especially suited to the development of functions whose computing times would otherwise be high.

Examples:

  • Searching for an item in a list. Using recursion to search a list linearly is not efficient and takes up more memory; an iterative search using a loop is much more direct and faster.
  • Calculating the factorial of a number. By its very mathematical definition, this problem is naturally recursive, so its recursive implementation is direct and elegant.

Recursive programming is based on a technique in which a function calls itself to solve a problem. Instead of tackling the entire problem at once, the function breaks the problem into smaller subproblems and handles each of them recursively. This process continues until a base case or termination condition is reached, which stops the recursive calls and prevents it from looping indefinitely.

In each recursive call, the function works on a simplified or reduced version of the original problem. By solving this smaller subproblem, the program accumulates partial solutions that are eventually combined to solve the entire problem. In this way, recursion allows complex problems to be broken down into simpler steps, which the function can solve step by step automatically.

It is important for every recursive algorithm to clearly define its base condition, as without it, the function would keep calling itself indefinitely, resulting in a stack overflow error. Recursion is therefore a very powerful technique, but it requires care in its implementation to be efficient.
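
To make this concrete, here is a minimal sketch of a recursive factorial in Python (the function name is illustrative): the base case stops the recursion, and the recursive case reduces the problem.

def factorial(n):
    # Base case: stops the recursion
    if n == 0:
        return 1
    # Recursive case: reduce to a smaller subproblem
    return n * factorial(n - 1)

print(factorial(5))  # 120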

Recursive programming vs. iterative programming

  • In recursive programming, a function calls itself, directly or indirectly, to solve a problem. A problem is broken down into smaller subproblems, solving each one until a base case is reached.
  • In iterative programming, control structures such as loops (for, while) are used to repeat a sequence of instructions a defined number of times or until a condition is met.
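
As a sketch of the difference, here are two Python versions of the same task, summing the integers from 1 to n (the names are illustrative):

def sum_iterative(n):
    # Iterative: a loop repeats until the condition is met
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_recursive(n):
    # Recursive: the function calls itself on a smaller subproblem
    if n == 0:  # base case
        return 0
    return n + sum_recursive(n - 1)

print(sum_iterative(10), sum_recursive(10))  # 55 55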

Whether to use iterative or recursive techniques depends on the nature and complexity of the problem and, above all, on the time required to solve it, that is, on its order of complexity.

In general, iterative programming is more intuitive and easier to follow; it is usually the default choice and the one programmers tend to have more affinity with.

As a general rule, recursive programming is considered when the execution time of the solution grows exponentially rather than linearly with the size of the problem, since for a very large input the function could take too long to finish, or never finish at all. It is also used when solving the problem recursively takes much less time than the iterative version, or when it consumes fewer resources, such as RAM.

As we have seen so far, recursion is a powerful tool for solving complex problems; however, in some cases it can lead to exponential execution times if not used carefully. Techniques such as memoization can optimize recursion and avoid unnecessarily slow behavior.
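
As a minimal sketch of memoization in Python, the standard functools.lru_cache decorator caches results that have already been computed; without it, this Fibonacci function would take exponential time.

from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Base cases: F(0) = 0, F(1) = 1
    if n < 2:
        return n
    # Each value is computed only once thanks to the cache
    return fib(n - 1) + fib(n - 2)

print(fib(100))  # 354224848179261915075, computed almost instantly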

The order of complexity

The order of complexity is a measure that describes how the time or space an algorithm needs to run grows with the size of its input. It is used to analyze and compare the efficiency of different algorithms, which is essential when dealing with problems involving large amounts of data. There are two types of complexity: time and space.

  • Time complexity: Refers to the time it takes for an algorithm to execute as the size of the problem increases.
  • Space complexity: Refers to the amount of memory an algorithm requires as the problem size increases.

To express complexity, Big O notation (O(n)) is used, which provides a way to represent the asymptotic behavior of an algorithm, that is, how it behaves when the problem size tends to be large. Big O notation describes the worst-case scenario for the time or space required by an algorithm, based on an input of size n.

Examples of common complexity orders

Ordered from highest to lowest efficiency:

O(1) - Constant time:

  • The execution time does not depend on the size of the input.
  • Example: Access an element in a list or array by its index.
  • Whether the list has 10 or 1 million items, access is immediate.

O(log n) - Logarithmic time:

  • The running time increases logarithmically as the input size grows. This type of complexity is typical of algorithms that divide the problem into halves at each step.
  • Example: Binary search.
  • Here, at each step we halve the size of the list, so the number of operations grows logarithmically with respect to the size of the list.
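
A recursive binary search might look like the following Python sketch (the function name and parameters are illustrative); it assumes the list is sorted in ascending order.

def binary_search(items, target, low=0, high=None):
    # Returns the index of target in the sorted list, or -1 if absent
    if high is None:
        high = len(items) - 1
    if low > high:  # base case: empty range, not found
        return -1
    mid = (low + high) // 2
    if items[mid] == target:
        return mid
    if items[mid] < target:
        return binary_search(items, target, mid + 1, high)  # right half
    return binary_search(items, target, low, mid - 1)  # left half

print(binary_search([3, 9, 10, 27, 38, 43, 82], 27))  # 3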

O(n) - Linear time:

  • The execution time increases proportionally to the size of the input.
  • Example: Traverse a list to find an element.
  • Here, if there are 1,000 items, we will have to check all 1,000; if there are 1 million, the time increases proportionally.

O(n log n) - Nearly linear time:

  • This type of complexity commonly occurs in efficient sorting algorithms, such as Merge Sort or Quick Sort.
  • Example: Efficient sorting algorithm.
  • In this case, the list is repeatedly split into smaller parts (about log n levels of splitting), and each level involves traversing the entire list (n operations).

O(n²) - Quadratic time:

  • The execution time grows proportionally to the square of the input size. This is typical for algorithms that require nested comparisons or iterations.
  • Example: Selection or bubble sort algorithm.
  • If the list has 1,000 elements, the algorithm performs on the order of 1,000² operations (a million comparisons).

O(2^n) - Exponential time:

  • The execution time doubles with each unit increase in the input size. Algorithms with this complexity become inefficient very quickly.
  • Example: Calculating the Fibonacci series without optimization.

O(n!) - Factorial time:

  • The execution time grows factorially with the input size. It is extremely inefficient and occurs in combinatorial problems.
  • Example: Algorithm that generates all possible permutations of a set.

Why is it important to understand the order of complexity?

The complexity order helps you predict how your algorithm will behave when applied to large data sets. An algorithm with a complexity of O(n) or O(log n) can run efficiently even on millions of data points, while one with a complexity of O(n²) or worse can become unusable with large inputs.

For example, if you have a list of 10 million items, an O(n) algorithm like linear search may be manageable, but an O(n²) algorithm may take a very long time to complete. Choosing the right algorithm is crucial to maintaining acceptable performance.

Types of recursive algorithms

Recursion types are distinguished by the way problems are divided and the number of recursive calls that are made. Choosing the right type of recursion for a specific problem is key to improving algorithm efficiency and performance. Memory consumption and execution time are important factors to consider when deciding on the best recursive approach to a given problem.

1. Tail recursion

In tail recursion, the recursive call is the last operation that the function performs before returning the result. This means that there are no pending operations after the recursive call. This type of recursion can be efficient in terms of memory usage, since no intermediate state needs to accumulate on the execution stack; some programming languages optimize it for exactly this reason, avoiding excessive memory consumption.

  • Advantage: Allows memory optimization in languages that support tail recursion optimization.
  • Disadvantage: It is not always easy to convert an algorithm to tail recursion.
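
A tail-recursive factorial might look like the following Python sketch, where an accumulator parameter carries the partial result so that nothing remains pending after the recursive call. Note that CPython does not perform tail-call optimization, so the memory benefit applies mainly in languages that do (Scheme, for example).

def factorial_tail(n, accumulator=1):
    # Base case: the accumulated result is the answer
    if n == 0:
        return accumulator
    # The recursive call is the last operation: nothing is pending after it
    return factorial_tail(n - 1, accumulator * n)

print(factorial_tail(5))  # 120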

2. Non-tail recursion

In non-tail recursion, the recursive call is not the last instruction to be executed. After the recursive call, there are pending operations that need to be performed, such as multiplications or additions. This type of recursion requires more memory, as the system needs to store the state of each recursive call until all calls are completed.

  • Advantage: It is more flexible and can be applied to a wide variety of problems.
  • Disadvantage: It consumes more memory, since it must store information on the stack for each recursive call.

3. Binary recursion or recursive tree

In binary recursion, a function makes more than one recursive call at each step. This results in a tree structure, where each node gives rise to two (or more) subproblems. This type of recursion is common in problems such as calculating the Fibonacci series or in traversing hierarchical data structures such as trees.

  • Advantage: It adapts well to problems that naturally divide into two subproblems.
  • Disadvantage: It generates many recursive calls, which can increase execution time if not optimized.
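
The unoptimized Fibonacci function is the classic illustration: each call spawns two more, so the calls form a binary tree. A minimal Python sketch:

def fib_naive(n):
    # Base cases
    if n < 2:
        return n
    # Two recursive calls per step: the call graph is a binary tree
    return fib_naive(n - 1) + fib_naive(n - 2)

print(fib_naive(10))  # 55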

4. Multiple recursion

Multiple recursion is a generalization of binary recursion, where a function makes more than two recursive calls. This type of recursion is applied in problems where multiple subproblems are required to solve the original problem, such as generating all permutations of a set of elements.

  • Advantage: Useful for problems that require exploring multiple solutions or subproblems simultaneously.
  • Disadvantage: The complexity and number of recursive calls can grow exponentially, which can make the algorithm inefficient.
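
A minimal Python sketch of multiple recursion that generates all permutations of a list (the names are illustrative); each call makes one recursive call per remaining element.

def permutations(items):
    # Base case: the empty list has exactly one permutation
    if not items:
        return [[]]
    result = []
    for i, item in enumerate(items):
        # One recursive call per element: multiple recursion
        for rest in permutations(items[:i] + items[i + 1:]):
            result.append([item] + rest)
    return result

print(permutations([1, 2, 3]))  # 6 permutations, i.e. 3!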

5. Nested recursion

In nested recursion, a recursive call is made inside another recursive call. This means that the result of one recursive call is needed to compute the next call. This type of recursion can be more complex to understand and debug.

  • Advantage: Useful in problems where the intermediate results of one recursive call are used immediately in the next.
  • Disadvantage: It is harder to follow and can be confusing as there are multiple layers of recursive calls.
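
The Ackermann function is a classic example of nested recursion: one recursive call appears as an argument of another. A minimal Python sketch:

def ackermann(m, n):
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    # Nested recursion: the inner call feeds the outer call
    return ackermann(m - 1, ackermann(m, n - 1))

print(ackermann(2, 3))  # 9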

6. Divide and conquer recursion

In divide-and-conquer recursion, the large problem is divided into several smaller subproblems, which are solved recursively. The solutions of these subproblems are then combined to form the solution of the original problem. This approach is used in algorithms such as merge sort and binary search.

  • Advantage: It is very efficient when the problem can be divided into smaller parts in a balanced way.
  • Disadvantage: If the problem cannot be divided efficiently, recursion may become inefficient.

7. Mutual recursion

In mutual recursion, two or more functions call each other, forming a cycle of recursive calls between them. This is used when the problem requires multiple functions to call each other to solve dependent subproblems.

  • Advantage: Useful in cases where the program flow depends on several interconnected processes.
  • Disadvantage: It is difficult to follow and debug, as it can be difficult to understand the flow of control between functions that call each other.
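
A deliberately simple Python sketch of mutual recursion, where two functions determine the parity of a number by calling each other:

def is_even(n):
    if n == 0:  # base case
        return True
    return is_odd(n - 1)  # delegates to the other function

def is_odd(n):
    if n == 0:  # base case
        return False
    return is_even(n - 1)  # and back again

print(is_even(10), is_odd(7))  # True True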

The principle of induction

The relationship between recursion and the principle of induction is fundamental, since both are based on the idea of solving a problem by dividing it into smaller subproblems, and both depend on a “base case” that guarantees that the process terminates. It is common practice when designing a recursive solution to validate whether it complies with the principle of induction in order to determine whether the approach is correct or not.

Let's look at the relationship between the two concepts, starting from their definitions:

What is mathematical induction?

Mathematical induction or the principle of induction is a method of proof used to show that a proposition is true for all natural numbers (or non-negative integers). The principle of induction is based on two fundamental steps:

  • Base case: The proposition is shown to be true for an initial value, usually n = 0 or n = 1
  • Inductive step: It is assumed that the proposition is true for an arbitrary number n (inductive hypothesis), and then it is shown that, under that assumption, it is also true for n + 1

This guarantees that if the proposition is true for the base case and the inductive step is correct, the proposition is true for all natural numbers.

What is recursion?

As we have already seen in this article, recursion is a technique in programming and mathematics where a function or algorithm calls itself to solve smaller subproblems of the original problem. As in induction, recursion has two main components:

  • Base case: A condition that stops recursive calls and returns a direct result (i.e., the simplest solution to the problem).
  • Recursive case: The function calls itself with a smaller subset of the original problem, and uses the results of these recursive calls to construct the final solution.

Relationship between recursion and induction

The key connection between recursion and induction is that they both follow a similar structure in their problem solving. In fact, a recursive algorithm can be proven correct using mathematical induction. Let's see how:

Step 1: Base case

In recursion, the base case defines when the algorithm stops calling itself and solves the problem directly. This is the equivalent of the base case in mathematical induction, where the proposition is verified to be true for an initial value, such as n = 0.

Step 2: Recursive step and inductive step

In the recursive case, the function calls itself with a smaller version of the problem, until it eventually reaches the base case. This process is similar to the inductive step in mathematical induction, where we assume that the proposition is true for an arbitrary number n, and then prove that it is also true for n + 1.

In a recursive algorithm, it is assumed that the function correctly solves the subproblem (inductive hypothesis), and with this assumption, we construct the solution to the larger problem (equivalent to the inductive step in induction).

Example 1: Factorial

The calculation of the factorial of a number n is a classic example of recursion and can be proven using mathematical induction.

1. Recursive definition of the factorial:

n! =
\begin{cases}
1 & \text{if } n = 0 \\
n \times (n - 1)! & \text{if } n > 0
\end{cases}

2. Proof by induction:

  • Base case: For n = 0, we have 0! = 1, which is true by definition.
  • Inductive step: Suppose that the recursive definition correctly computes the factorial for n = k (inductive hypothesis), that is, that k! is known.

We now need to show that it is also correct for n = k + 1. Using the recursive definition of the factorial:

(k + 1)! = (k + 1) × k!

By the inductive hypothesis, k! is computed correctly, so (k + 1)! is computed correctly as well.

This completes the inductive step, proving that the recursive definition of the factorial is correct for all n.

In this example, the recursion of the factorial follows exactly the same structure as a proof by mathematical induction: the base case ensures that the process stops, and the recursive (inductive) step ensures that we can build larger solutions from smaller ones.

Example 2: Fibonacci series

The calculation of the Fibonacci series also follows this same relationship between recursion and induction:

1. Recursive definition of Fibonacci:

F(n) =
\begin{cases}
0 & \text{if } n = 0 \\
1 & \text{if } n = 1 \\
F(n - 1) + F(n - 2) & \text{if } n > 1
\end{cases}

2. Proof by induction:

  • Base case: For n = 0 and n = 1, we have F(0) = 0 and F(1) = 1, which are true by definition.
  • Inductive step: Suppose that F(n) is computed correctly for n = k and n = k - 1 (inductive hypothesis). We need to show that it is also correct for n = k + 1. Using the recursive definition of Fibonacci:

F(k + 1) = F(k) + F(k - 1)

By the inductive hypothesis, F(k) and F(k - 1) are correct, so F(k + 1) is also correct.

This is another example where recursion follows an inductive principle.

Examples of complex problems solved by recursion

1. Divide and conquer

A classic example of divide and conquer is the Merge Sort algorithm, which is used to sort lists.

Merge Sort Steps:

1. Split: If the list has more than one element, it is split into two halves.
2. Solve: The same process is applied recursively to each half until each sublist has a single element (already sorted).
3. Merge: The ordered sublists are combined to form a single ordered list.

Example:

For list [38, 27, 43, 3, 9, 82, 10]:

1. Split: Splits into [38, 27, 43] and [3, 9, 82, 10], and then into smaller lists down to single-element lists.
2. Solve: One-element lists are now sorted.
3. Merge: The sorted sublists are merged in order until [3, 9, 10, 27, 38, 43, 82] is obtained.

The complexity of Merge Sort is O(n log n), since it divides logarithmically and merges in linear time.
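
Putting the three steps together, one possible Python implementation might look like this sketch (the function names are illustrative):

def merge_sort(items):
    # Base case: a list of 0 or 1 elements is already sorted
    if len(items) <= 1:
        return items
    # Split: divide the list into two halves
    mid = len(items) // 2
    left = merge_sort(items[:mid])   # solve each half recursively
    right = merge_sort(items[mid:])
    # Merge: combine the two sorted halves
    return merge(left, right)

def merge(left, right):
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])   # whatever remains is already sorted
    merged.extend(right[j:])
    return merged

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # [3, 9, 10, 27, 38, 43, 82]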

2. Optimization problems

A classic example of an optimization problem is the Knapsack Problem, where the goal is to maximize the total value of the items that can be included in a knapsack without exceeding its weight capacity.

Problem Description:

Given a set of objects, each with a weight and a value, the problem is to determine the combination of objects that maximizes the total value in a knapsack with a limited weight capacity.

  • Input:
    • A knapsack capacity, W (the maximum weight it can carry).
    • A list of objects, each with:
      • A weight, wi.
      • A value, vi.
  • Objective: Maximize the total value of the selected items without exceeding the capacity of the knapsack.


Types of knapsack problems:

  1. 0/1 knapsack: Each item is either included in the knapsack completely or not at all. Items cannot be split.
  2. Fractional knapsack: Items may be split, that is, you can take part of an object.


Example:

Suppose you have a knapsack with a capacity of 15 kg and the following items:

  • Object 1: Weight = 10 kg, Value = 60.
  • Object 2: Weight = 5 kg, Value = 40.
  • Object 3: Weight = 3 kg, Value = 30.


Question: Which items should you include in the knapsack to maximize the total value without exceeding the 15 kg limit?

0/1 solution (no splitting):

If you select objects 1 and 2, the total weight is 15 kg and the total value is 60 + 40 = 100, which is the optimal solution in this case.

Methods to solve the knapsack problem:

  1. Dynamic programming (for the 0/1 case): An efficient approach that solves the problem exactly, using a table that stores the results of subproblems; a recursive, memoized variant is sketched after this list.
  2. Greedy algorithms (for the fractional case): Select objects by their value per unit of weight, which is optimal when splitting objects is allowed.
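
As an illustration of the recursive approach to the 0/1 case, here is a minimal, memoized Python sketch (the names and structure are assumptions, not a canonical implementation):

from functools import lru_cache

def knapsack(weights, values, capacity):
    @lru_cache(maxsize=None)
    def best(i, remaining):
        # Base case: no items left to consider
        if i == len(weights):
            return 0
        # Option 1: skip item i
        result = best(i + 1, remaining)
        # Option 2: take item i, if it still fits
        if weights[i] <= remaining:
            result = max(result,
                         values[i] + best(i + 1, remaining - weights[i]))
        return result

    return best(0, capacity)

# The example above: capacity 15 kg, objects (10 kg, 60), (5 kg, 40), (3 kg, 30)
print(knapsack((10, 5, 3), (60, 40, 30), 15))  # 100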


Importance:

This problem is fundamental in combinatorial optimization, and its applications include resource allocation, financial planning, and other areas where profit maximization is sought subject to constraints.

And that's it for the introduction to recursive programming.

We hope you liked it. Don't forget to follow us on LinkedIn, where we announce each new article, and on the rest of the social networks, where we will start to upload content about science, but different.

https://www.linkedin.com/company/the-science-driven-company/

https://www.instagram.com/science.driven/

https://www.tiktok.com/@science.driven

https://www.youtube.com/@ScienceDriven
