The Evolution of Algorithms

What is an algorithm?

An algorithm is a finite, precisely defined set of rules for solving a problem; it can often be expressed as a mathematical formula or as a step-by-step procedure. The word also refers to the procedure a computer follows when solving a problem (a computer algorithm). Algorithms describe how to find solutions to problems with the help of computers.

The idea of an algorithm is ancient: Euclid's procedure for the greatest common divisor is a classic example, and the word itself derives from the name of the ninth-century mathematician Muhammad ibn Musa al-Khwarizmi. The theoretical foundations were laid in the twentieth century, largely in response to questions posed by David Hilbert, when Kurt Gödel, Alonzo Church and Alan Turing made precise what it means to solve a problem by a mechanical procedure (and showed that not every precisely stated problem can be solved that way). Even so, for many complicated real-life problems, such as finding a good strategy in a chess game or the best route between two points on Earth, we can use an algorithm.

In the 20th century the notion of an algorithm became closely connected with digital computers, which are designed to follow such procedures step by step. A sequence of instructions written for a computer is called a computer program, or simply a program.

The branch of computer science dealing with the design, analysis and use of algorithms is called algorithmics.

What are the Basic Algorithms in Computer Science?

Basic algorithms are used in the most common situations. They can be divided into two groups:

Discrete algorithms, which solve problems that can be reduced to a finite number of steps

Continuous (numerical) algorithms, which solve problems by approximation: each step refines the previous estimate, and in principle the process could be continued indefinitely (for example, computing more and more digits of a number). These algorithms are based on methods discovered by mathematicians and physicists, such as Newton's method and the calculus of Newton and Leibniz. A small sketch follows below.
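
To make the continuous kind concrete, here is a minimal sketch of Newton's method for approximating a square root. The function name, starting guess and tolerance are illustrative choices, and the input is assumed to be non-negative.

```python
def newton_sqrt(a, tolerance=1e-12, max_iterations=100):
    """Approximate sqrt(a), a >= 0, with Newton's iteration x_{n+1} = (x_n + a/x_n) / 2."""
    x = a if a > 1 else 1.0            # any positive starting guess works
    for _ in range(max_iterations):
        next_x = 0.5 * (x + a / x)     # one Newton step for f(x) = x^2 - a
        if abs(next_x - x) < tolerance:
            return next_x              # stop when successive estimates barely change
        x = next_x
    return x

print(newton_sqrt(2.0))  # ~1.4142135623730951
```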

Searching Algorithms

An algorithm that searches through a given range of numbers (or a collection of items) for a search target is known as a searching algorithm. Examples of such algorithms are binary search (for sorted arrays) and breadth-first search (for graphs).

There are several ways to search for a target, depending on how the collection is organised. The simplest is linear search: examine the elements one by one until the target is found or the collection is exhausted.

The efficiency of a searching algorithm depends on the size of the collection and on how it is organised. The larger the collection, the more time an elementary (linear) search needs, since it inspects the items one by one. For a sorted array we can do much better with the binary search algorithm, which repeats two steps:

  1. In the initial step we compare the target with the middle element of the current interval (at first, the whole array).
  2. If the target is smaller than the middle element, it can only lie in the left half of the interval; if it is larger, it can only lie in the right half. We discard the other half and repeat steps 1 and 2 on the remaining interval, narrowing it down until the target is found (or the interval becomes empty). A code sketch follows this list.
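
Written out in code, the procedure above might look like the following minimal sketch; the function name and the convention of returning -1 for a missing target are illustrative choices, and the input list is assumed to be sorted.

```python
def binary_search(sorted_items, target):
    """Return the index of target in sorted_items, or -1 if it is absent."""
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2           # middle element of the current interval
        if sorted_items[mid] == target:
            return mid
        if target < sorted_items[mid]:
            high = mid - 1                # target can only lie in the left half
        else:
            low = mid + 1                 # target can only lie in the right half
    return -1

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # 3
```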

What is a Shortest Path?

A shortest path is a sequence of choices (edges) that leads from one point in a network to another with the smallest possible total length or cost. Shortest paths from a given starting point can be found with Dijkstra's algorithm, provided the edge weights are non-negative.
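
A compact sketch of Dijkstra's algorithm with a priority queue is shown below; the adjacency-dictionary representation of the graph and the function name are assumptions made for illustration, and all edge weights are assumed non-negative.

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from source in a graph with non-negative weights.

    graph: dict mapping node -> list of (neighbor, weight) pairs.
    """
    distances = {source: 0}
    queue = [(0, source)]                       # (distance so far, node)
    while queue:
        dist, node = heapq.heappop(queue)
        if dist > distances.get(node, float("inf")):
            continue                            # stale queue entry, skip it
        for neighbor, weight in graph.get(node, []):
            candidate = dist + weight
            if candidate < distances.get(neighbor, float("inf")):
                distances[neighbor] = candidate
                heapq.heappush(queue, (candidate, neighbor))
    return distances

graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 2)], "C": []}
print(dijkstra(graph, "A"))  # {'A': 0, 'B': 1, 'C': 3}
```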

What is a Sorting Algorithm?

Sorting algorithms are techniques used to order items. In computer science, merge sort and quicksort are classic examples of the divide-and-conquer approach. Sorting can be done by bubble sort, merge sort, heap sort, or quicksort; in practice, quicksort is often the fastest general-purpose choice, and it relies on the divide-and-conquer method.
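
As an illustration of the divide-and-conquer idea, here is a short, non-in-place quicksort sketch; real library implementations partition in place and choose pivots more carefully.

```python
def quicksort(items):
    """Sort a list with the divide-and-conquer quicksort scheme."""
    if len(items) <= 1:
        return items                        # a list of 0 or 1 items is already sorted
    pivot = items[len(items) // 2]          # divide: pick a pivot ...
    smaller = [x for x in items if x < pivot]
    equal   = [x for x in items if x == pivot]
    larger  = [x for x in items if x > pivot]
    return quicksort(smaller) + equal + quicksort(larger)   # conquer and combine

print(quicksort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```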

Ordering Algorithms

An algorithm for ordering a given set of objects is known as a sorting algorithm. Different algorithms are used for the task:

Top-down (most-significant-digit first): it starts with the most significant digit and uses it to split the numbers into groups; each group is then split again by the next digit, and so on, until every group is fully ordered. For example, in a set of 100 records we would first group the records by their leading digit and then sort each group by the digits that follow.

Bottom-up (least-significant-digit first): it starts with the least significant digit, ordering all the numbers by that digit, then re-orders them by the next digit, and continues with higher and higher digits until the most significant digit has been processed. The method works because each pass is stable: numbers that agree on the higher digits keep the relative order established by the lower ones.

The radix sort algorithm is used for ordering numbers written in a positional number system, for example in our usual decimal system.

The great advantage of these algorithms is that they are simple and fast. A disadvantage is that, in their basic form, they assume every key has the same number of digits; to sort 5-digit and 6-digit numbers together, the shorter ones must first be padded (for example with leading zeros), as the sketch below does implicitly.
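
A minimal sketch of the bottom-up (least-significant-digit) radix sort for non-negative decimal integers; missing high-order digits are simply treated as zero, which is the padding mentioned above. The function name and the example data are illustrative.

```python
def radix_sort(numbers):
    """Sort non-negative integers by processing decimal digits from least to most significant."""
    if not numbers:
        return []
    digits = len(str(max(numbers)))              # width of the longest number
    for position in range(digits):
        buckets = [[] for _ in range(10)]        # one bucket per decimal digit 0-9
        for n in numbers:
            digit = (n // 10 ** position) % 10   # digit at the current position
            buckets[digit].append(n)
        numbers = [n for bucket in buckets for n in bucket]   # stable re-collection
    return numbers

print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))  # [2, 24, 45, 66, 75, 90, 170, 802]
```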

What is Recursion?

Recursion is a method of solving a problem in which the solution depends on solutions to smaller instances of the same problem: a procedure calls itself on a reduced input. Every recursive algorithm needs a base case, a smallest instance that is solved directly; without one the recursion never terminates (so-called unconditional or infinite recursion).

Whether recursion is the natural approach, and how many recursive calls are needed, depends on the type of problem being solved.
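
A classic minimal example is the recursive factorial: the problem is reduced to a smaller instance of itself, and the base case guarantees that the recursion terminates.

```python
def factorial(n):
    """n! computed recursively: each call solves a smaller instance of the same problem."""
    if n <= 1:
        return 1                    # base case: stops the recursion
    return n * factorial(n - 1)     # recursive case: reduce to a smaller problem

print(factorial(5))  # 120
```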

What is a Subroutine?

Algorithms often contain subroutines: smaller, self-contained procedures that are invoked (called) from different places. A program can be split into several subroutines that are called as needed, so that common logic is written once and reused. For example, two routines that differ only in details can both call the same shared subroutine instead of duplicating its code.
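
A tiny sketch of the idea: shared logic is factored into one subroutine that two different routines call. The function names and the greeting example are made up purely for illustration.

```python
def format_greeting(name, salutation):
    """Shared subroutine: both routines below reuse this instead of duplicating the code."""
    return f"{salutation}, {name}!"

def morning_routine(name):
    return format_greeting(name, "Good morning")

def evening_routine(name):
    return format_greeting(name, "Good evening")

print(morning_routine("Alice"))   # Good morning, Alice!
print(evening_routine("Bob"))     # Good evening, Bob!
```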

What is a Flow-Chart?

There are three traditional methods of describing computer programs: the flow chart, the function table, and the program listing. A flow chart is a graphical representation of a computation: a diagram whose blocks show the individual operations and whose arrows show the order in which they are performed and how the blocks are connected. Each step of the algorithm is represented on the flow chart by a node, a "unit" or "point" of the diagram.

What is a Turing Machine?

A Turing machine is a mathematical model of a simple computer. It can represent any algorithm that proceeds in discrete, step-by-step fashion. The simplest Turing machine works with a single tape: at each step it reads the symbol under its head, writes a symbol, moves the head one position to the left or right, and changes its internal state.
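
To make the model concrete, below is a minimal simulator for a toy Turing machine that flips every bit on its tape and halts on the blank symbol. The state names, transition table and blank marker are illustrative assumptions, not a standard encoding.

```python
def run_turing_machine(tape, transitions, start_state, halt_state, blank="_"):
    """Simulate a one-tape Turing machine.

    transitions: dict mapping (state, symbol) -> (new_symbol, move, new_state),
    where move is 'L' or 'R'.
    """
    tape = list(tape)
    head, state = 0, start_state
    while state != halt_state:
        symbol = tape[head] if 0 <= head < len(tape) else blank
        new_symbol, move, state = transitions[(state, symbol)]
        if 0 <= head < len(tape):
            tape[head] = new_symbol
        head += 1 if move == "R" else -1
    return "".join(tape)

# Toy machine: walk right, flipping each bit, and halt on the blank symbol.
flip_bits = {
    ("scan", "0"): ("1", "R", "scan"),
    ("scan", "1"): ("0", "R", "scan"),
    ("scan", "_"): ("_", "R", "halt"),
}
print(run_turing_machine("0110_", flip_bits, "scan", "halt"))  # 1001_
```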

What is Bin Packing?

Bin packing is the problem of packing objects into bins of fixed capacity using as few bins as possible. A bin packing algorithm can be used to optimize storage space for a set of items: a certain number of discrete items must be placed into a finite number of bins, each having a finite capacity.

The bin packing problem is NP-hard (its decision version is NP-complete). This means that if there existed a polynomial-time algorithm solving bin packing exactly, there would be a polynomial-time algorithm for every problem in the class NP.

Bin packing problems appear in many real-life situations, such as farming, shipping, and waste disposal.

What is the Difference between Bin Packing and Bin Covering?

Bin packing is the problem of assigning a set of items to bins so that the total size of the items packed into each bin is at most the capacity of the bin, using as few bins as possible. Bin covering is the opposite problem: a bin counts as covered when the total size of the items in it is at least the bin's capacity, and the goal is to cover as many bins as possible.

What are Heuristics?

Heuristic methods, or heuristic algorithms, are methods that do not guarantee an optimal solution, but find a solution with a reasonable expectation of being optimal, or find a close-to-optimal solution in reasonable time.
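
The first-fit decreasing rule for bin packing is a textbook heuristic of exactly this kind: it does not guarantee the minimum number of bins, but it runs fast and is usually close to optimal. A minimal sketch, with illustrative item sizes and capacity:

```python
def first_fit_decreasing(item_sizes, capacity):
    """Greedy bin-packing heuristic: place each item (largest first) into the first bin it fits."""
    bins = []                                  # each bin is a list of item sizes
    for size in sorted(item_sizes, reverse=True):
        for b in bins:
            if sum(b) + size <= capacity:      # first bin with enough remaining room
                b.append(size)
                break
        else:
            bins.append([size])                # no existing bin fits: open a new one
    return bins

print(first_fit_decreasing([4, 8, 1, 4, 2, 1], capacity=10))  # [[8, 2], [4, 4, 1, 1]]
```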

What is a Dynamic Programming Algorithm?

Dynamic programming is an efficient technique for solving recurrence and optimization problems in which we have to find an optimum solution satisfying certain constraints. It applies when a problem has two properties: it can be expressed by a recurrence, and its sub-tasks overlap, so each sub-result can be computed once, stored, and reused.
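
A minimal sketch of both ingredients, using the Fibonacci recurrence F(n) = F(n-1) + F(n-2): the sub-tasks overlap heavily, and caching (memoizing) each sub-result turns an exponential-time recursion into a linear-time one.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Fibonacci via dynamic programming: results of overlapping sub-tasks are cached."""
    if n < 2:
        return n                     # base cases F(0) = 0, F(1) = 1
    return fib(n - 1) + fib(n - 2)   # recurrence; each sub-result is computed only once

print(fib(50))  # 12586269025
```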

What is a Meta-Algorithm?

Meta-Algorithm is a set of meta-rules that are used to create an algorithm. A meta-rule is a rule for creating rules. Meta-algorithms can contain meta-rules that are used for creating algorithms and other kinds of objects such as data structures or design diagrams. Meta-algorithms can be utilized in fields, such as mathematics and computer science, where finding and proving correct solutions to problems are time consuming.

What is an Evolution Algorithm?

Evolutionary algorithms are a class of computational methods that use selection, mutation and recombination to evolve solutions. In an evolutionary algorithm, each generation of the population is produced by copying individuals from the previous generation (favouring the fitter ones) and randomly changing some of their parameters. The resulting individuals are then ranked by a fitness function, and the process is repeated until the objectives of the evolution have been achieved.
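
A toy sketch of that loop, evolving a bit string toward all ones (the classic "OneMax" problem); the population size, mutation rate and selection scheme are illustrative choices, not recommended settings.

```python
import random

def evolve_onemax(length=20, population_size=30, generations=100, mutation_rate=0.05):
    """Tiny evolutionary algorithm: fitness is the number of 1s in a bit string."""
    population = [[random.randint(0, 1) for _ in range(length)]
                  for _ in range(population_size)]
    for _ in range(generations):
        population.sort(key=sum, reverse=True)           # rank by fitness (count of 1s)
        if sum(population[0]) == length:
            break                                        # objective reached
        parents = population[: population_size // 2]     # selection: keep the fitter half
        children = []
        for parent in parents:                           # copy each parent, then mutate it
            child = [1 - bit if random.random() < mutation_rate else bit
                     for bit in parent]
            children.append(child)
        population = parents + children
    return max(population, key=sum)

print(evolve_onemax())  # usually the all-ones string [1, 1, ..., 1]
```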

Evolutionary algorithms are useful because they can perform complex computations and attack problems that a human could not, as a rule, solve directly. Another reason is that, by using evolution, a problem can often be solved quickly and efficiently with little human intervention (although some problems do require it).

What is Genetic Programming?

Genetic programming is used to solve problems that involve a large number of similar requirements. To obtain solutions to such problems, each candidate solution can be represented as an individual: a particular configuration mapping input values to output values.

Genetic programming software works with candidate programs rather than fixed-length strings of genes. Each candidate is usually represented as an expression tree: the internal nodes are functions or operators, the leaves are variables and constants, and mutation and crossover operate on whole subtrees.

Genetic programming (GP) is a programming technique inspired by biological evolution. It uses the Darwinian principles of selection and reproduction to generate computer programs that solve a given problem automatically. For some problems the resulting programs are surprisingly compact and effective, and can be competitive with hand-written solutions.
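
A bare-bones sketch of the tree representation that most GP systems use, together with subtree mutation; the function set, terminal set and example input are made-up, and a full GP system would add crossover, a fitness measure and a selection loop.

```python
import operator
import random

FUNCTIONS = {"+": operator.add, "-": operator.sub, "*": operator.mul}
TERMINALS = ["x", 1, 2, 3]

def random_tree(depth=3):
    """Grow a random expression tree: internal nodes are operators, leaves are terminals."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    return [random.choice(list(FUNCTIONS)), random_tree(depth - 1), random_tree(depth - 1)]

def evaluate(tree, x):
    """Evaluate an expression tree for a given input value x."""
    if tree == "x":
        return x
    if not isinstance(tree, list):
        return tree                               # numeric constant leaf
    op, left, right = tree
    return FUNCTIONS[op](evaluate(left, x), evaluate(right, x))

def mutate(tree, depth=2):
    """Subtree mutation: replace a randomly chosen subtree with a fresh random one."""
    if not isinstance(tree, list) or random.random() < 0.3:
        return random_tree(depth)
    op, left, right = tree
    if random.random() < 0.5:
        return [op, mutate(left, depth), right]
    return [op, left, mutate(right, depth)]

candidate = random_tree()
print(candidate, "->", evaluate(candidate, x=2))
print(mutate(candidate))
```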

What is Particle Swarm Optimization?

Particle swarm optimization (PSO) is a population-based stochastic optimization algorithm inspired by social behaviour such as bird flocking and fish schooling. The algorithm was introduced in 1995 by Kennedy and Eberhart and is usually grouped with the evolutionary and other nature-inspired algorithms.

PSO can be used to find optimal or near-optimal solutions to many different problems, including wireless network design, online storage system design, video game playing strategies, job scheduling, stock trading and robot motion planning.
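
A compact one-dimensional PSO sketch minimizing a simple quadratic function; the inertia and acceleration coefficients are typical textbook values used here only for illustration.

```python
import random

def pso_minimize(f, bounds=(-10.0, 10.0), particles=20, iterations=100,
                 inertia=0.7, cognitive=1.5, social=1.5):
    """One-dimensional particle swarm optimization of f."""
    low, high = bounds
    position = [random.uniform(low, high) for _ in range(particles)]
    velocity = [0.0] * particles
    personal_best = position[:]                 # best point each particle has seen
    global_best = min(position, key=f)          # best point the whole swarm has seen
    for _ in range(iterations):
        for i in range(particles):
            r1, r2 = random.random(), random.random()
            velocity[i] = (inertia * velocity[i]
                           + cognitive * r1 * (personal_best[i] - position[i])
                           + social * r2 * (global_best - position[i]))
            position[i] += velocity[i]
            if f(position[i]) < f(personal_best[i]):
                personal_best[i] = position[i]
                if f(position[i]) < f(global_best):
                    global_best = position[i]
    return global_best

print(pso_minimize(lambda x: (x - 3) ** 2))  # close to 3.0
```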

What is the Evolution of Algorithms?

In 2010, Attila Benko, Gyorgy Dosa and Zsolt Tuza introduced a new method called the “Evolution of Algorithms”, published at the 2010 IEEE Fifth International Conference on Bio-Inspired Computing: Theories and Applications (BIC-TA): https://ieeexplore.ieee.org/document/5645312 . Benko, Dosa and Tuza dealt with a new problem that they called Bin Packing/Covering with Delivery: they looked for not only a good, but a “good and fast” packing or covering.

They designed polynomial-time algorithms that find exact offline solutions on a large class of problem instances, proved non-approximability in general, and also proposed a new method, which they called the “Evolution of Algorithms”, for the online setting.

In such methods a neighborhood structure is defined among algorithms, and a metaheuristic (such as simulated annealing) is used to choose the best algorithm for the problem at hand. They demonstrated the efficiency of their proposed method with several computer tests.
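
To give a flavour of the general idea only, and not the authors' construction, the sketch below lets simulated annealing move between two candidate bin-packing heuristics and keeps whichever uses fewer bins on a set of sample instances; the candidate set, neighborhood and scoring are illustrative assumptions.

```python
import math
import random

def next_fit(items, capacity):
    """Candidate heuristic 1: keep one open bin, start a new one when the item does not fit."""
    bins, current = 0, capacity + 1
    for size in items:
        if current + size > capacity:
            bins, current = bins + 1, size
        else:
            current += size
    return bins

def first_fit(items, capacity):
    """Candidate heuristic 2: put each item into the first bin with enough room."""
    bins = []
    for size in items:
        for i, used in enumerate(bins):
            if used + size <= capacity:
                bins[i] += size
                break
        else:
            bins.append(size)
    return len(bins)

def choose_algorithm(candidates, instances, capacity, steps=200, temperature=1.0):
    """Simulated annealing over a (tiny) space of algorithms: neighbors are the other candidates."""
    score = lambda alg: sum(alg(inst, capacity) for inst in instances)  # fewer bins is better
    current = random.choice(candidates)
    current_score = score(current)
    for step in range(steps):
        neighbor = random.choice([c for c in candidates if c is not current])
        neighbor_score = score(neighbor)
        t = temperature * (1 - step / steps) + 1e-9       # cooling schedule
        if (neighbor_score < current_score
                or random.random() < math.exp((current_score - neighbor_score) / t)):
            current, current_score = neighbor, neighbor_score
    return current

instances = [[random.uniform(0.1, 0.7) for _ in range(30)] for _ in range(10)]
best = choose_algorithm([next_fit, first_fit], instances, capacity=1.0)
print(best.__name__)  # typically first_fit
```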


Benkő Attila is a Hungarian senior software developer, independent researcher and author of many computer science related papers.
