Fantastic History of Artificial Intelligence

What is Artificial Intelligence?

Artificial intelligence, or AI for short, refers to a set of computational theories and computer systems that perform tasks and exhibit behavior regarded as intelligent.
This article explains what AI is, outlines its history, and surveys some applications that use AI nowadays.

From a more philosophical perspective, the idea of creating artificial intelligence raises questions such as whether computers can be used to simulate human intelligence or even replicate it wholesale; or whether it may actually be possible to create a machine with true consciousness (the philosophical “hard problem”).

AI in Daily Life

Artificial intelligence is not ubiquitous in daily life, but it has become part of popular culture. The possibility of using computers to emulate human intelligence captivates the imagination, and films such as “Ex Machina” and “Her” explore this theme. The mass media frequently report on developments in the theory and practice of artificial intelligence.

From a technical perspective, artificial intelligence involves creating systems with the ability to perform specific tasks that appear intelligent to humans. Such artificial systems are commonly referred to as “intelligent agents”, “artificial intelligence programs” or simply “AI” programs.

What is an Intelligent Agent?

Intelligent agents are systems that perceive the current environment, plan possible actions, and execute these actions in order to achieve their goals. Intelligent agent functionality can also be implemented in software that is not specifically designed for intelligent agents, but is agnostic to the problem domain or flexible enough to solve a variety of tasks (e.g., a multi-purpose search engine).
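As a rough illustration of the perceive-plan-act cycle described above, here is a minimal sketch of an agent loop. The toy corridor environment, the goal cell, and the action names are hypothetical and chosen only for the example; they are not from the original article.

```python
# Minimal sketch of a perceive-plan-act agent loop (illustrative only).
# The "environment" is a hypothetical 1-D corridor; the agent's goal is cell 5.

class CorridorEnvironment:
    """Toy environment: the agent occupies one cell on a line of cells."""
    def __init__(self, start: int = 0, goal: int = 5):
        self.position = start
        self.goal = goal

    def percept(self) -> int:
        # The agent can only observe its current cell index.
        return self.position

    def apply(self, action: str) -> None:
        # Execute the chosen action by updating the environment state.
        self.position += 1 if action == "move_right" else -1


def plan(percept: int, goal: int) -> str:
    """Plan: pick the action expected to move the agent toward its goal."""
    return "move_right" if percept < goal else "move_left"


env = CorridorEnvironment()
while env.percept() != env.goal:            # perceive
    action = plan(env.percept(), env.goal)  # plan
    env.apply(action)                       # act
print("Goal reached at cell", env.percept())
```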

Beginning of AI

The history of artificial intelligence begins in antiquity. The development of simple automata, such as the hydraulic machines described in the treatises of Ctesibius, Philo of Byzantium, and Vitruvius, may indicate an early curiosity about mechanical thought. By the time of the Renaissance, artificial automata had become a common theme in science and fiction.

The French mathematician and philosopher René Descartes (born in 1596) published works laying the foundations of a rational world-view, so-called rationalism. Later, the philosopher Christian von Wolff (born in 1679) published works on the mathematical method, outlining his vision of rational thought. The development of artificial rational actors, called “philoids”, is discussed in “The Theory of Moral Sentiments” (1759) by the Scottish moral philosopher and economist Adam Smith (born in 1723).

Precursors to Modern AI

The work of the British polymath Archibald Spencer (born in 1698) and the pioneering artificial life work of Warren Weaver (born in 1894) are also important precursors to modern AI.

In 1804, Joseph-Marie Jacquard, a French inventor and engineer, demonstrated the first loom capable of weaving an entire pattern automatically, its sequence of operations controlled by a chain of punched cards. He patented the invention, which became known as the “Jacquard loom”, while running his weaving mill in Lyon, France. Because the punched cards effectively “programmed” the pattern, the loom is regarded as a notable precursor of programmable machines and, by extension, of artificial intelligence. Other implements and processes for weaving at the time were still quite primitive.

What is a Turing Test?

In 1950, renowned British computer scientist Alan Turing proposed the Turing test (imitation game) to judge whether a machine was capable of thinking. The test involves an interrogator asking a series of questions to both a human and a machine, and then deciding which is the machine. The question “Can machines think?” has intrigued people for centuries, but Turing proposed an alternative question: “Are there imaginable digital computers that would do well in the imitation game?”

This question addresses just one aspect of thinking – understanding language – and leaves out many other aspects of thinking such as emotional intelligence. The Turing test has also gained popularity in the computer security community, where reversed variants of it (such as CAPTCHAs) are used to decide whether the party interacting with a system is a human or an automated program.
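To make the structure of the imitation game concrete, here is a minimal sketch of its protocol. The respondent functions and the judging function are hypothetical stand-ins, not real conversational systems, and a real test would of course involve a human interrogator.

```python
import random

# Schematic sketch of the imitation game: an interrogator questions two unlabeled
# respondents and must guess which one is the machine. The respondents and the
# judge below are hypothetical stand-ins, not real conversational systems.

def human_respondent(question: str) -> str:
    return "I would answer: " + question.lower()   # placeholder for a person's reply

def machine_respondent(question: str) -> str:
    return "I would answer: " + question.lower()   # placeholder for a chatbot's reply

def imitation_game(questions, judge) -> bool:
    """Return True if the judge fails to identify the machine (the machine 'passes')."""
    players = [("human", human_respondent), ("machine", machine_respondent)]
    random.shuffle(players)                        # hide which label is which
    labels = dict(zip("AB", players))
    transcript = {label: [fn(q) for q in questions] for label, (_, fn) in labels.items()}
    guess = judge(transcript)                      # judge names "A" or "B" as the machine
    return labels[guess][0] != "machine"

# Example: a naive judge that always accuses respondent "A".
passed = imitation_game(["What is your favourite poem?"], judge=lambda transcript: "A")
print("machine passed:", passed)
```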

Artificial Life

Artificial life is a field within the study of artificial intelligence that focuses on the development of artificial systems such as computer games and virtual worlds which are capable of performing many activities that appear to be natural to living organisms, often including learning and adaptation.

In the mid-1960s, biologists began developing mathematical models to understand the behavior of natural systems, and computer models came into use for simulating biological and sociological systems. In particular, researchers in artificial life developed simulation techniques that mimic biological processes such as evolution and learning. Many scientists in artificial life believe that the field should extend beyond computer simulations of natural creatures (see Alan Turing’s 1950 paper “Computing Machinery and Intelligence”, which asks whether imaginable digital computers could do well in the imitation game).
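As a small, concrete example of the kind of simulation artificial life research works with, here is a sketch of a cellular automaton in the spirit of Conway’s Game of Life. This particular system is not mentioned in the article and serves purely as an illustration of lifelike behavior emerging from simple rules.

```python
from collections import Counter

# Minimal cellular-automaton simulation (Conway's Game of Life rules), a classic
# artificial-life style system; purely illustrative, not a project named above.

def step(live_cells):
    """Apply one generation of the rules to a set of live (x, y) coordinates."""
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive in the next generation if it has exactly 3 live neighbours,
    # or if it is already alive and has exactly 2 live neighbours.
    return {cell for cell, n in neighbour_counts.items()
            if n == 3 or (n == 2 and cell in live_cells)}

# A "glider" pattern crawls across the grid: simple rules, lifelike behaviour.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for generation in range(4):
    glider = step(glider)
print(sorted(glider))   # the same shape, shifted one cell diagonally
```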

In the 1980s, the related field of computational neuroscience developed. Computational neuroscience studies biological neural networks and how their principles can be used to create artificial neural networks. The term neural network is used because these models implement simplified versions of structures found in biological brains, such as neurons and synapses.

Example Application Nowadays

In 2014, evidence emerged that artificial intelligence was significantly boosting crop yields for farmers by reducing losses from pests and improving planting decisions.

Emotional Intelligence

One of the problems with artificial intelligence is that it lacks emotional and social intelligence. Some researchers are attempting to recreate emotional and social intelligence in robots. For example, Pixar has created a production algorithm for animated characters that uses emotional feedback. This is intended to make the characters emotionally engaging and believable across a wide variety of situations (rather than having them respond with scripted responses). This research has been limited to animations because creating an artificially intelligent robot capable of responding in a believable manner to arbitrary situations would be difficult, at least at this stage.

Human-robot Interaction

To be able to control robots in the future, people need to be able to interact with them. Human-robot interaction studies ways in which humans can get along with machines. The technology includes such methods as speech and facial recognition, natural language processing and synthesis, dialogue systems, telepresence (such as remote robotic control), social robotics, human-like robot appearance (e.g., robot form factors and android simulation), and autonomous robot behavior.

Symbolic AI

The initial definition of artificial intelligence was “the science and engineering of making intelligent machines, especially intelligent computer programs”. This is known as the symbolic AI approach. It contrasts with connectionist models, in which there are no symbols, only connections (neural networks). The term “artificial intelligence” was introduced in the summer of 1956 by John McCarthy (born in 1927).

The symbolic approach gained credibility after the success of the program ELIZA (written by Joseph Weizenbaum in 1966). ELIZA demonstrated that a program could simulate a human conversational partner convincingly enough that some users briefly mistook it for a real person.

Society for AI

In 1956, the Dartmouth Summer Research Project on Artificial Intelligence, organized by John McCarthy and colleagues, brought researchers together to study computer intelligence. As part of this line of research, Marvin Minsky wrote MIRIAD (1962). In the 1970s, Minsky proposed the Society for Artificial Intelligence (SAI), which became associated with some of the major AI conferences of that decade.

An alternative version of AI, called “connectionism” and represented by researchers such as David Rumelhart and James McClelland, was based on the idea that the human brain is made up of a large number of independent but interconnected processing elements called “nodes”. Connectionists argued that cognitive processes are the result of nodes acting in concert to process information, and that connectionism would be capable of modeling complex phenomena such as learning and memory, features believed to be beyond the capabilities of symbolic processing.

Robotics

Hans Moravec (born in 1948), an AI researcher at the Robotics Institute of Carnegie Mellon University, has explained in his later writing how robotics and automation could lead to a post-scarcity world.

Machine Learning

Machine learning is a key aspect of AI research. In machine learning, a computer program is presented with data and learns to take appropriate actions based on what it “sees”. “Training” data defines what the program sees, and feedback indicates whether its actions were correct. Features found in the training data are then used to classify new data.
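To make this concrete, here is a minimal sketch of supervised learning using a toy 1-nearest-neighbour classifier. The feature values, labels, and accuracy check are invented for illustration and do not come from the article.

```python
# Minimal sketch of supervised learning with labelled training data (illustrative only):
# a 1-nearest-neighbour classifier. The feature values and labels below are made up.

def distance(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def classify(example, training_data):
    """Predict a label for `example` from the closest labelled training example."""
    nearest_features, nearest_label = min(
        training_data, key=lambda pair: distance(pair[0], example)
    )
    return nearest_label

# Training data: (features, label) pairs, e.g. (height_cm, weight_kg) -> species.
training_data = [((20.0, 3.0), "cat"), ((60.0, 30.0), "dog"), ((25.0, 4.5), "cat")]

# "Feedback" in the text corresponds to checking predictions against known labels.
test_data = [((22.0, 3.5), "cat"), ((55.0, 28.0), "dog")]
correct = sum(classify(x, training_data) == y for x, y in test_data)
print(f"accuracy: {correct / len(test_data):.2f}")
```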

Bayes Network

Hierarchical Bayesian networks are one example of the models that enable AI systems to learn from given examples. A hierarchical Bayesian network consists of a number of nodes connected by edges, each node carrying a value or a probability estimated by Bayesian inference. The network can be trained by updating these probabilities as new examples are observed, and the updates can themselves be carried out by machine learning algorithms, e.g. a perceptron or a neural network.

A hierarchical Bayesian network is used mainly to classify data into multiple categories: each category is assigned a node, and edges between nodes show which characteristics are related to each other. The hierarchy is built by making some nodes children of other, parent nodes; the bottom level can contain single nodes representing individual characteristics, while higher levels combine these characteristics into groups.
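The description above is informal, so as a loose illustration of classification by Bayesian inference, here is a sketch of a flat naive Bayes classifier rather than the hierarchical network described. The categories, characteristics, and example data are all made up for the example.

```python
from collections import Counter, defaultdict

# Loose illustration of classification by Bayesian inference: a flat naive Bayes model
# over categorical "characteristics", not the hierarchical network described above.
# The example data (weather observations labelled by activity) is made up.

def train(examples):
    """Estimate P(category) and P(characteristic | category) from labelled examples."""
    class_counts = Counter(label for _, label in examples)
    feature_counts = defaultdict(Counter)
    for features, label in examples:
        feature_counts[label].update(features)
    return class_counts, feature_counts

def classify(features, class_counts, feature_counts):
    """Pick the category with the highest (unnormalised) posterior score."""
    total = sum(class_counts.values())
    best_label, best_score = None, 0.0
    for label, count in class_counts.items():
        score = count / total
        for f in features:
            # Add-one smoothing so unseen characteristics do not zero the score.
            score *= (feature_counts[label][f] + 1) / (count + 2)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

examples = [
    ({"sunny", "warm"}, "picnic"),
    ({"sunny", "cold"}, "walk"),
    ({"rainy", "cold"}, "stay_in"),
]
model = train(examples)
print(classify({"sunny", "warm"}, *model))
```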

Neural Network

A number of different types of neural networks have been developed, including “feedforward neural nets”, “perceptrons”, “neural Turing machines”, and “recurrent neural networks”. These models are commonly used for tasks such as pattern recognition and object recognition. One of the earliest such models, the perceptron, was applied to simple pattern recognition tasks.
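As an illustration of the simplest of these models, here is a minimal sketch of a single perceptron trained with the classic error-correction update rule. The toy task (learning logical AND) and all numeric settings are chosen purely for demonstration.

```python
# Minimal sketch of a single perceptron trained with the classic update rule
# (illustrative only; the toy task is learning the logical AND of two inputs).

def predict(weights, bias, inputs):
    """Fire (return 1) if the weighted sum of inputs exceeds the threshold."""
    activation = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if activation > 0 else 0

def train(samples, epochs=20, learning_rate=0.1):
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            error = target - predict(weights, bias, inputs)
            # Nudge the weights toward reducing the error on this sample.
            weights = [w + learning_rate * error * x for w, x in zip(weights, inputs)]
            bias += learning_rate * error
    return weights, bias

and_samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias = train(and_samples)
print([predict(weights, bias, x) for x, _ in and_samples])  # expected: [0, 0, 0, 1]
```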

Recurrent neural networks (RNNs) extend feedforward networks with feedback connections, which lets them model sequential data. A widely used variant, the LSTM (Long Short-Term Memory) unit, has proven especially useful for sequences such as words or sentences. RNNs can also be used to detect patterns in data such as speech or handwriting, and they can learn from very large sequential datasets (such as speech corpora) as well as from short sequences.
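As a rough sketch of the recurrence idea, the following shows a plain recurrent cell rather than the LSTM mentioned above, with fixed rather than learned weights; everything here is an illustrative assumption, not a production model.

```python
import math

# Minimal sketch of a vanilla recurrent cell (a simplification of the LSTM mentioned
# above): the same weights are reused at every step, and a hidden state carries
# information forward through the sequence. Weights here are fixed, not learned.

def rnn_step(hidden: float, x: float, w_in=0.5, w_rec=0.9, bias=0.0) -> float:
    """One recurrence: combine the new input with the previous hidden state."""
    return math.tanh(w_in * x + w_rec * hidden + bias)

def encode_sequence(sequence):
    """Fold a whole sequence into a single hidden state, step by step."""
    hidden = 0.0
    for x in sequence:
        hidden = rnn_step(hidden, x)
    return hidden

# The final hidden state summarises the sequence and could feed a classifier.
print(encode_sequence([0.1, 0.4, -0.2, 0.7]))
```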

Learning Computer Programs

The field of machine learning, in its simplest form, is about learning computer programs from data. Most machine learning researchers work on problems such as categorization, recognizing objects in images, and translating between natural languages. Other types of machine learning include reinforcement learning, where the computer seeks to maximize a reward function, and evolutionary algorithms, where the computer seeks to improve performance through survival and reproduction over multiple generations.
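To illustrate the second of these ideas, here is a minimal sketch of an evolutionary algorithm that maximizes a reward function through mutation and selection. The reward function and all parameters are invented for the example.

```python
import random

# Minimal sketch of an evolutionary algorithm maximising a reward function
# (illustrative only): candidates are mutated, scored, and the fittest survive.
# The reward peaks when the candidate value is close to 3.0.

def reward(x: float) -> float:
    return -(x - 3.0) ** 2

def evolve(generations=50, population_size=20, mutation_scale=0.5):
    population = [random.uniform(-10, 10) for _ in range(population_size)]
    for _ in range(generations):
        # Selection: keep the better-scoring half of the population.
        survivors = sorted(population, key=reward, reverse=True)[: population_size // 2]
        # Reproduction: each survivor produces a slightly mutated offspring.
        offspring = [x + random.gauss(0, mutation_scale) for x in survivors]
        population = survivors + offspring
    return max(population, key=reward)

print(f"best candidate found: {evolve():.2f}")  # should end up near 3.0
```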

Other major branches of artificial intelligence include robotics and cognitive science.

Intelligent Machines

Robotics is a subfield of artificial intelligence that deals with creating intelligent machines. It is also sometimes described as work on “autonomous agents”, a term used to distinguish roboticists from AI researchers who work on purely software-based intelligence. The field has its roots in the early days of computing, where it was explored by scientists such as Alan Turing and Norbert Wiener.

In the 1970s, the field of robotics advanced significantly with progress in artificial intelligence and computer science. Early thinking about how to mimic a human arm drew on models of legged and mobile robots. Early robots were not capable of performing complex tasks on their own; some consisted of an arm that could move reliably in only one way, and simple reactions were the basis of their behavior.

Robotic Technology

Joseph Engelberger (born in 1925) was a pioneer of robotic technology. He founded Unimation, which became the first company dedicated to manufacturing robots; the company’s industrial robot, called Unimate, was the first commercial application of robotics technology. The word “robotics” itself had been coined earlier by the science fiction writer Isaac Asimov. Robotics research accelerated in the late 1960s and early 1970s, with the invention of the robot arm and the development of artificial intelligence programs to control robots.

Cognitive Computer

A cognitive computer has a central processing unit, memory, and input/output devices. The basic idea behind a cognitive computer is to create a machine that functions the way a brain does. This is done by modeling neural networks of the human brain so that the machine can learn from its own errors and improve its problem-solving capabilities. There are three approaches to how a cognitive machine might work.

Neurons and Synapses

One approach is to use a neural network: an interconnection of artificial neurons and synapses that represents numerical information. A newer approach keeps the same basic mechanics of the brain but realizes them in dedicated hardware rather than in simulated neurons and synapses; such devices are often called “neural chips”.

Experience Simulation

A second approach to creating cognitive machines is by learning from experience through simulations in the same way that humans learn from experience.

Thinking Machines

A third approach is to apply artificial intelligence techniques directly to the problem of creating a thinking machine.

Robotic Agent

Although many ideas were brewing in the background, the development of general artificial intelligence did not begin in earnest until Russell and Norbert Wiener began publishing their ideas on how a machine could be made to act like a human. They helped originate the notion of “computing machines” that represent information, realized initially in the digital computer. In 1956, John McCarthy presented his formally defined notion of “artificial intelligence” at Dartmouth College. This concept became the cornerstone of modern artificial intelligence, which has been developing considerably since that time.

Chess and AI

Some of the advancement in AI was driven by the desire to win at chess. In 1950, Alan Turing proposed a test known as “The Imitation Game”, more commonly referred to as “The Turing Test”, in which a human judge converses with both a machine and a human and must decide which is which. If the judge cannot reliably tell them apart, the machine can be said to exhibit intelligence.

Theory on Logic

There was a long hiatus in research on artificial intelligence after the presentation of the “Logic Theorist” in 1955. Part of this was due to disappointment after logic-based theories failed to fully describe human abilities, but it was also due to a general preoccupation with symbolic approaches to the field. Among those who worked in artificial intelligence during this period were Marvin Minsky, Allen Newell, and Herbert Simon.

Swarm Intelligence

In 1956, Minsky and Newell presented their now-famous article “Swarm Intelligence”. The central idea of swarm intelligence is to apply learning principles at the population level rather than only at the individual level. This idea was developed in computer science as the theory of metaheuristics; it also drew on ideas from cybernetics and is related to the notion of “fitness landscapes”. Other researchers, however, felt that more basic problems needed to be solved before progress could be made on these “higher” levels.

In addition to these research projects (as well as a number of other ongoing projects), there have been numerous patents approved for artificial intelligence technology in the U.S. since the mid-1950s.

High-level AI

In 1961, further support for research on “high level” AI was established under the Dartmouth Summer Research Project on Artificial Intelligence. A number of groups were involved, including the University of Wisconsin–Madison and the Massachusetts Institute of Technology. In particular, Minsky’s group at MIT produced the influential work “Perceptrons” and a variety of other important results before splitting off in 1966 to form the Artificial Intelligence Laboratory (AIL).

AIL became widely known for its work on the FERUT programming language and its MAPLE theorem prover, a precursor to the modern theorem prover developed by Adleman and Newman. In the same period, researchers at the Stanford Research Institute developed Shakey, a mobile robot that could perceive its surroundings and plan its own actions.

AI and Language

Another important effort in this period was programming language research for AI. A number of very influential languages were produced, including the LISP programming language, designed by John McCarthy at MIT.

The 1970s saw a dramatic increase in government funding; for example, ARPA more than doubled its funding of AI research from the roughly $30 million it had given out in the late 1960s to $70 million by 1974. In the wake of the 1970s energy crisis there was also a desire to fund work on “high level” problems like AI rather than just technical “grunt work”.

New Generation of AI Systems

The year 1980 also witnessed the collaboration between the “Artificial Intelligence Laboratory” and the Stanford Research Institute (SRI) to produce a new generation of AI systems which became known as frame-based approaches to Artificial Intelligence. It was believed by many of the researchers at SRI that a combination of principles from both symbolic and connectionist theories would be ideal for building intelligence.

John McCarthy had by then long been at Stanford University, where he founded the Stanford Artificial Intelligence Laboratory in the early 1960s, supported in part by DARPA funding.

In 1983, William D. (Bill) Patrick replaced John McCarthy as director of the AI lab and set up a new group working on the “application level”. Bill was a Yale-trained researcher who spent six years at Caltech before moving to Stanford in 1981.

AI and Publishing

Bradford Books was founded in 1984 as an imprint of MIT Press. The press published many seminal texts in AI including work by Schank and Abelson, Russell and Norvig, Pearl, and Winston.

“Artificial Intelligence: A Modern Approach”, by Stuart Russell and Peter Norvig, later became the most widely cited textbook in the field, while Patrick Winston’s own textbook “Artificial Intelligence” was another influential text of the period.

In 1984, the National Academy of Sciences published a report entitled “Artificial Intelligence” that noted that decades had passed since Alan Turing first conceived of the field. The report expressed concerns about whether AI had progressed enough to deserve the name and suggested that it should be renamed “applied intelligence”.

In 1987, the MIT Artificial Intelligence Laboratory was renamed the Massachusetts Institute of Technology Artificial Intelligence Laboratory (MIT AI Lab) to reflect its expanding mission and its collaboration with other institutions on AI research.

The term “artificial” (originally meaning “man-made”) came to carry less weight than the more general “intelligence”, and research in artificial intelligence now refers to any of the approaches that seek to build intelligent systems.

AI Research

Much AI research has been motivated by the human desire for computer simulations that can mimic human behavior. Human-level intelligence was later defined as the ability of a computer to do work of a complexity, reliability, and safety similar to that of human beings.

In 1991, Marvin Minsky made the case that “a true intelligence will eventually be able to duplicate itself”. This has become known as the “intelligence explosion”. In 1995, Stephen Wolfram published a paper on computational complexity which he calls “the great refutation” of the claim about an intelligence explosion.

In February 2011, IBM’s Watson beat two human champions to win the US quiz show “Jeopardy!”. In the same month, the state of Wisconsin pledged $200 million to build a new campus, in partnership with the Massachusetts Institute of Technology (MIT), dedicated to artificial intelligence research. The Israeli government announced plans in May 2011 to invest heavily in AI and robotics research and development.

Further Thoughts on AI

There are several problems with progress in artificial intelligence. First, there is the challenge of making a computer intelligent enough to do work of the same complexity and safety as a human being. Second, there is the challenge of making machines that are robust to mistakes. Third, there is the problem of very large datasets, which are difficult to handle even for advanced computers.

The first approach to AI has been to try to achieve human-level intelligence in computers using symbolic methods such as logic or logic programming. This was the main focus of research during the middle decades of the past century. Symbolic methods are well suited to many problems but are often difficult to realize in practice. It is generally believed that the full capabilities of AI will take a long time to be realized; the field has encountered setbacks, such as the intractability of many NP-complete problems for which no efficient general solvers are known, although much progress has been made on simplifying and formalizing symbolic methods.

AI Research Approaches

A large part of the AI research in the past decades has been directed toward achieving human-level intelligence in computers. This means that AI must solve problems that are of a complexity similar to those that humans can solve.

One approach is to consider the capabilities of actual humans and then set up ways for computers to achieve these capabilities using various methods such as neural networks. Some researchers were inspired by biological neurons as a model for artificial ones, but it proved difficult to make them work well.

Conclusions

This article has introduced artificial intelligence and its applications. Artificial intelligence is a way of building machines that capture some of the ways in which human beings function. While the ultimate goal is still to reach human-level artificial intelligence, there are many useful applications of AI along the way, such as robotics and game playing.

Artificial intelligence has been a popular subject of discussion among philosophers and hard science fiction writers since the 1950s. The term artificial intelligence was coined in 1956 by John McCarthy. Historically, the development of AI has been driven by a search for intelligence in machines rather than animals or other entities.

The Turing test is one measure of whether a machine has cognition: it is passed if, when interacting with a human, the machine’s responses cannot reliably be distinguished from those of a person, to a degree defined by its evaluators.

AI Nowadays

In the original sense of the term, “artificial” means made by human beings. Nowadays, “artificial” means that a machine was designed and built, not that it behaves in an artificial manner. The term “intelligent” describes a machine’s ability to carry out a task that is commonly associated with intelligent beings.

An artificial intelligence program attempts to simulate the cognitive functions of human beings. AI research has developed methods for tasks such as problem solving, pattern recognition, and speech recognition.

Attila Benko and Cecília Sik Lányi (University of Pannonia, Hungary): “History of Artificial Intelligence”, in Encyclopedia of Information Science and Technology, Second Edition.


Benkő Attila is a Hungarian senior software developer, independent researcher and author of many computer science related papers.
