Intelligent Computational Agents Require a Designer

This article was originally published as a chapter in the book “Design and Catastrophe: 51 Scientists Explore Evidence in Nature.”

In The Grand Design, Hawking and Mlodinow state that “The example of Conway’s Game of Life shows that even a very simple set of laws can produce complex features similar to those of intelligent life…. As in Conway’s universe, the laws of our universe determine the evolution of the system, given the state at any one time.”[1] According to Hawking and Mlodinow, preexisting physical laws are responsible for the complexity observed in intelligent beings. However, can even simple laws, such as the laws in Conway’s Game of Life, be generated fortuitously? What do we learn about laws, design, and complexity from the world of computation?

The Game of Life

The Game of Life, also known simply as Life, is a cellular automaton devised by the mathematician John Horton Conway.[2] The universe of the Game of Life is an infinite, two-dimensional orthogonal grid of square cells, each of which is alive or dead. Every cell interacts with its eight neighbors according to Conway’s simple genetic laws representing survivals, births, and deaths: a live cell with two or three live neighbors survives, a dead cell with exactly three live neighbors is born, and every other cell dies or stays dead. The evolution of the game is determined by its initial configuration. The initial pattern of cells constitutes the seed of the system. The first generation is created by applying the genetic rules simultaneously to every cell in the seed; births and deaths occur at the same time. The rules continue to be applied repeatedly to create further generations. Today, numerous computer programs can generate Game of Life configurations.
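Conway’s rules are simple enough to sketch in a few lines of code. The following is a minimal Python sketch (the set-of-coordinates representation and the function name are illustrative choices, not part of any standard implementation):

```python
# A minimal sketch of Conway's rules, assuming live cells are stored
# as a set of (x, y) coordinates on an unbounded grid.
from collections import Counter

def step(live):
    """Apply Conway's rules once: a live cell with two or three live
    neighbors survives; a dead cell with exactly three is born."""
    # Count how many live neighbors each cell has.
    counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The "blinker" oscillates between a horizontal and a vertical bar.
blinker = {(0, 1), (1, 1), (2, 1)}
print(step(blinker))  # {(1, 0), (1, 1), (1, 2)}
```

Note that the rules themselves had to be written down explicitly; the program does nothing interesting until a designer supplies both the laws and a seed.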

During the construction of the Game of Life, “Conway chose his rules carefully, after a long period of experimentation.”[3] These rules were not the result of mere chance. In fact, in order to create intelligent computational agents able to produce complex features or behavior, it is necessary to establish rules that govern the game. No matter how simple they are, rules need to be established by someone.

The Complexity of Intelligent Computational Agents

In engineering, we are used to solving deterministic problems, in which the same input always yields the same correct answer (e.g., 1 + 1 = 2). However, many problems do not have deterministic solutions. For instance, the underlying software of an autonomous car needs to recognize whether an object is a stoplight. For humans, this kind of task is easy. Machines, however, struggle with computer vision and other complex areas such as speech recognition.

Machine learning has emerged to solve these kinds of problems. Machine learning is a subfield of artificial intelligence that explores algorithms that can be said to “learn.”[4] One could argue that machine learning has the potential to enable the auto-construction of software, with programs that program themselves, and that programmers will therefore become obsolete. However, even in that case, the algorithms used to create the predictive models of the “artificial programmer” need to be created by someone. The creation of such algorithms is not spontaneous, even for simple yet remarkably effective classification algorithms such as the k-nearest neighbors algorithm. Nor does spontaneous self-creation occur with genetic algorithms or the dynamic evolution of software.
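Even the k-nearest neighbors algorithm mentioned above, simple as it is, consists of deliberately chosen steps: measure distances, pick the k closest training points, take a majority vote. A from-scratch Python sketch (the data and names here are illustrative):

```python
# A minimal k-nearest-neighbors classifier written from scratch.
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among the k closest
    training points. `train` is a list of ((x, y), label) pairs."""
    dists = sorted(
        (math.dist(point, query), label) for point, label in train
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

train = [((0, 0), "red"), ((0, 1), "red"),
         ((5, 5), "blue"), ((5, 6), "blue"), ((6, 5), "blue")]
print(knn_predict(train, (1, 1)))  # red
```

Every line encodes a design decision (the distance metric, the value of k, the voting rule); none of it arises by itself.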

In the case of genetic algorithms, which are inspired by Darwin’s theory of evolution, it is necessary to carefully define the following rules to reach an optimal solution: (1) selection rules that select the parents who contribute to the population at the next generation; (2) crossover rules that combine two parents to form children for the next generation; and (3) mutation rules that apply random changes to particular parents to form children.[5]
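The three kinds of rules can be seen at work in a small sketch. Below, a genetic algorithm solves the classic “OneMax” toy problem (maximize the number of 1s in a bit string); the selection, crossover, and mutation strategies chosen here (tournament selection, single-point crossover, bit-flip mutation) are common textbook choices, and all names are illustrative:

```python
# A minimal genetic algorithm for the OneMax toy problem.
import random

def fitness(bits):
    return sum(bits)

def select(pop):
    # Selection rule: tournament of two; the fitter parent wins.
    a, b = random.sample(pop, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):
    # Crossover rule: single-point cut combines two parents.
    cut = random.randrange(1, len(p1))
    return p1[:cut] + p2[cut:]

def mutate(bits, rate=0.05):
    # Mutation rule: flip each bit with a small probability.
    return [b ^ 1 if random.random() < rate else b for b in bits]

def evolve(pop_size=30, length=20, generations=60):
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop = [mutate(crossover(select(pop), select(pop)))
               for _ in range(pop_size)]
    return max(pop, key=fitness)

random.seed(0)
best = evolve()
print(fitness(best))  # typically at or near 20
```

The point stands: although the algorithm mimics evolution, every rule it follows was designed and tuned in advance.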

Similarly, the dynamic evolution of software is not accidental. Under the closed-world assumption, the possible events that an autonomous system will face at runtime are fully known during construction. Nevertheless, it is difficult (or impossible) to foresee all the situations that could arise in the uncertain open world. In order to deal with unexpected events in the open world (e.g., surprise security threats), dynamic evolution of software supports the gradual or continuous growth of the software architecture. Dynamic evolution of software implies not only one-off adaptations to specific events, as dynamic adaptation in the closed world does, but also gradual structural or architectural growth into a better state. The construction of autonomous systems that support dynamic evolution requires the application of sophisticated methodologies and computational mechanisms.[6]

Conclusion

In any engineering project, design comes first and then construction follows. In the case of creating intelligent computational agents, engineers have to follow a strict methodology to collect the data that are going to be used to train them, prepare or clean the input data, create or reuse an artificial intelligence algorithm, develop predictive or descriptive models, test the model to measure its quality, and deploy the model. Every step involves extensive reasoning and complex construction.
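The pipeline described above can be sketched schematically. Everything in the following sketch is illustrative: the toy data, the trivial threshold “model,” and the names are stand-ins for the real datasets, algorithms, and deployment targets an engineer would choose.

```python
# A schematic sketch of the steps: collect, clean, build, test, deploy.

# 1. Collect data: (feature, label) pairs; None marks a bad record.
raw = [(0.9, 1), (0.2, 0), (None, 1), (0.8, 1), (0.1, 0), (0.85, 1)]

# 2. Prepare/clean the input data.
data = [(x, y) for x, y in raw if x is not None]
train, test = data[:3], data[3:]

# 3-4. Create an algorithm and build a model: here, learn the
# threshold that best separates the two classes on the training set.
def fit(samples):
    candidates = [x for x, _ in samples]
    return max(candidates,
               key=lambda t: sum((x >= t) == (y == 1) for x, y in samples))

threshold = fit(train)

# 5. Test the model to measure its quality.
accuracy = sum((x >= threshold) == (y == 1) for x, y in test) / len(test)

# 6. Deploy: the trained model is now a simple callable.
predict = lambda x: int(x >= threshold)
print(threshold, accuracy)  # 0.8 1.0
```

Each numbered step, however trivial here, had to be specified by someone; no stage of the pipeline assembles itself.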

No matter how much intelligence a computational agent displays, a designer is necessary to define the context of execution, the underlying laws or rules in this context, and the algorithm that indicates how the agent acts in the context. There is no such thing as chance in engineering. It can be said with confidence that rather than contradicting the need for a designer, intelligent computational agents are a great example of intelligent design.

NOTES

[1] S Hawking, L Mlodinow. The grand design. New York: Bantam Books; 2012, p. 179.

[2] M Gardner. Mathematical games—the fantastic combinations of John Conway’s new solitaire game “Life.” Scientific American 1970; 223(Oct):120–123.

[3] Ibid.

[4] R Kohavi, F Provost. Glossary of terms. Machine Learning 1998; 30(2–3):271–274.

[5] M Mitchell. An introduction to genetic algorithms. Cambridge (MA): MIT; 1996.

[6] GH Alférez, V Pelechano. Achieving autonomic web service compositions with models at runtime. Computers & Electrical Engineering 2017; 63(Oct):332–352.


Germán H. Alférez is a professor at the School of Engineering and Technology of Montemorelos University and the director of the Institute of Data Science at that institution. He holds a PhD in Computer Science from the Polytechnic University of Valencia. His scientific contributions have been published in top journals, book chapters, and proceedings of international conferences. He has worked with universities, organizations, and research groups on four continents. His research contributions have been recognized by the National Council of Science and Technology (CONACYT) of the Government of México.