As intelligent machines, humans have the ability not only to gain knowledge through experience but also to motivate themselves to acquire knowledge that is of no immediate or practical interest to them. Humans can therefore accumulate knowledge through experience, judge whether that knowledge is important to them, and act on it accordingly. This ability is traditionally called "wisdom," and its attainment is sometimes thought to require long periods of time. Deductive and inductive reasoning, trial and error, and serendipity all come into play in the attainment of wisdom, and it is interesting to note how different these are as cognitive mechanisms. Wisdom can result from concentrated efforts in deduction and induction, as the growth of science and technology attests. But trial and error is also a powerful way to get a new technology off the ground, and many discoveries in science have occurred by accident, with no forethought on the part of the scientists involved. Humans, then, are equipped to handle rigorous thinking, hypothesis generation, patient tinkering, and unexpected rare events.
Whether these methods of cognition, and the resulting wisdom, can be incorporated into non-human machines is of great interest, both for its applications and from an academic standpoint. The non-human intelligent machines of today concentrate primarily on particular domains of expertise; reasoning capability across multiple domains is not yet available in these machines. They can gain knowledge from experience, but the ability to put this knowledge to use in achieving goals outside the domain in which they learned has not been achieved, although there are signs that it will be within the next decade.
The contributors to this volume describe various methodologies for how this might be attained, that is, how "wisdom" might be achieved in non-human machines. Some of the authors get too embedded in the philosopher's quicksand of endless rhetorical constructions, but the book as a whole is worth the time to study. It remains to be seen, of course, whether the ideas in this book will be applicable to the goal of building artificial "wise systems." Readers, though, will gain knowledge of reasoning patterns not normally touched on in textbooks and monographs on artificial intelligence, such as the notion of abduction.
Indeed, the first article of the book endeavors to explain how Peirce's conception of abduction could be used to construct machines that exhibit creativity. The author not only describes Peirce's contributions on how hypothesis generation is done through abductive reasoning but also draws on the work of the computer scientists who brought about what is now called abductive logic programming. Peirce's notions on hypothesis generation could be classified as affective, and they are definitely of interest in the field of cognitive neuroscience. Peirce viewed abduction as partly guesswork and partly "flashes of insight," two reasoning patterns that would be difficult to formalize or implement in a machine. Unfortunately, the author does not give any constructive hints on how to do so. There are, however, systems available commercially, particularly for network troubleshooting and event correlation, that implement abductive reasoning patterns.
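As a rough illustration of the abductive pattern that such troubleshooting systems exploit, consider a toy rule base mapping candidate faults to the symptoms they would produce; abduction then runs the rules backwards, from observed symptoms to explanatory hypotheses. The rules and names below are invented for illustration and are not taken from the article.

```python
# Toy abductive inference: rules map a candidate hypothesis (a fault)
# to the set of observations it would explain. All content is invented.
RULES = {
    "link_down":      {"ping_timeout", "route_flap"},
    "dns_failure":    {"name_lookup_error"},
    "overloaded_cpu": {"ping_timeout", "slow_response"},
}

def abduce(observations):
    """Return hypotheses whose predicted effects overlap the observations,
    ranked by how many of the observations each one explains."""
    scored = [
        (len(effects & observations), hypothesis)
        for hypothesis, effects in RULES.items()
        if effects & observations
    ]
    return [h for _, h in sorted(scored, reverse=True)]

print(abduce({"ping_timeout", "route_flap"}))  # best explanation first
```

Unlike deduction, nothing guarantees the top-ranked hypothesis is true; it is merely the candidate that would, if true, best account for what was observed, which is exactly the "guessing" character Peirce emphasized.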
The second article is more philosophical, but it does get down to the task of characterizing just what a "sapient" agent is. These are agents that can exhibit insight and are capable of exercising sound judgment. In particular, they must be able to explore, but the authors do not say whether this exploration is generated by the machine itself (making it a machine that exhibits curiosity) or induced by a problem the machine encounters or is presented with. Machines that exhibit curiosity are not yet technologically available, but when they are invented they will be formidable knowledge generators. The article is interesting, though, in that the authors view networks as playing a fundamental role in implementing sapient agents. They reason by analogy to the complex connectivity of the brain, the complex connectivity of semantic networks, and the "co-occurrence" networks of speech production patterns. Sapient agents will therefore need complex cognitive functionality, the authors assert, and this depends on complex connectivity. To obtain it, a mechanism for growing small networks into complex networks is needed. The authors propose an "epigenetic" mechanism for this purpose and explain its workings, interestingly, via what they call "pragmatic games." Their discussion sounds plausible, for they show quantitatively how knowledge evolves in their networks. They assert that this methodology will also work for cognitively autonomous agents, allowing them to adapt their behavior so as to optimize the process of knowledge acquisition. If successful, this approach would be one more step toward constructing a machine that displays curiosity.
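To give a concrete sense of how small networks can grow into complex ones, here is a minimal sketch using preferential attachment, a standard growth mechanism in network science. It is a stand-in chosen by this reviewer, not the authors' "epigenetic" mechanism or their pragmatic games, but it shows the same qualitative effect: simple local growth rules yield the heterogeneous connectivity the authors argue sapient agents require.

```python
import random

def grow_network(n_nodes, seed=0):
    """Grow a graph by preferential attachment: each new node links to an
    existing node chosen with probability proportional to its degree."""
    rng = random.Random(seed)
    edges = [(0, 1)]                 # minimal seed network of two nodes
    degree = {0: 1, 1: 1}
    for new in range(2, n_nodes):
        # endpoints of existing edges occur in proportion to their degree,
        # so sampling from this list implements preferential attachment
        target = rng.choice([v for e in edges for v in e])
        edges.append((new, target))
        degree[new] = 1
        degree[target] += 1
    return edges, degree

edges, degree = grow_network(50)
print(sorted(degree.values())[-3:])  # a few highly connected hubs emerge
```

The skew in the final degree distribution (most nodes with one link, a few hubs with many) is the kind of complex connectivity the authors have in mind when they draw analogies to brains and semantic networks.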
Things get more rigorous in the third article of the book, wherein the author outlines a mathematical theory of sapience and consciousness. Developing such a theory is an enormous challenge, due to the presence of affective states in the human brain (mind). The author subsumes these states under a broad concept that he calls the "knowledge instinct," and he models it with what he calls "modeling field theory" and "dynamic logic." In humans, this knowledge instinct is innate: a drive to extend one's knowledge base. To implement the knowledge instinct in a non-human machine requires, in the author's view, a neural architecture forming what he calls a "heterohierarchy." This is a mathematical structure consisting of layers of "concept models," where adjacent layers can have multiple feedback connections. Each concept model is supposed to encapsulate the knowledge base of the machine. The concept models generate "top-down" signals that interact with the "input," or "bottom-up," signals. The collection of concept models is not static, however, since new ones must be formed in order to better correspond to the input signals. The interaction of concept models and input data is governed by the knowledge instinct. What is interesting about the author's proposal is that every level is subject to the same laws governing the interaction dynamics. The dynamics are therefore independent of the specific content coming from the inputs, i.e. independent of the domain: the learning dynamics of a concept model are the same whether it is trying to learn how to compose music or the rules of chess. The author does not really describe how the knowledge instinct "drives" the learning process. How does it organize or prioritize what is to be learned? And is it "insatiably curious," meaning that it will attempt to learn, i.e. generate or modify concept models for, every input that is presented to it?
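The domain-independence claim can be made concrete with a toy sketch: a stack of layers, each holding numeric "concept models" and each applying one shared update rule to whatever bottom-up signal it receives, with each layer's output feeding the next. The update rule and all values below are this reviewer's invention, not the author's equations; the point is only that identical laws can operate at every level regardless of what the signal represents.

```python
# Toy "same laws at every level" sketch: one shared update rule is
# applied at each layer of a stack. All details here are invented.
def update(models, signal, rate=0.5):
    """Shared dynamics: pull the best-matching concept model toward
    the incoming signal, and pass the adapted model upward."""
    best = min(range(len(models)), key=lambda i: abs(models[i] - signal))
    models[best] += rate * (signal - models[best])
    return models[best]          # this layer's output is the next one's input

layers = [[0.0, 1.0], [0.0, 1.0], [0.0, 1.0]]   # identical laws per layer
signal = 0.8
for models in layers:            # one bottom-up pass through the stack
    signal = update(models, signal)
print(layers)                    # each layer adapted by the same rule
```

Nothing in `update` knows whether the signal encodes a musical phrase or a chess position, which is the sense in which the dynamics are domain-independent.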
The author does not elaborate on this question, concerning himself instead with the accuracy of the resulting concept models, which he quantifies using a "similarity measure." Revisions of the models are organized by a "skeptic penalty function," which grows with the number of models and assists in the maximization of the similarity function. The use of "fuzzy dynamic logic" is supposed to resolve the apparent computational complexity of the learning process. The author gives a curious interpretation of the increase in the similarity measure at each iteration of learning: such a system "enjoys" the learning.
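The interplay of similarity measure and skeptic penalty can be sketched in a few lines. This is a deliberately crude stand-in for the author's formalism, not his actual equations: candidate "concept models" are taken to be points on a line, a model set is scored by how closely it matches the data, and a penalty proportional to the number of models discourages gratuitous model proliferation.

```python
# Crude sketch of penalized similarity (not Perlovsky's actual equations).
def similarity(data, models):
    """Each datum contributes the closeness (negated distance) to its
    best-matching concept model."""
    return sum(max(-abs(x - m) for m in models) for x in data)

def penalized_score(data, models, lam=1.0):
    """Similarity minus a 'skeptic penalty' growing with model count."""
    return similarity(data, models) - lam * len(models)

data = [0.1, 0.2, 0.15, 5.0, 5.1]   # two evident clusters
one_model  = [2.0]                   # a single model straddles both poorly
two_models = [0.15, 5.05]            # one model per cluster fits well
print(penalized_score(data, one_model) < penalized_score(data, two_models))
```

Here a second model is admitted because the gain in similarity outweighs the skeptic penalty; a third would add cost without improving fit, which captures the balancing role the penalty plays in the author's scheme.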
Note: Only the first three articles in this book were studied by this reviewer, and so reviews of the others are omitted.
Author(s): Rene V. Mayorga, Leonid Perlovsky
Edition: 1
Publisher: Springer
Year: 2007
Language: English
Pages: 240