Published in Integral Biomathics: Tracing the Road to Reality, Simeonov, Plamen L.; Smith, Leslie S.; Ehresmann, Andree C. (Eds.), Springer, 2012.


Synthetic Intelligence: Beyond Artificial Intelligence and Robotics

Craig A. Lindley
Blekinge Institute of Technology, SE-371 79 Karlskrona, Sweden [email protected]

Abstract The development of engineered systems having properties of autonomy and intelligence has been a visionary research goal of the twentieth century. However, there are a number of persistent and fundamental problems that continue to frustrate this goal. Behind these problems is an outmoded industrial foundation for the contemporary discourse and practices addressing intelligent robotics that must be superseded as engineering progresses more deeply into molecular and biological modalities. These developments inspire the proposal of a paradigm of engineered synthetic intelligence as an alternative to artificial intelligence, in which intelligence is pursued in a bottom-up way from systems of molecular and cellular elements, designed and fabricated from the molecular level and up. This paradigm no longer emphasizes the definition of representation and the logic of cognitive operations. Rather, it emphasizes the design of self-replicating, self-assembling and self-organizing biomolecular elements capable of generating cognizing systems as larger scale assemblies, analogous to the neurobiological system manifesting human cognition.

1. The Limitations of Cognitivism, the Top-Down Path to Intelligence


Historically, cognitive science has emphasised attempts to understand human cognition in terms of an information processing metaphor (e.g. see Thagard, 2010). Here this is referred to as the cognitivist perspective (characterized in detail by Harnad, 1990). A central theme within cognitive science is the project of artificial intelligence (AI), i.e. the computational synthesis of behavior that, when performed by humans, is regarded as manifesting intelligence. For cognitivism, replication of intellectual behavior by a computer system provides evidence that the computer program underlying that replication embodies an adequate theory and explanation of the human intellectual processes that it seeks to model.

Processing of represented knowledge structures has typically been accomplished by deliberation, where the link from sensor or input data to action production or output data is mediated by knowledge-based planning or logical reasoning. In the context of robotics, Arkin (1998) refers to these approaches as Sense-Plan-Act approaches, while Brooks (1999) refers to them perhaps more accurately as Sense->Model->Plan->Act (SMPA) approaches: the essential idea is that an agent (a robot or a human being) receives sense data about the world, uses that data to update a symbolic representation of the world, processes that representation using logical reasoning in order to create a plan for what to do, and then executes the next temporal element of the current plan. Knowledge representation and reasoning are at the core of SMPA systems.
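As a concrete illustration of this control cycle, the following is a minimal sketch in Python. It is not drawn from any cited system; all names (ToyRobot, WorldModel, smpa_cycle) and the single-step "planner" are hypothetical placeholders, reduced to the bare Sense->Model->Plan->Act skeleton.

```python
# Minimal, hypothetical SMPA sketch. All names are illustrative placeholders.
from dataclasses import dataclass, field

@dataclass
class ToyRobot:
    position: int = 0
    def read_sensors(self):
        return {("at", self.position)}          # raw sense data as symbolic facts
    def execute(self, action):
        kind, target = action
        if kind == "step_toward":
            self.position += 1 if target > self.position else -1

@dataclass
class WorldModel:
    facts: set = field(default_factory=set)     # the symbolic world representation

def smpa_cycle(robot, model, goal):
    # Sense -> Model: fold observations into the model, replacing stale "at" facts.
    model.facts = {f for f in model.facts if f[0] != "at"} | robot.read_sensors()
    # Model -> Plan: trivial "reasoning" over the represented state.
    (_, at), = {f for f in model.facts if f[0] == "at"}
    plan = [] if at == goal else [("step_toward", goal)]
    # Act: execute only the next temporal element of the current plan.
    if plan:
        robot.execute(plan[0])

robot, model = ToyRobot(), WorldModel()
for _ in range(5):
    smpa_cycle(robot, model, goal=3)
assert robot.position == 3
```

Even in this toy form, the design commitment is visible: all action is mediated by the represented facts, never by the world directly, which is precisely the assumption the critiques below target.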

AI systems based upon knowledge representation and reasoning have been called Good Old-Fashioned AI (GOFAI; Hayes et al, 1994), since they are very clearly based upon Newell and Simon's (1975) physical symbol system hypothesis, that "A physical symbol system has the necessary and sufficient means for general intelligent action", where:

"A physical symbol system consists of a set of entities, called symbols, which are physical patterns that can occur as components of another type of entity called an expression (or symbol structure). Thus, a symbol structure is composed of a number of instances (or tokens) of symbols related in some physical way (such as one token being next to another). At any instant of time the system will contain a collection of these symbol structures. Besides these structures, the system also contains a collection of processes that operate on expressions to produce other expressions: processes of creation, modification, reproduction and destruction. A physical symbol system is a machine that produces through time an evolving collection of symbol structures. Such a system exists in a world of objects wider than just these symbolic expressions themselves."

The physical symbol system hypothesis has for many (perhaps most) AI researchers been the foundation of artificial intelligence, since it implies that a computing system is capable of manifesting intelligence. As Newell and Simon (1975) note, "The notion of physical symbol system had taken essentially its present form by the middle of the 1950's, and one can date from that time the growth of artificial intelligence as a coherent subfield of computer science." It is the foundation of knowledge-based and deliberative AI, in which symbol structures, represented as more formalised versions of the symbols used in human natural language, are processed by algorithms based upon human logical inference.

The physical symbol system hypothesis spawned a great deal of research that has generated many useful outcomes. Famous early examples include the expert systems Prospector, for mineral exploration (Hart, 1975), MYCIN, for the diagnosis of blood infections (Buchanan and Shortliffe, 1985), and DENDRAL, for inferring molecular structure from spectrometer data (Lindsay et al, 1980). However, despite these and many other successes, there are a number of intrinsic challenges for GOFAI:

1. Brittleness: Lenat and Feigenbaum (1991) observed that expert systems are narrow in their domain of successful application, and very brittle at the edges, i.e. they are not robust when usage is not restricted to narrow circumstances. Lenat and Feigenbaum proposed that the solution to this is to embed specialized expert and knowledge systems within a more general environment of represented common sense knowledge that supports reasoning about their applicability and adaptation for broader purposes. Cyc (Cycorp Inc., 2002) is a project to create this common sense knowledge base, although the resulting knowledge system has had limited applications to date.

2. The Knowledge Acquisition Bottleneck: the problem of acquiring knowledge, which may also be referred to as the knowledge engineering bottleneck if the whole system lifecycle is considered. The bottleneck refers to the difficulty of extracting knowledge from primary sources in such a way that it can be represented within a GOFAI system, and then effectively maintaining and updating it (Cullen and Bryman, 1988). Wagner (2006) summarizes four aspects of the knowledge engineering bottleneck: i) narrow bandwidth, referring to the very limited channels for converting knowledge from its initial sources; ii) acquisition latency, a significant gap between when explicit knowledge is created and when it is made available where it is needed; iii) knowledge inaccuracy, created when experts make mistakes, knowledge engineers make misinterpretations, or errors are introduced during knowledge maintenance; and iv) the maintenance trap, whereby a knowledge system becomes increasingly difficult to maintain as it expands, and more so as it accrues errors.

3. Multiple Experts: when more than one expert is involved in the knowledge acquisition process, it can be very challenging (and perhaps impossible) to gain their agreement or consensus on a representation of valid domain knowledge (Medsker, Tan and Turban, 1995).

4. Context: the brittleness of knowledge systems immediately raises the well-established problem of context (e.g. see Schilit, Adams and Want, 1994, Dey, 2000). That is, for a system to have knowledge of its own applicability, it must have a representation of those contexts in which it is applicable or not, which is a regressive requirement. Of course, the scope of possible contexts is also unlimited, so the attempt to represent context is necessarily endless. A general solution to context in AI would be to build methods into a system for evolving its knowledge content in ways that reflect positive adaptations to dynamic contexts, but this is far beyond the means of existing knowledge systems in non-trivial domains.

5. Continuous Change: continuous change of both knowledge and its contexts, due to the normal ongoing development of knowledge and the dynamic nature of the world, places a limited temporal window upon the validity of a GOFAI knowledge base. Hence it is necessary to ensure that a knowledge base remains relevant within its operational context. For robots this problem concerns the operation of perception and action generation in unpredictable and incompletely modelled physical environments; more successful solutions to the generation of basic behaviors have been based upon low-level, reactive and functional control systems (Arkin, 1998, Brooks, 1999), methods more closely associated with the mathematical, functional approach of Norbert Wiener's cybernetics (see Storrs-Hall, 2007). However, these approaches have not reached higher levels of cognitive performance, which are typically implemented using GOFAI symbol processing methods on top of behavioral layers.

6. Regression: related to the problem of context, the need for representation as a basis for intelligence is endlessly regressive. As noted by Brooks (1999), representing the world suggests that it is not enough for the world to stand for itself. Hence understanding is mediated by a knowledge model. But this implies that understanding the knowledge model itself requires a knowledge model, and that model another model, and so on endlessly. Or, if a single model is enough, why can't the world also be enough, such that reasoning, problem solving, etc. can be a direct reaction to sense data? Another way of putting this is that GOFAI sees an intelligent being as having a homunculus within it, observing and reacting to a model of the world. But then the same must apply to the mind of the homunculus, leading to an infinite regression of homunculi within homunculi.

7. Symbol Grounding (Harnad, 1990, Anderson, 2003, Taddeo and Floridi, 2005): a fundamental problem arising from the terms of the formulation of the physical symbol system hypothesis, of how the link can be maintained from knowledge representations to the things that they refer to, or more generally, how abstract symbols can acquire real-world meanings. For successful expert systems and knowledge based systems this link is provided by the authors of the representations and the users of the system, for whom textual inputs and outputs can be read meaningfully within a context, as long as the system is well authored and its contexts of application are both understood and stable. Symbols, by definition, have a conventional relationship with their referents and meanings. An authored knowledge representation gains its meaning from the author's understanding of the meanings of the symbols used. But this understanding is not automatically transferred to a machine when it stores and processes binary sequences that are displayed in forms that to a human represent linguistic symbols. This is the problem described by Searle's (1980) thought experiment of the Chinese room: taking in tokens, processing those tokens by rules, and outputting other tokens as directed by the rules and according to the input tokens, does not require any understanding of the meaning of the tokens. This actually implies an alternative to the physical symbol system hypothesis: instead of intelligence being fundamentally tied to the ability to manipulate symbols, it may be tied to the ability to find symbols meaningful, and to be able to create and use symbols (or more generally,

representations) in ways that are not limited to manipulation within the constraints, and according to the production rules, of a formal language system. This can be regarded as an alternative view of AI as computational semiotics.

The symbol grounding problem presents a very deep problem for representational AI, not simply because it cannot be made to work in its own terms (see Harnad, 1990, Taddeo and Floridi, 2005), but also because it is not necessarily plausible as an account of natural intelligence. For example, exemplar theories of conceptualization (e.g. see Murphy, 2002) imply that any representation of knowledge is a novel creation at the time that it is made, one that is highly dependent upon the context and circumstances of its creation. Exemplar theories reinforce the view that a knowledge base is akin to a work of literature (e.g. Lindley, 1995), being an external authored symbolic artifact rather than a direct mirror and expression of knowledge as it is represented within anyone's cognitive system. Of course there are many examples of successful knowledge base systems. But like any text, they are dependent upon external conventions of interpretation and usage to make them function effectively. The production of such a text is usually a painstaking process very different from the rapid decision-making of experts.

Searle's (1980) thought experiment of the Chinese room demonstrates that even if a computer system or robot had consciousness, receiving and issuing strings of icons transformed by abstract rules would not provide any understanding of the meaning of those icons, beyond the purely formal meaning of: if string matches X, issue string Y. Taking Devlin's (2001) definition, that Information = Representation + Interpretation, there is a fundamental problem with the concept of the computer as an information processing system: computers, like Searle's Chinese room, accept input icons and generate output icons. Understanding icons as representations, and then making them meaningful within a context (Devlin's act of interpretation), requires acts of human semiosis. Hence not only is representation problematic for knowledge base and AI systems, but the operation of a computer as an information processing system requires contextualization by human interpreters; intrinsically, computers are merely icon transform systems, and it is human semiotic processing that transforms icons into information.

A primary implication of this critique for engineering synthetic intelligence is that, as noted by Harnad (1990) and Taddeo and Floridi (2005), an authentic intelligence must be able to autonomously make icons meaningful, and this cannot be achieved by a system that is merely a transformer or syntactic manipulator of icons or icon sequences that lack any other meaning from the viewpoint of the transforming system. The strength of behavioral robotic systems (e.g. Brooks, 1999) is that the icons within their control architectures implement functional relations from input icons to output icons that not only represent those functional associations as mathematical and logical operations, but actually implement, indeed are, those functions by virtue of the architectures within which they are implemented. However, behavioral systems have not yet been shown to be able to engage in

meaningful symbolic behavior. Similarly, connectionist architectures offer the capacity to embed symbols within dense data reduction processes (Harnad, 1990), but they cannot produce those symbols in a plausible way to begin with (Taddeo and Floridi, 2005). More than twenty years after the formulation of these approaches, autonomous systems are still characterized by specialized competence, fragility, and limited high-level capacity.
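To make the flavor of these critiques concrete, the following Python sketch implements a purely syntactic token-transform system of the "if string matches X, issue string Y" kind discussed above. The rule table is invented for illustration (it is not MYCIN or any other cited system); it shows both how fluent such a system can appear within its authored domain and how brittle it is one step outside it, while at no point involving any meaning from the system's own viewpoint.

```python
# A purely syntactic token-transform system in the spirit of Searle's
# Chinese room. The rule table is a hypothetical illustration, not an
# excerpt from any real expert system.

RULES = {
    "patient has fever and stiff neck": "suspect meningitis; order lumbar puncture",
    "patient has fever and cough":      "suspect respiratory infection",
}

def consult(tokens: str) -> str:
    # String lookup alone: the system never handles meanings, only tokens.
    try:
        return RULES[tokens]
    except KeyError:
        # Brittleness at the edges: any input outside the authored rule
        # table yields no sensible behavior at all.
        return "no rule matches; system cannot respond"

print(consult("patient has fever and stiff neck"))    # fluent-looking output
print(consult("patient has fever and a broken arm"))  # abrupt failure
```

The fluency of the first response and the collapse of the second come from exactly the same mechanism, which is the point: nothing in the system distinguishes understanding from lookup.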

2. Anachronistic Technology Metaphors, AI and Robotics


Limited progress to date in achieving significant levels of autonomy in artificial agents suggests that there may be misconceptions built into the project of AI and autonomous systems. Overcoming those misconceptions may require examining the assumptions upon which the project is based, and adopting different methods and different assumptions. One perspective for making the assumptions of the AI project more explicit is that of the historical technological and metaphorical context of the project. From such a perspective it may be observed that humanoid robots are essentially human beings caricatured in the technology of the day, and artificial intelligence is post-Renaissance intellectual discourse caricatured in the icon processing technologies of the twentieth century.

This manifestation of the human (and animal) via the media of available technology can be seen in other historical replications of human beings and animals via technology. Leonardo's robot from around 1495 was a suit of armor animated by a system of internal pulleys and gears (Istituto e Museo di Storia della Scienza, 2011). The generation of robot behavior in the modern age has progressed from Leonardo's pulleys and gears to valves and relays, then to transistors, and to integrated circuits. The visual style of robots has evolved from the forms and surfaces of industrial machines, through automobiles and then consumer electronics, to computer game avatars (e.g. in humanoid entertainment robots).

The early work of Boole found a path to implement automated calculation in the earliest industrial age vision of a purely mechanical computer (Ifrah, 2007), with subsequent technology developments providing the ever smaller, cheaper and faster electrical (via relays) and then electronic implementations (via vacuum tubes, transistors, and then integrated circuits) of automated calculation that fuelled the explosion of computation as the foundation of the information age. Just as robots model humans in technology, computers have provided a medium for modeling human thought in technology. More than this, the simulation and the simulated became conflated, and computation became understood as the foundation of intelligence.

These are, however, caricatures expressed in the technical media of the day. A 'mechanical' or electronic wo/man expresses the desire to realize human or human-like attributes through acts of engineering, just as artificial intelligence expresses the desire to realize or exceed human or human-like intelligence through acts of

engineering, using available engineering methods and materials and a model of intelligence derived from logo-centric discourses initiated in ancient Greece, but most highly developed (in the Western world) from the Renaissance to the most recent age of rationalistic industrial and post-industrial capitalism. It is, of course, inevitable that we define problems and engineer their solutions in terms of the available tools. However, a broader historical perspective upon AI and autonomous systems suggests that: i) the problems addressed by AI, and the very project of AI, are historically situated; ii) as technology evolves, the creative impulse behind the problematization of AI and the search for solutions may have different conceptual, methodological and technological means available to it; iii) these different means may lead to, or even require, a re-conceptualization of the nature of the problem and the criteria and forms of solutions.

3. Towards a New Science and Engineering of Synthetic Intelligence


Our technologies are evolving beyond the limitations of simulation or caricature using inorganic media. This is occurring with the rapid ongoing development (and in some cases, recent emergence) of biotechnology, molecular science, genetic engineering, nanotechnology and synthetic biology (e.g. see Synthetic Biology Community, 2011). These fields are shifting the scope of engineering from the macroscopic through microscopic to molecular scales, and from inert matter to the multi-level, organized systems of matter that constitute life. It is now possible to engineer, not just simulations of life, but life itself, by design of the molecular materials by which life is realized.

While still in its early stages, this movement of engineering into biological and molecular methods and materials implies a radical shift in our conceptualization of AI and robotics. In fact, (bio-)molecular engineering augurs the end of robotics as it is currently understood, as the conceptual dualism between machines as designed artifacts on one hand and life-forms as evolved biochemical systems on the other breaks down. Robots as mechatronic agents may always exist, but they will come to occupy one end of a continuum, with no clear boundary separating the robotic from synthetic biological life. Moving from the mechatronic extreme to the biological extreme will be a movement from pure mechatronic systems, through mechatronic systems that incorporate biological components, through biomechatronic cyborg systems, to biological systems having engineered structure and functionality, to increasingly 'wild' biological systems created by evolution and having no engineered features.

An obvious corollary of this development is that artificial intelligence will be superseded by synthetic intelligence. Artificial intelligence as such carries the legacy of machine age computation. The Turing paradigm has been highly successful in the age of machines. But new methods of engineering bring with them the intri-

guing promise of new paradigms of computation. Several new models of computation, such as computers based upon quantum dots or computing with DNA (several examples are presented in Eshaghian-Wilner, 2009), have shown how Turing computation can be achieved using very different substrates. However, cognitivism based upon Turing computation has not led to strong demonstrations of AI, and using the same paradigm of computation realised on a different technical or material substrate is likely to incur the same problems as those discussed above. Instead, it must be asked what fundamentally different paradigms of computation might be realized with, or constituted by, different implementational substrates.

In particular, it is now possible to consider the design and engineering of biological intelligence. Examples of the integration of biological neuron cultures with mechatronic systems have been demonstrated, including robots controlled by in vitro neuron cell cultures (e.g. Bakkum et al, 2004, 2007, Warwick et al, 2010). While the functionality of these systems is currently limited, there is very great potential to extend the principle of these systems with more highly differentiated cell culture architectures, and by genetic engineering of neurons and their biological ecologies as part of hybrid systems.

Neuron systems have some fundamentally different characteristics from current artificial computers. For example, they are asynchronous, they integrate memory and processing, they are analog, their substrate and biochemical environment has a fundamental influence upon their behavior, they have massive parallelism, a broad diversity of neural types, and behavior that is a complex function of multiple timing characteristics (Potter, 2007; see the sketch at the end of this section). Hence biological neuron cultures may provide a foundation not just for new models of computation, but for a radical rethinking of the bases of intelligence away from the computational model. This is not a proposal to develop silicon computers or their software on the model of biological neuron systems, but rather to develop theories, methods and technologies for realizing engineering objectives directly in the material of biological neuron systems and their bioengineered progeny. The implications of such a program can be profound, both in terms of the development of technology and from the perspectives of ethics (e.g. see Warwick, 2010) and of fundamental concepts of what we are and of the boundaries between ourselves as biological organisms and our technologies as designed artifacts.

The same principles can also be carried into the vehicles of biologically founded synthetic intelligence: systems may integrate biologically grown and inorganically synthesized parts, or complete organisms can be engineered. This is not a very novel concept, since human beings have been engineering animal species since the dawn of agriculture. But what is more novel is a shift to using biological engineering to achieve functions of intelligence and useful autonomy that have previously been pursued, with limited success, as applications of inorganic engineering.
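The following sketch illustrates, in toy form and under stated assumptions, two of the properties listed by Potter (2007): leaky integrate-and-fire units whose single analog state variable serves as both memory and processor, and whose spike timing depends on per-unit time constants rather than a global clock. All parameters are arbitrary illustrative values; this is not a model of any cited experimental preparation.

```python
# Toy leaky integrate-and-fire simulation. Parameters are illustrative only.
import random

class LIFNeuron:
    def __init__(self, tau, threshold=1.0):
        self.v = 0.0            # membrane potential: memory and processing in one state
        self.tau = tau          # per-neuron time constant (diversity of cell types)
        self.threshold = threshold

    def step(self, dt, input_current):
        # Analog leak plus integration; no discrete symbol manipulation anywhere.
        self.v += dt * (-self.v / self.tau + input_current)
        if self.v >= self.threshold:
            self.v = 0.0        # reset after a spike
            return True
        return False

random.seed(0)
neurons = [LIFNeuron(tau=random.uniform(5.0, 50.0)) for _ in range(10)]
t, dt, spikes = 0.0, 0.1, []
while t < 100.0:
    for i, n in enumerate(neurons):
        if n.step(dt, input_current=random.uniform(0.0, 0.2)):
            spikes.append((round(t, 1), i))   # asynchronous spike times
    t += dt
print(spikes[:5])   # spike timing varies with each neuron's time constant
```

Even this crude abstraction shows behavior emerging from continuous dynamics and heterogeneous timing rather than from a stored program, which is the contrast with conventional computation that the text draws.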

4. Conclusion
This paper has considered a number of serious problems that have limited the spread and effectiveness of artificially intelligent systems, and has proposed that these problems may derive from the application of historically specific and transient technological models to the understanding of intelligence during a particular period of technological development. As technology advances to biological and molecular levels, not only our understanding of intelligence, but also our ability to synthesize intelligence, can be taken to a new level, closing the gaps between the natural and the synthetic and leading to new understandings, not just of intelligence as abstract intellectual competence, but of the nature of sentient agency.

A first level of this development may be to replicate existing computation models using molecular or neural substrates. As Conrad and Zauner (2003) note, access to the molecular level is a core problem, and existing models of computation provide a framework for controlling processes on a larger scale that are (currently) impossible to understand at a detailed molecular level. However, this risks losing an opportunity by maintaining the computational model too far beyond the technologies with which it has evolved and to which it is most suited. Design at the molecular and cellular level requires the design of self-replicating, self-assembling and self-organizing biomolecular elements capable of generating cognizing systems as larger scale assemblies, analogous to the neurobiological system manifesting human cognition. It is not at all clear that a conventional computation model is the best way of describing the essential behavior of such a system. Nevertheless, we do have the existence proof of human intelligence to demonstrate that such a system can indeed manifest the best available examples of cognitive competence.

This paper does not attempt to outline a proven alternative. Rather, it is a call for the investigation of alternatives. The investigation can and most likely should take the form of direct experimentation in implementing design concepts at molecular and cellular levels, in a bottom-up process from which appropriate abstractions over resulting behavior can be derived. Defining suitable abstractions without a foundation in experimental and data-driven research would be pure speculation.

5. References
Anderson ML (2003) Embodied Cognition: a field guide. Artif Intell 149: 91-130.
Arkin RC (1998) Behavior-Based Robotics. MIT Press.

Bakkum DJ, Shkolnik AC, Ben-Ary G, Gamblen P, DeMarse TB and Potter SM (2004) Removing Some 'A' from AI: Embodied Cultured Networks. In: Iida F, Pfeifer R, Steels L and Kuniyoshi Y (eds) Embodied Artificial Intelligence. New York: Springer.
Bakkum DJ, Chao ZC, Gamblen P, Ben-Ary G and Potter SM (2007) Embodying Cultured Networks with a Robotic Drawing Arm. 29th International Conference of the IEEE Engineering in Medicine and Biology Society, Lyon.
Brooks RA (1999) Cambrian Intelligence. MIT Press.
Buchanan BG and Shortliffe EH (eds) (1985) Rule-Based Expert Systems: The MYCIN Experiments of the Stanford Heuristic Programming Project. Reading, MA: Addison-Wesley.
Conrad M and Zauner K-P (2003) Conformation-Based Computing: a Rationale and a Recipe. In: Sienko T, Adamatzky A, Rambidi NG and Conrad M (eds) Molecular Computing. MIT Press.
Cullen J and Bryman A (1988) The knowledge acquisition bottleneck: Time for reassessment. Expert Sys 5: 216-225.
Cycorp Inc (2002) http://www.cyc.com/ (accessed July 31, 2011).
Devlin K (2001) InfoSense. New York: W.H. Freeman and Company.
Dey A (2000) Providing Architectural Support for Building Context-Aware Applications. Dissertation, Georgia Institute of Technology.
Eshaghian-Wilner MM (ed) (2009) Bio-Inspired and Nanoscale Integrated Computing. Wiley.
Harnad S (1990) The Symbol Grounding Problem. Physica D 42: 335-346.
Hart PE (1975) Progress on a Computer-Based Consultant. Proc. International Joint Conference on Artificial Intelligence, Vol. 2: 831-841. Tbilisi, USSR.
Hayes PJ, Ford KM and Agnew N (1994) On Babies and Bathwater: A Cautionary Tale. AI Mag 15(4): 15-26.
Ifrah G (2007) The Universal History of Computing: From the Abacus to the Quantum Computer. Wiley.
Istituto e Museo di Storia della Scienza (2011) http://w3.impa.br/~jair/e65.html (accessed July 31, 2011).
Lenat D and Feigenbaum E (1991) On the thresholds of knowledge. Artif Intell 47: 185-250.
Lindley CA (1995) A Postmodern Paradigm of Artificial Intelligence. 2nd World Conference on the Fundamentals of Artificial Intelligence, Paris.
Lindsay RK, Buchanan BG, Feigenbaum EA and Lederberg J (1980) Applications of Artificial Intelligence for Chemical Inference: The DENDRAL Project. New York, NY: McGraw-Hill.
Medsker L, Tan M and Turban E (1995) Knowledge acquisition from multiple experts: Problems and issues. Expert Sys with Applications 9: 35-40.


Murphy GL (2002) The Big Book of Concepts. MIT Press.
Newell A and Simon HA (1975) Computer Science as Empirical Inquiry: Symbols and Search. CACM 19: 113-126.
Potter SM (2007) What can Artificial Intelligence get from Neuroscience? In: Lungarella M, Bongard J and Pfeifer R (eds) Artificial Intelligence Festschrift: The Next 50 Years. Berlin: Springer-Verlag.
Schilit B, Adams N and Want R (1994) Context-Aware Computing Applications. 1st International Workshop on Mobile Computing Systems and Applications, Santa Cruz.
Searle J (1980) Minds, Brains and Programs. Behav and Brain Sci 3: 417-457.
Storrs-Hall J (2007) Beyond AI: Creating the Conscience of the Machine. Amherst, NY: Prometheus Books.
Synthetic Biology Community (2011) http://syntheticbiology.org (accessed July 31, 2011).
Taddeo M and Floridi L (2005) Solving the Symbol Grounding Problem: a Critical Review of Fifteen Years of Research. J Exp Theor Artif Intel 17: 419-445.
Thagard P (2010) Cognitive Science. Stanford Encyclopedia of Philosophy. http://plato.stanford.edu/entries/cognitive-science/ (accessed July 31, 2011).
Wagner C (2006) Breaking the Knowledge Acquisition Bottleneck Through Conversational Knowledge Management. Information Resources Management Journal 19(1): 70-83.

Warwick K, Xydas D, Nasuto SJ, Becerra VM, Hammond MW, Downes J, Marshall S and Whalley BJ (2010) Controlling a mobile robot with a biological brain. Def Sci J 60: 5-14.
Warwick K (2010) Implications and consequences of robots with biological brains. Ethics and Information Technology. Springer. doi: 10.1007/s10676-010-9218-6.
