Dr. Paul Rosenbloom presenting at the USC CAIS seminar on April 4.
Dr. Paul S. Rosenbloom, a professor of computer science at USC, presented a high-level chronology of artificial intelligence. Having conducted research in AI for decades, he has both witnessed and shaped the evolution of the field. Framing the conversation around a desire to understand intelligence and the mind in general, Dr. Rosenbloom illustrated the interplay between research in the cognitive sciences and artificial systems through his own work. He then described five distinct eras in AI and followed up with insights into what the near future of the field is likely to bring, noting the advances needed to get us there. Dr. Rosenbloom finished by leading a discussion of consciousness and social issues relevant to AI.
Initially, Dr. Rosenbloom gave an overview of his work on Sigma, an interdisciplinary and general cognitive architecture that takes a ground-up approach to AI. The overall goal of the project was to build a unified, elegant model exhibiting generic cognition while still using computationally efficient methods. He described Sigma as demonstrating a wide array of capabilities, from language and vision to emotion, attention, and self-reflection. Internally, Sigma intermingles factor graphs with other state-of-the-art algorithms that handle various external signals. Dr. Rosenbloom explained that Sigma does more than connect the output of one module to the input of another: it reformulates and learns from new models in a unified manner, so that Sigma exhibits the same functionality as any individual model. This general cognitive framework will then be trained and evaluated on virtual humans that can provide counseling services, teach across cultures, and help people combat speech anxiety, among other nuanced tasks. Ultimately, a clean, unified model of cognition can push AI past self-driving cars and game playing, and into the world of higher-level human capabilities.
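Sigma's use of factor graphs rests on a general idea from probabilistic inference: marginals over a joint distribution can be computed by passing local messages between factors rather than enumerating every joint assignment. The sketch below is not Sigma's implementation, just a minimal illustration of variable elimination on a hypothetical three-variable chain with made-up factor values.

```python
# Toy factor graph over three binary variables A - B - C, with pairwise
# factors f1(A, B) and f2(B, C). Eliminating C, then B, touches far fewer
# terms than summing the full joint over all 2**3 assignments.
f1 = {(a, b): [[0.9, 0.1], [0.2, 0.8]][a][b] for a in (0, 1) for b in (0, 1)}
f2 = {(b, c): [[0.7, 0.3], [0.4, 0.6]][b][c] for b in (0, 1) for c in (0, 1)}

# Brute force: sum the full joint over every assignment of B and C.
brute = {a: sum(f1[a, b] * f2[b, c] for b in (0, 1) for c in (0, 1))
         for a in (0, 1)}

# Variable elimination: collapse C into a message to B, then B into A.
msg_c_to_b = {b: sum(f2[b, c] for c in (0, 1)) for b in (0, 1)}
elim = {a: sum(f1[a, b] * msg_c_to_b[b] for b in (0, 1)) for a in (0, 1)}

assert all(abs(brute[a] - elim[a]) < 1e-12 for a in (0, 1))
```

On a chain of n binary variables, the brute-force sum grows as 2**n while elimination stays linear in n, which is the kind of computationally efficient decomposition that makes message passing practical.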
Dr. Rosenbloom went on to describe five eras in AI. Starting in the mid-to-late 1950s, researchers predominantly worked on search-related problems. Such problems were motivated by the idea that intelligence is all about achieving goals, so researchers set out to devise efficient methods of searching for reasonable solutions to achieve those goals. However, Dr. Rosenbloom explained that the combinatorial explosion of the search space inherent to many problems slowed progress in this era. The second era revolved around knowledge representation: expert knowledge encoded as formal rules. These rules could then be combined to make decisions in domains requiring expertise, such as law, medicine, or car repair. This era was overtaken by work in logic, with the development of computational theorem provers and general inference. The rigidity of these systems eventually gave way to the next era of probabilistic reasoning, which allowed computers to make decisions under uncertainty using computationally efficient decomposition methods. Dr. Rosenbloom then arrived at the current, fifth era of machine learning, describing a resurgence of older, previously abandoned models that now have the computing power and large datasets necessary for good performance.

In a following discussion, he noted that these eras followed a boom-and-bust cycle. Such a cycle starts when someone uncovers a new set of technologies that can solve a previously unsolvable problem. The technologies are so extensible that people cannot see the limitations of the approach and rapidly develop new ideas. The bubble finally bursts when we come to understand the inherent limitations of the approach.
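The rule-based second era can be sketched with a tiny forward-chaining engine. The rules below are hypothetical (a car-repair domain invented for illustration, not drawn from any actual expert system): each rule maps a set of premise facts to a conclusion, and rules fire repeatedly until no new facts can be derived.

```python
# Minimal forward-chaining sketch of a rule-based expert system.
# Each rule pairs a set of premise facts with a single conclusion.
rules = [
    ({"engine_wont_start", "lights_dim"}, "battery_weak"),
    ({"battery_weak"}, "charge_or_replace_battery"),
    ({"engine_wont_start", "lights_bright"}, "check_starter_motor"),
]

def infer(facts, rules):
    """Fire rules until the set of known facts reaches a fixed point."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = infer({"engine_wont_start", "lights_dim"}, rules)
# derived now also contains "battery_weak" and "charge_or_replace_battery"
```

The rigidity Dr. Rosenbloom mentioned is visible even here: every premise must match exactly, with no notion of partial or uncertain evidence, which is the gap the later probabilistic-reasoning era addressed.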
The concluding discussion covered ethical issues surrounding AI, both the societal effects of its deployment and the rights of artificially cognitive systems. Dr. Rosenbloom differentiated his approach to ethics in AI from the general approach of ensuring the safety of the system: while we can ensure the safety of tools, it is much harder to characterize the safety of humans or other intelligent systems. While we can build models of behavior and encourage good actions, we cannot provably prevent bad ones. He suggested that we move toward notions of ethics and responsibility for general cognitive systems, and that we may need to consider cases where an emotional and empathetic AI may not want to be powered off. Finally, he proposed that we consider at what point such systems become conscious, and explained that many building blocks of consciousness, such as reflection and self-awareness, are already here.