Rupam Mahmood
There is a new term around that means what artificial intelligence (AI) was originally meant to mean. It is artificial general intelligence (AGI). The term AI has been used and abused so much without fulfilling its original promise of implementing human-level computational intelligence that perhaps a new term was needed to renew our interest. But the new term is not meant simply to break away from the lethargy and failures associated with the original one. It also suggests that some new insights have been gained. We can imagine that the additional word general points to what has been missing in current endeavors in artificial intelligence, as if the AI world is too busy focusing on specific intelligence, whereas the original goal requires attention to its general aspects.
If this additional word hints at what is missing in current endeavors, then what precisely does it point to? What are the general aspects of intelligence? The simplistic definition of intelligence (the ability to achieve a complex long-term goal) seems good enough, as it does not commit to any specific ability. But it is also useful to point out some general aspects of intelligence to guide our endeavors toward the ultimate objective of AI. I find it appropriate to focus on two forms of generality of intelligence:
1) The generality of the underlying mechanism of intelligence, and
2) The generality of the capacity or the skill set constituting intelligent behavior.
These forms of generality are not typically included when we define intelligence, but with further thought they can be seen as integral to the kind of intelligence we seek to create.
The first form of generality lays out a criterion for our approach to developing AI systems, and it argues against an engineering approach. Let me elaborate on the engineering approach first. An artificial organism, in which we seek to create an AI system, can be endowed with a goal and bodily resources, such as sensors, actuators, and computational machinery, the variety of which is innumerable. An engineering approach to developing AI systems is concerned with the specific endowment of the organism. It might be possible to manually and painstakingly engineer a particular AI system. But by focusing on the specific endowments of systems, such an engineering approach is bound to apply different mechanisms to different AI systems. This is how problems are solved in the world of engineering and technology. However, it is not satisfactory for our objective of developing a computational theory of intelligence. The engineering approach is, in fact, antithetical to the objective of AI: it lets us, the humans, better understand the different problems each particular AI system is attempting to solve, without ever producing a theory of intelligence.
The theoretical approach, by contrast, seeks to discover a single mechanism behind the goal-seeking behavior of organisms, disregarding the exact specification of their endowments. This seems quite a tall order. How can a single mechanism work for organisms with different bodies and goals, when it seems more likely that different organisms would require different minds? But different minds do not necessarily require different mechanisms. For example, the mechanism behind intelligence can be a learning algorithm that uses data to shape an initial performance unit, whose task is to perform well at classification, prediction, clustering, or control, into another performance unit, presumably one with better performance. The objective of the theoretical approach is to discover such a single mechanism, which any organism can use to develop a mind fitting its endowed goal and bodily resources. The theoretical approach aims at the minimal structure all minds should possess, one that with more experience can shape itself into a furnished and specialized mind. When we take an engineering approach, we instead attempt to develop this specific and final form of mind directly, by gathering much of the domain- and resource-specific knowledge for the system on our own. This is what we are good at. But in the face of innumerable forms of future artificial organisms, it is the engineering approach, not the theoretical approach, that appears to be a tall order.
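To make this picture concrete, here is a minimal sketch of what "one mechanism, many minds" can look like. Everything in it, including the sgd_learner function and the two toy tasks, is a hypothetical illustration of my own, not a method proposed in this essay: a single least-mean-squares update rule shapes two differently endowed "organisms" into two different minds.

```python
import numpy as np

def sgd_learner(features, targets, step_size=0.01, epochs=10):
    """A single, generic learning mechanism: least-mean-squares updates
    on a linear performance unit. The same code is reused, unchanged,
    for any task that supplies (input, target) pairs."""
    weights = np.zeros(features.shape[1])      # the initial performance unit
    for _ in range(epochs):
        for x, y in zip(features, targets):
            error = y - weights @ x            # prediction error on one sample
            weights += step_size * error * x   # data shapes the unit
    return weights                             # a better-performing unit

# Two differently endowed "organisms": different input sizes, different goals.
rng = np.random.default_rng(0)
x_a = rng.normal(size=(200, 3)); y_a = x_a @ np.array([1.0, -2.0, 0.5])
x_b = rng.normal(size=(200, 5)); y_b = x_b @ np.array([0.3, 0.0, 1.5, -1.0, 2.0])

mind_a = sgd_learner(x_a, y_a)   # a 3-input mind
mind_b = sgd_learner(x_b, y_b)   # a 5-input mind, from the identical mechanism
```

The mechanism here is trivial on purpose; the point is only that nothing in it refers to the specific endowment of either organism.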
It might seem natural to an AI researcher that a reasonable candidate for achieving this form of generality is a scalable learning method. The learning method behind the mechanism of intelligence should be scalable in the sense that it must work across different organisms, which is essential for the sought generality. There is another form of scalability that is also essential for a singular mechanism of intelligence: scalability with data. Since the desired mechanism of intelligence fundamentally relies on data to shape the mind, learning continually, on a lifelong basis, is not a special interest here but should be expected as the standard case. And for a learning mechanism to work essentially forever, without any form of human maintenance or calibration, it must scale with an infinite stream of data.
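As a sketch of what scaling with an infinite stream demands, the hypothetical learner below uses constant memory and constant compute per observation, so nothing about it degrades as the stream grows without bound. The drifting target stands in for a world that never stops changing; all names are illustrative assumptions, not an established algorithm from the literature.

```python
import itertools
import numpy as np

def endless_stream(n_features, seed=1):
    """A stand-in for lifelong experience: an unbounded generator of
    (input, target) samples whose underlying target slowly drifts,
    so the learner must keep tracking rather than converge once."""
    rng = np.random.default_rng(seed)
    true_w = rng.normal(size=n_features)
    while True:
        true_w += 0.001 * rng.normal(size=n_features)  # the world keeps changing
        x = rng.normal(size=n_features)
        yield x, true_w @ x

def learn_forever(stream, n_features, step_size=0.01):
    """Constant memory (one weight vector) and constant compute per sample:
    the properties a mechanism needs to run for a lifetime unattended."""
    weights = np.zeros(n_features)
    for x, y in stream:
        error = y - weights @ x
        weights += step_size * error * x   # one incremental update, then move on
        yield weights                      # the current mind, always usable

# Run the learner over the first 10,000 steps of a stream that never ends.
stream = endless_stream(n_features=4)
for w in itertools.islice(learn_forever(stream, n_features=4), 10_000):
    pass
```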
In a nutshell, the theoretical approach to AI seeks a scalable learning mechanism.
The second form of generality perhaps requires less convincing. It points to the fact that achieving complex long-term goals is associated with the ability to know or do more than one thing, in fact, many, many things. A goal is not complex or long-term enough if it requires knowing only a few things. But knowing or doing many things is a projection of something more fundamental. To achieve a complex long-term goal, an agent requires a good model of the world, which entails a good understanding of the environment's dynamics. Therefore, to do well at its original goal, an organism would seek to gather a vast amount of knowledge of the world. The ability to know and do a wide and general set of things in the surrounding world has thus always been seen as integral to intelligent behavior.
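One way to read "a good model of the world" computationally is as learned dynamics: a predictor of what comes next given the current situation and action. The class below is a minimal, hypothetical linear sketch of that reading, under assumptions of my own choosing; the point it illustrates is that the learned object is goal-agnostic, so many different goals can be pursued on top of it.

```python
import numpy as np

class LinearWorldModel:
    """A minimal sketch of knowledge as learned dynamics: predict the next
    observation from the current observation and action. The model itself
    is goal-agnostic; many different goals can be pursued on top of it."""

    def __init__(self, obs_dim, act_dim, step_size=0.01):
        self.W = np.zeros((obs_dim, obs_dim + act_dim))
        self.step_size = step_size

    def predict(self, obs, act):
        return self.W @ np.concatenate([obs, act])

    def update(self, obs, act, next_obs):
        """Shift the model toward whatever the world actually did."""
        error = next_obs - self.predict(obs, act)
        self.W += self.step_size * np.outer(error, np.concatenate([obs, act]))
```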
Whether or not we adopt this novel term, AGI, it has reminded us that our endeavors have moved away from the general aspects of intelligence. At this moment, it seems we can gain a lot by focusing on the generality of the mechanism behind intelligence and by allowing artificial organisms to be broadly capable of doing many things by letting them know a lot about their world.