It seems to me that one of the basic questions that haunt AI researchers is 'what have we missed?' Assuming that the goal of AI is to create intelligence with performance comparable to natural intelligence, what are the key ingredients of such a capability?
There is an old saw:

It takes 10,000 hours to master a skill.

There is a lot of truth to that: it effectively amounts to a decade of dedicated, near-daily practice. This has been demonstrated for many fields of activity, from learning an instrument to learning a language to learning to program.
But it does not take 10,000 hours to figure out that it is raining outside and to decide to carry an umbrella. What is the difference?
One informal way of distinguishing the two forms of learning is to categorize one as 'muscle memory' and the other as 'declarative memory'. Typically, skills take a lot of practice to acquire, whereas declarative learning is effectively instant. Skills are more permanent too: you tend not to forget a skill, but it is easy to forget where you left your keys.
Another way of viewing the difference between skills and declarative knowledge is that skills are oriented towards action and declarative knowledge is oriented towards reflection and analysis. Deciding to carry an umbrella depends on being able to ruminate on the observed world; it has little to do with skills (normally).
Today, most machine learning is based on techniques that have more in common with skill learning than with declarative learning: training a neural network is a slow, repetitive process that encodes competence implicitly, much like practice.
Anyway, let us assume that both forms of learning are important. Is that enough for high performance? One factor that is obviously also necessary is action.
The issues with action are complementary to those of learning: there are many possible actions that an agent can perform, but most of them are either useless or have negative consequences. Early robots did not perform very well because researchers believed that the same mechanisms needed to plan were also needed to act. That is the moral equivalent of planning the movements of one's leg muscles in order to walk to the front door.
I think it may be useful to view emotions as a mechanism that helps animals and people act. (This is not an original idea, of course.) In this view, emotions drive goals, which in turn drive the actions that animals and people perform. The connection is direct, in much the same way that skills are directly encoded in muscle.
For our purposes the exact basis of emotions is not relevant. However, the field of affective computing has used biochemical response/decay models to describe how emotions arise and fade.
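To make that concrete, here is a minimal sketch of what such a response/decay model might look like, assuming a simple exponential-decay form. The names and parameters here (Emotion, decay_rate, the 0.5 goal threshold) are illustrative assumptions for this essay, not a reference to any particular affective-computing system.

import math

class Emotion:
    # One emotion channel: stimuli push its intensity up, and the
    # intensity fades exponentially between stimuli, like a chemical
    # response washing out. Purely illustrative, not a published model.
    def __init__(self, name, decay_rate=0.3):
        self.name = name
        self.decay_rate = decay_rate  # assumed: larger values fade faster
        self.intensity = 0.0          # current level, kept in [0, 1]

    def respond(self, stimulus):
        # A stimulus raises intensity; repeated stimuli accumulate, saturating at 1.
        self.intensity = min(1.0, self.intensity + stimulus)

    def decay(self, dt):
        # Between stimuli, intensity fades exponentially over dt seconds.
        self.intensity *= math.exp(-self.decay_rate * dt)

# The emotion drives a goal, which drives an action, until the emotion fades.
fear = Emotion("fear")
fear.respond(0.8)                # a threat is perceived
for t in range(5):
    if fear.intensity > 0.5:     # hypothetical threshold for the 'flee' goal
        print(f"t={t}s intensity={fear.intensity:.2f} -> goal: flee")
    fear.decay(1.0)              # one second passes

The decay term is what makes such states action-oriented: the goal is urgent while the emotion is fresh, and it simply evaporates as the emotion fades, with no deliberation required.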
What, then, is wisdom? If emotions provide a way of rapidly and fluidly motivating action, the declarative dimension accounts for reflection on emotions, in the same way that declarative memory allows for reflection on perception.
It seems to me that, if this is right, we should be able to build a wise robot by allowing it to reflect on its emotions. For example, a robot might decide that reacting too quickly when it encounters a threat is not always conducive to its own survival, in much the same way that it concludes that it is raining when it gets wet.
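As a sketch of what that reflection might look like mechanically, the fragment below layers an entirely hypothetical declarative check over the fast, emotion-driven response; the 0.5 threshold and the threat_confirmed flag are my illustrative assumptions, not a proposal for how such a robot should actually be built.

def emotional_action(fear_intensity):
    # Fast path: high fear maps directly to a flight response.
    return "flee" if fear_intensity > 0.5 else "carry on"

def reflective_action(fear_intensity, threat_confirmed):
    # Slow path: reflect on the emotion before letting it drive behavior.
    fast = emotional_action(fear_intensity)
    if fast == "flee" and not threat_confirmed:
        return "pause and observe"  # acting too quickly may not aid survival
    return fast

print(emotional_action(0.9))                           # -> flee
print(reflective_action(0.9, threat_confirmed=False))  # -> pause and observe

The fast path still exists and still works; wisdom, on this account, is the second function: a declarative layer that can veto the first.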