Robotic Wisdom

It seems to me that one of the basic questions that haunt AI researchers is 'what have we missed?' Assuming that the goal of AI is to create intelligence with performance similar to natural intelligence, what are the key ingredients of such a capability?

There is an old saw: it takes 10,000 hours to master a skill.

There is a lot of truth to that; it effectively amounts to 10 years of more-or-less full-time focus. This has been demonstrated for many fields of activity, from learning an instrument to learning a language or learning to program.

But it does not take 10,000 hours to figure out if it is raining outside, and to decide to carry an umbrella. What is the difference?

One informal way of distinguishing the two forms of learning is to categorize one as 'muscle memory' and the other as 'declarative memory'. Typically, skills take a lot of practice to acquire, whereas declarative learning is instant. Skills are more permanent too: you tend not to forget a skill, but it is easy to forget where you left your keys.

Another way of viewing the difference between skills and declarative knowledge is that skills are oriented towards action and declarative knowledge is oriented towards reflection and analysis. Deciding to carry an umbrella depends on being able to ruminate on the observed world; it has little to do with skills (normally).

Today, most machine learning is based on techniques that have more in common with skill learning than with declarative learning.

Anyway, let us assume that both forms of learning are important. Is that enough for high performance? One factor that is obviously also necessary is action.

The issues with action are complementary to those of learning: there are many possible actions that an agent can perform but most of those are either useless or have negative consequences. Early robots did not perform very well because researchers believed that the same mechanisms needed to plan were also needed to act. That is the moral equivalent of planning the movements of one's leg muscles in order to walk to the front door.

I think it may be useful to view emotions as a mechanism that helps animals and people to act. (This is not an original idea, of course.) In this view, emotions drive goals, which in turn drive the actions that animals and people perform. The connection is direct, in much the same way that skills are directly encoded in muscle.

For our purposes the exact basis of emotions is not relevant. However, the field of affective computing has used a bio-chemical response/decay model for modeling how emotions arise and fade.
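A response/decay model of this kind is easy to sketch. The following is a minimal illustration, not any particular system from the affective-computing literature: a stimulus spikes an emotion's intensity, which then decays exponentially back toward baseline. The class name, half-life parameter, and saturation cap are all assumptions chosen for clarity.

```python
import math

class Emotion:
    """Toy response/decay model of a single emotion: stimuli spike
    the intensity, which then decays exponentially over time."""

    def __init__(self, half_life=5.0):
        self.intensity = 0.0
        # decay rate chosen so intensity halves every `half_life` time units
        self.decay_rate = math.log(2) / half_life

    def stimulus(self, strength):
        # a new stimulus adds to the current intensity, saturating at 1.0
        self.intensity = min(1.0, self.intensity + strength)

    def step(self, dt=1.0):
        # exponential decay toward the zero baseline
        self.intensity *= math.exp(-self.decay_rate * dt)
        return self.intensity

fear = Emotion(half_life=5.0)
fear.stimulus(0.8)        # a threat appears
for _ in range(5):        # five time units pass: one half-life
    fear.step()
# intensity has decayed from 0.8 to about 0.4
```

On this picture, an agent's current goals could simply be a function of whichever emotion currently has the highest intensity, giving the direct emotion-to-action coupling described above.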

What, then, is wisdom? If emotions provide a way of rapidly and fluidly motivating action, the declarative dimension accounts for reflection on emotions, in the same way that declarative memory allows for reflection on perception.

It seems to me that, if this is right, we should be able to build a wise robot: by allowing it to reflect on its emotions. For example, a robot might decide that acting too quickly when it encounters a threat situation may not always be conducive to its own survival; much in the same way that it concludes that it is raining when it gets wet.
