
Another thought about Turing and Brooks

Rodney Brooks once wrote that robots would be human when treating them as though they were human was the most efficient way of interacting with them. (Not a precise quote.)

This is an interesting variation on the Turing test. It assumes that we judge the intelligence of machines in the context of frequent interactions with them. It also builds on an interesting idea: that in order to deal with another entity, be it human, animal, or mineral, we naturally build an internal model of it: how it behaves, what it can do, how it is likely to react to stimuli, and so on. Such a model exists for every entity we interact with: a rock is not likely to kick you back; your word processor will likely crash before you can save the document.

When the most effective way to predict a machine's behavior is to assume that it has an internal structure similar to our own, it will, for all intents and purposes, be human.

So, here is another thought: how do we know that another human is human? Although this sounds flippant, there are many instances where we forget that another person is a real person: soldiers must do this in order to carry out their job; terrorists must dehumanize their enemy in order to justify their atrocities.

I think that we only really recognize another person as human when we can relate to them personally. In most cases, what that means is recognizing the other person's behavior as symptomatic of something that we ourselves have experienced. In effect, the model-building process consists largely of seeing someone's reaction to an event and relating it to something that we ourselves have experienced. (An aside: how often, when told of some event or situation as it affects someone we know, do we react by quoting something analogous from our own past or situation?)

At the heart of this phenomenon is something curious: conventionally, the Turing test is phrased so as to decide whether the other entity is human or not. However, it may be more accurate to say that what we do every day is try to decide whether we ourselves could somehow be the other person (or entity) we are interacting with. Furthermore, it may be that this empathizing is only possible because, fundamentally, we are all 99.99% clones of one another: we are all running the same operating system in our minds, as it were. We can predict the other person's responses because they could be our responses too.

What does this mean? Well, perhaps we need a new formulation of Turing's test: an entity can be considered human if we believe that we would react the way the entity reacts had we been that entity. Another consequence may be that machines may be smart and intelligent, and yet not human, simply because the code that they run is not our code. A cultural difference between people and machines, if you will.
