Another thought about Turing and Brooks

Rodney Brooks once wrote that robots would be human when treating them as though they were human was the most efficient way of interacting with them. (Not a precise quote.)

This is an interesting variation on the Turing test. It assumes that we judge the intelligence of machines in the context of frequent interactions with them. It also builds on an interesting idea: that in order to deal with another entity, be it human, animal or mineral, we naturally build an internal model of it: how it behaves, what it can do, how it is likely to react to stimuli, etc. That model exists for every entity we interact with: a rock is not likely to kick you back, your word processor will likely crash before you can save the document, etc.

When the most effective way to predict the behavior of a machine is to assume that it has an internal structure similar to our own, then it will, for all intents and purposes, be human.

So, here is another thought: how do we know that another human is human? Although this sounds flippant, there are many instances where we forget that another person is a real person: soldiers must do this in order to carry out their jobs; terrorists must dehumanize their enemy in order to justify their atrocities.

I think that we only really recognize another person as human when we can relate to them personally. In most cases, what that means is recognizing the other person's behavior as symptomatic of something that we ourselves have experienced. In effect, the model-building process consists largely of seeing someone's reaction to an event and relating it to our own experience. (An aside: how often, when told of some event or situation affecting someone we know, do we react by quoting something from our own past or situation that is somehow analogous?)

At the heart of this phenomenon is something curious: conventionally, the Turing test is phrased so as to decide whether the other entity is human or not. However, it may be more accurate to say that what we do every day is try to decide whether we ourselves could somehow be that other person (or entity) we are interacting with. Furthermore, it may be that this empathizing is only possible because, fundamentally, we are all 99.99% clones: we are all running the same operating system in our minds, as it were. We can predict the other person's responses because they could be our responses too.

What does this mean? Well, perhaps we need a new formulation of Turing's test: an entity can be considered human if we believe that we would react the way the entity reacts had we been that entity. Another consequence may be that machines may be smart and intelligent, but not human, simply because the code that they run is not our code. A cultural difference between people and machines, if you will.
