Rodney Brooks once wrote that robots would be human when treating them as though they were human was the most efficient way of interacting with them. (Not a precise quote.) This is an interesting variation on the Turing test. It assumes that we judge the intelligence of machines in the context of frequent interaction with them. It also builds on an interesting idea: that in order to deal with another entity, be it human, animal, or mineral, we naturally build an internal model of it: how it behaves, what it can do, how it is likely to react to stimuli, and so on. We hold such a model for every entity we interact with: a rock is not likely to kick you back; your word processor will likely crash before you can save the document.

When the most effective way to predict the behavior of a machine is to assume that it has an internal structure similar to our own, then it will, for all intents and purposes, be human. So here is another thought: how do we know that another human is human?...