Friday, August 18, 2006

The Zeroeth Law of Humanics - I

Popularly known as Asimov's laws, though he was neither the first to think of them nor the first to formulate them in these words, the three laws of robotics are a very good starting point for a methodical discussion on human behaviour. I'll do you, the reader, the disservice of writing down the laws again. I must crave your indulgence for the manner in which I shall write them, even though it is not a personal idiosyncrasy that dictates that the laws be written down in this order. But I'm getting ahead of myself.

The third law of robotics: A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.

The second law of robotics: A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.

The first law of robotics: A robot may not harm a human being, or, through inaction, allow a human being to come to harm.

(One very important thing to note about these definitions is that they are very precisely stated. One feels obliged to point out the use of the word "may" to imply permission and not "possibility". Thumb firmly on the nose; little finger steadfast in pointing outwards and upwards; the other fingers waving slightly out of sync with each other about their mean position! There MAY be only one acceptable definition for the word "MAY" when not used as a noun.)

These rules are not so much a set of limitations for a state of mind as guidelines for a way of life.

Think of everything as a hierarchy of systems. And please do not think of this as a moral judgement. I do not make moral judgements based on my moral code.

Every system manages to make its subsystems comply. This compliance can be forced, voluntary, or a mixture of both. Dictatorships work on forced compliance. Familial systems work on a mixture of authority, blind faith and voluntary compliance. The most unpredictable systems work purely on voluntary compliance, because no form of coercion is imposed to ensure the compliance of subsystems.
Successful systems are those that manage to keep a high degree of compliance, or at the very least manage to achieve their goals with whatever degree of compliance they do get.

(This is essentially the import of the second law of robotics.)

Each system aims for self-preservation. A system that kills off its subsystems without being able to replace them will not manage to retain enough subsystems under it. No system can survive without subsystems. Every system hence has to have, as one of its rules, the need to ensure the safety and welfare of its subsystems. This is where "free will" comes in. Every subsystem must be given the freedom to choose what it really wants to do.

(This is the import of the first law.)

The exercise of such a choice might be censored to various degrees, depending on the nature of the system and the subsystems. The system tells its subsystems that free will is acceptable as long as the choice does not threaten the existence of the system. The system has, as one of its rules, the possibility of disassociating from itself any subsystem that threatens the system. This disassociation might be in the form of terminating the existence of a subsystem, or merely cutting off diplomatic relations.

(This is the third law.)

These three laws help to maintain a system. Most systems we use, know, or see that involve any entity capable of expressing "free will" survive because they follow these laws.

The next post will do you the disservice of an explanation by bringing in the Zeroeth Law of Humanics...

2 Comments:

Anonymous Anonymous said...

This comment has been removed by a blog administrator.

12:23 pm  
Blogger Sriharsha Salagrama said...

I like the comments on my post to bear some connection to the post.. so your comment gets deleted witandwisdumb.. :)
This is no coincidence!

11:56 am  
