Science Fiction and Artificial Intelligence


How Aware Are Self Aware Machines?


It was Percy Bysshe Shelley who wrote, “He hath awakened from the dream of life.”

And almost a year ago, I came across an article claiming that engineers and scientists had succeeded in inducing a degree of self-realisation in a robot: it could perceive its own body.

So, for many valid and obvious reasons, one could say that robotics was experiencing its moment of awakening from a long sleep.

Robot that can perceive its body

This one is a lot older: a robot learns to recognise itself in the mirror.

Meaning: in both scenarios the robot became aware of its existence and its surroundings. From these experiences and visual interactions, the robot evolved a set of perceptions, an imagery-based identity for itself and its environment. It had managed to cultivate a micro-level perception, a precursor to complete self-awareness.

However, whether these self-aware robots are a technological Adonis in the making remains to be seen.

In Greek mythology, Adonis was a mortal famed for his extraordinary beauty, the beloved of Aphrodite.

Artificial Intelligence that would make robots self-aware requires a great deal of research, and advanced Machine Learning algorithms that let the subject, an intelligent robot, develop a strong perception of self, which would in time culminate in individual identity. At that point a robot would know its rights, and a few might even demand equal rights for their kind.

Here it is important to note that Adonis, too, possessed a deep sense of self-realisation, just like any human being. Yet Adonis, who according to the myth was bestowed with immense beauty, did not become obsessed with himself.

And what is the likelihood of self-aware robots turning not into an Adonis but into a Narcissus, self-obsessed, attempting to alter the base code of their own kind? Can you imagine what that could mean for the overall wellbeing of humankind?

The human genome can produce near-infinite combinations, giving each person a different mix of traits, some of them shared. Yet when we compare any two people, no one's personality is an absolute replica of another's. Research shows this holds even for identical twins: a large percentage of their genetic combinations are identical, yet their personalities vary drastically.

Let us consider a hypothetical scenario.

Twin sisters are asked to solve a mathematical equation. 

Both will take their own time to solve it, and if we scanned their brains while they worked, the scans would show easily identifiable variance, based largely on each sister's ability to understand and solve complex problems.

Being twins, one might expect them to exhibit similar brain activity while solving the equation. But that is not the case.

Why so?

Being twins does not guarantee they will be alike in every respect. They may be sharply similar in outward appearance and facial expressions, but the more complex genetic traits, the ones that shape a personality, tend to vary. So much so that even if one could not recognise them by appearance, one could easily tell them apart by personality.

In nature there are no duplicates or exact clones. Every individual is unique, even within the same genus or species. There are always a few exceptions, and we know them as genetic anomalies.

Have you ever noticed the patterns of veins on leaves? No two are ever the same, just like our fingerprints.

If nature created replicas or duplicates, it would run the risk of being exploited, leading a genus into a survival crisis. Duplication in nature would mean the genus could not evolve; unable to evolve, it would be unfit to compete, and would go extinct.

Nature eliminates every possibility of organic duplication. It cannot afford that genetic weakness: replication would make a genus extremely vulnerable.

And this is my point about self aware robots.

As long as a robot is treated as a machine, intelligent and expected to fulfil a certain set of tasks, everything is fine, in perfect symphony with nature and its ways.

However, the real problem surfaces when we develop self-aware machines that learn from daily experience and evolve their code accordingly. For self-aware, intelligent machines, the genetic material is the code, the algorithm that defines their actions and their purpose of existence.

Now imagine a company mass-producing self-aware robots to serve as policing agents on the streets. They all share the same base code, they are managed by the same principal algorithm, and they learn and adapt from their daily experiences.

Now let us assume a few of these self-aware patrol robots are assigned to a spot in Colorado, while a few others are expected to maintain peace and order in Washington DC.

The robots in Colorado join a pursuit after a heist. They chase the suspect (a human) for many hours. Finally, when their algorithmic input suggests the suspect is gaining distance, one of the robotic guards draws its gun and shoots the suspect. It is this robot's first human killing. The robots approach the body, and per their software programme they are supposed to check:

  • Breath
  • Body temperature 
  • Pulse

They run these checks, and finally conclude that the subject is “DEAD.”

And that is where it gets complicated.

For the self aware machines, the subject is “DEAD.”

And for them “Dead” means “Terminated.” The robot's understanding of death rests on three counter-checks:

  • Breath
  • Body temperature 
  • Pulse
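The robot's three-point check could be sketched as follows. This is purely an illustration of the scenario above, not any real system; the function name and the temperature threshold are my own assumptions.

```python
def is_dead(breathing: bool, body_temp_c: float, pulse_bpm: int) -> bool:
    """Hypothetical three-point check a patrol robot might run.

    The subject is flagged "DEAD" only when all three vital signs fail:
    no breath, body temperature below a viability threshold, no pulse.
    The 30 degree cutoff is an assumed figure for illustration.
    """
    no_breath = not breathing
    cold = body_temp_c < 30.0
    no_pulse = pulse_bpm == 0
    return no_breath and cold and no_pulse

# The robot's entire concept of death reduces to a single boolean:
print(is_dead(breathing=False, body_temp_c=24.5, pulse_bpm=0))   # True
print(is_dead(breathing=True, body_temp_c=37.0, pulse_bpm=72))   # False
```

Notice how little this boolean carries: everything death means beyond three sensor readings is simply outside the function.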

What this self-aware robotic cop, the one that shot the suspect, does not understand is the larger meaning of death.

  • The pain and suffering it brings to the parents of the deceased
  • The emotional trauma of that special someone he/she loved
  • The suffering of those whom he/she supported financially, emotionally and otherwise
  • The loss of a life that cannot be manufactured in any robot factory
  • The loss of dreams, hopes, desires and much more

For a self-aware machine at this stage of production, these aspects of human life mean nothing, because its algorithm is not evolved enough to consider the emotional dimensions of human life.

Anyway, that was about the robotic cops in Colorado.

Now in Washington DC, the self-aware robots come across a scene where a 12-year-old kid has somehow got hold of a gun and is firing indiscriminately, although nobody has been hurt yet.

The robots need to act and bring the situation under control before there is loss of life. They warn the kid, but he is heedless of their calls to surrender; at his age he cannot grasp the consequences. One of the bulletproof robotic cops begins to follow him, and during the chase the kid arrives at a highway crossing where cars are speeding from all directions. The robotic cop keeps up the pursuit; the kid runs with all his might, and as he reaches the middle of the road, a truck is bearing down on him at top speed, bound to hit him, a calculation the kid, in his fear and at his age, is unable to make.

The robotic cop sees the truck approaching and runs a little faster to avoid being smashed by it. But the same truck hits the kid, and the kid is dead. The robotic cop could easily have saved him, by breaking off the pursuit at that spot, or by lifting him clear of the truck.

This raises an important question: why did the robotic cop not give up the chase at that juncture, or act otherwise?

As I said, Machine Learning code is only as intelligent as its exposure to real-life scenarios and situations. In this case, the robotic cop had never faced such a situation, but its code had clear instructions that it would adhere to unfailingly:

<<<<< Instruction >>>>>

Pursue the subject and arrest or terminate.

In case of a juvenile: pursue and subdue. Do not terminate.

<<<<< End of command >>>>>

And in this case the robotic cop did not violate the protocol. It followed the embedded instructions and kept chasing the subject relentlessly, until the subject was hit by a truck and killed.
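The hard-coded protocol from the scenario might be sketched like this; everything here is hypothetical, invented to illustrate the point. What matters is what is absent: no branch ever checks the environment.

```python
def pursuit_action(is_juvenile: bool, gaining_distance: bool) -> str:
    """Hypothetical pursuit protocol from the scenario above.

    Note what is missing: there is no check for environmental hazards
    such as a highway crossing, so no branch ever says "stop chasing."
    """
    if is_juvenile:
        return "pursue_and_subdue"    # never terminate a juvenile
    if gaining_distance:
        return "terminate"            # suspect escaping: use the gun
    return "pursue_and_arrest"

# The Washington DC case: a juvenile, still escaping.
print(pursuit_action(is_juvenile=True, gaining_distance=True))   # pursue_and_subdue
```

Whatever the inputs, the function always returns a pursuit order; abandoning the chase is simply not in its vocabulary, which is exactly why the robot ran the kid onto the highway.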

In both scenarios, had the cops been human, the reactions would have been different, and I am 100% sure both lives could have been saved.

AI accidents: a new threat

So this brings us to the main question. AI develops decision-making capability by training its models on learning experiences, but for AI to attain perfection, what cost are we humans willing to pay to train these models? Isn't there an alternative way to train them?

I think there is, and it is quite simple; it would save AI companies much of the time they invest in training complex models. Explaining that solution would take a while, though, so I shall not attempt it in this blog post. Here I will restrict my focus to the main topic: what will happen when machines develop a realisation of self, of existence?

Now that the first batch of robotic cops deployed in Washington DC and Colorado have killed a suspect each, they have registered Death, and the code that was in its adaptive stage has evolved into a hard-coded instruction that understands death as:

  • Breath
  • Body temperature 
  • Pulse

Nothing less and nothing more than this.

But you, the human readers of this blog post, know that death is not established merely by these three checkpoints. Death translates into far more than these three factors, and I have not even touched the spiritual side of death; if I did, the self-aware cop would resign from its post of deployment and need a complete system reboot.

But let me also confess: I love AI and Machine Learning. It can redefine the course of human development and progress; it has tremendous potential. I do tend to get a little irked, though, when I see AI specialists using tedious methods that lack common sense to train AI models. For God's sake, let us look at nature: nature does not train its models the way we train ours, because in nature there are no incidents or accidents like the (hypothetical) ones in Colorado and Washington DC. Nature too trains its models; if it did not, all flowers would look the same, and all leaves would share the same patterns of veins.

I am certain you would agree with me on this point!

That is enough of a hint to let the AI geeks ponder what model I am referring to when it comes to training AI. And trust me, it is easy, not complicated. We have complicated AI by forcing everyone to believe that AI models can only be trained in X or Y way.

If we do not take a more progressive, more advanced approach to training AI models, we should all prepare to witness accidents and incidents like Colorado and Washington DC in real life as well. Then filing a case against a robot, and seeing it discontinued from active service with its battery shut down, will be the verdict decreed by a court of law as the death penalty for a robot.

Yes, you read every word correctly. Please read on.

Will that be an assuaging act for those who lost their loved one? They lost a daughter, son, brother or sister; whereas for the robot on trial, what is lost is just a machine, a piece of code, and for the company that owned the robotic cop, it simply means ordering a new one. It is as simple, and as damnably complicated, as that!

Anyway, the cops have made their first kills. They are still patrolling the streets, learning from people's conversations, from interactions with humans, from different real-time situations. All these experiences act as a data feed that evolves their adaptive code, which is maturing and becoming more nimble. But there is one minute detail we are missing.

The human brain experiences a situation, acts, and then registers it as a memory.

A human in the same situation may surprise a self-aware robot, because the human may decide to act differently when placed in it again. That is the beauty of the human brain: it is not merely adaptive, it is flexible, in the sense that human action and reaction will vary when the same situation is presented again and again, and the magnitude of that variance depends on millions of factors. As of today, robots and machines with advanced AI code cannot accomplish this feat.

Please note: in humans the variance in action and reaction occurs at emotional, psychological and other levels, though the physical aspect may appear the same.

Maybe they will one day, but 2023 is too early for that. And given the way AI learning models are currently developed, it will take longer than expected to achieve this level of exactitude.

Robots can beat us at chess, very easily now. They will soon drive faster than humans. They can conduct complex surgeries. They can write better than you and me, or so the claim goes. But let me see whether AI, or any existing super-intelligent machine, can understand the essence of the following lines and experience goosebumps:

For oft, when on my couch I lie,

In vacant or in pensive mood,

They flash upon that inward eye,

Which is the bliss of solitude;

And then my heart with pleasure fills,

And dances with the daffodils.

I am sure it knows the following facts:

  • It is a poem
  • It is about nature
  • It is allegorical in a way
  • It is written in English
  • The poet is one of the most famous nature poets: William Wordsworth
  • It was written in 1804 and published in 1807
  • Then it will offer a summary of the poem which, I am sure, would earn an A+ in an English literature exam

But can it feel the poet's passion, his submission to nature, the joy of seeing daffodils sway, the ease of heart one experiences in the midst of nature, where beauty lies everywhere? Most important of all, can it understand the magic of words when they rhyme and speak to the reader's soul:

And then my heart with pleasure fills,

And dances with the daffodils.

Can it?

I doubt if it can. 

Because developing an algorithm this complex and sensitive will need a lot of money and, most important of all, a lot of patience. And in 2023, patience is a virtue that seems to be growing very uncommon.

But it will be the generation of AI that achieves this level of sensitivity that is fit to be part of human society and the daily affairs of human life.

We have not achieved it yet, so let us give it time and be a little more patient. AI is like a child; it will mature better if it grows organically. But in the mad rush for AI supremacy, many experts are injecting AI, in the garb of learning models, with algorithmic steroids, and by indulging in these practices we are bound to create AI monsters that will eventually turn rogue and become unmanageable.

But let us not lose hope and let us not stop pursuing perfection in AI. It is achievable, only if we do not rush and give it time to become better.

Good sides of AI

When they become self-aware, it is quite possible they will become obsessed with the belief that they are the supreme race. After all, they are scaling the efficiency of their adaptive code on daily experience, and once they engage with an “Open Learning Ecosystem” it would not be wrong to expect them to develop a perception of supremacy as a race. I would call it becoming aware of their genus: the genus of “thinking machines.”

It is possible and we cannot rule out this possibility.

Now, how AI logic places checks on this tendency will be a great scientific feat to achieve: AI models growing into an Adonis that is hard-coded not to become self-obsessed.

Nature too has its exceptions here; how we humans might bypass what even nature has not bypassed will be a wonderful logical journey to embark on.

And if we are to achieve this level of efficiency in AI, we ought to understand this simple but deeply thought-provoking line by Blaise Pascal:

“The heart has its reasons which reason knows nothing of.” 

Whereas AI is based on reasoning and conditional statements alone. Human life, on the contrary, is a balance between mind and heart; if it is not, then perhaps we are talking about Frankenstein's Monster and not a human!

Frankenstein’s Monster

I shall let you decide whether, via self-evolving AI, we wish to create Frankenstein's Monsters, or something angelic yet powerful and intelligent: a breed of AI with the qualities and intelligence of the Angel Gabriel.

Angel Gabriel

That choice is essential if we wish to avoid the serpent, adamant on luring us into prematurely believing we have achieved AI perfection. It takes great courage and moral strength not to submit to the serpent's cunning schemes. Some tech companies have these qualities, and they should be entrusted with the task of developing AI; it should in no way become the playground of every tech company, because the risks are high and there is a lot at stake.
