The Self-Aware Artificial Intelligence

Aaditya Verma
10 min read · Dec 28, 2020

Artificial Intelligence gained fame back in the 1980s, when a form of AI program known as the expert system was adopted by corporations around the world. Over time, as the field advanced, scientists began dividing AI into Machine Learning, Neural Networks, Robotics, Fuzzy Logic, and so on. More broadly, AI is classified into four types, one of which is “The Self-Aware Artificial Intelligence”.

A Symbolics Lisp Machine.

Growing up as Marvel fans, we’ve all wondered whether J.A.R.V.I.S. or Iron Man could ever really exist. Fortunately yes, but for that to happen we would need far more advanced logic and hardware. In fact, Facebook’s founder, Mark Zuckerberg, does have a home AI system that can perform most everyday tasks, but it is nowhere near the J.A.R.V.I.S. we’ve seen in the Avengers films. One of the most prominent AI programs in use right now is IBM Watson.

IBM Watson.

Since computers can perform tasks more quickly and accurately than we can, a self-aware AI would be a real game-changing technology.
The introduction of such tech could entirely change the way we function, and the medical field in particular would feel the greatest impact from the arrival of self-aware AI.

Present Situation of AI

BIRL

If we consider the current state of AI, we have certainly achieved real progress. Take Bayesian Inverse Reinforcement Learning (BIRL) as an example. It learns an agent’s objectives, values, or rewards by scrutinizing its behavior, and it attempts to compute full policies or plans in advance. The newest such model was accurate 75 percent of the time in inferring goals. And if we look at our own lives, the most familiar everyday examples pointing in this direction are Siri, Alexa, Bixby, and the like. Research has never stopped, and in fact we keep obtaining positive results to this day.
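
To make this concrete, here is a minimal toy sketch of Bayesian goal inference in Python. The corridor world, the rationality parameter, and all the numbers below are invented for illustration; the point is only to show how observing behavior updates a posterior over possible goals, not to reproduce the actual model behind the 75 percent figure.

```python
import numpy as np

# Toy Bayesian goal inference on a 1-D corridor (positions 0..9).
# We watch an agent move and infer which cell it is heading for.
N = 10                      # corridor length
goals = np.arange(N)        # candidate goals (one per cell)
prior = np.ones(N) / N      # uniform prior over goals
beta = 2.0                  # assumed "rationality" of the observed agent

def action_likelihood(pos, action, goal):
    """Boltzmann-rational choice between moving left (-1) and right (+1)."""
    candidates = [-1, +1]
    # Utility of an action = negative distance to the goal after taking it.
    utils = np.array([-abs(min(max(pos + a, 0), N - 1) - goal) for a in candidates])
    probs = np.exp(beta * utils)
    probs /= probs.sum()
    return probs[candidates.index(action)]

def posterior_over_goals(trajectory):
    """trajectory: list of (position, action) pairs observed so far."""
    post = prior.copy()
    for pos, action in trajectory:
        for g in goals:
            post[g] *= action_likelihood(pos, action, g)
    return post / post.sum()

# The agent starts at 2 and keeps moving right: posterior mass
# shifts toward goals at the right end of the corridor.
observed = [(2, +1), (3, +1), (4, +1)]
print(np.round(posterior_over_goals(observed), 3))
```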

What is the target audience?

Well, there is no limit to that. From autonomous vehicles to digital assistants, maps, and many other products, anything that uses AI in some form can make our lives more comfortable. Some of these are already part of our daily routine and provide us with nearly accurate results. The introduction of more intelligent machines would reduce workloads and increase the accuracy of the respective tasks. Last but not least, if you ever feel lonely, you would always have a digital assistant on your phone or another device with whom you can crack jokes and hold a conversation without feeling that you’re talking to a robot.

Idea!

“To replicate ourselves, we first have to embrace human error.”

~Hugh Howey

It is obvious that for a machine to think, analyze, and, most importantly, respond like us, we would need advanced logic with the appropriate hardware. So without further ado, let’s get into it.

THE BLUEPRINT for a self-conscious machine:

  1. Let us consider a physical body or structure that responds to outside stimuli. Not a problem, as we’re already building these.
  2. A language engine. (It can be as linguistically shrewd as IBM’s Watson.)
  3. The third component is a bit more unusual, and I don’t know why anyone would build it except to reproduce evolution’s bungled mess. This final component is a separate thing: it observes the rest of the body and makes up stories about what it is doing, stories that are usually wrong. (A minimal sketch of how these pieces might fit together follows below.)
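
Purely as an illustration, here is a tiny Python sketch of how these three pieces could be wired together. Every class name, reflex, and story template below is made up for this example; the only point is that the third component watches what the body did and invents an explanation that need not be the real cause.

```python
import random

class Body:
    """Component 1: a structure that responds to outside stimuli with simple reflexes."""
    def react(self, stimulus):
        reflexes = {"obstacle": "swerve", "low_battery": "seek charger"}
        return reflexes.get(stimulus, "idle")

class LanguageEngine:
    """Component 2: turns internal symbols into sentences."""
    def say(self, action, explanation):
        return f"I chose to {action} because {explanation}."

class Narrator:
    """Component 3: observes the body and makes up a story about why it acted."""
    def explain(self, action):
        guesses = {
            "swerve": ["I was startled", "I wanted a better view"],
            "seek charger": ["I felt tired", "I was hungry"],
            "idle": ["I was thinking", "nothing interested me"],
        }
        return random.choice(guesses[action])   # usually not the real cause

body, mouth, narrator = Body(), LanguageEngine(), Narrator()
action = body.react("low_battery")
print(mouth.say(action, narrator.explain(action)))
# e.g. "I chose to seek charger because I felt tired."
```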

Awareness

Applying what we know about the Theory of Mind and disconnected modules, the first thing we would build is an awareness program. Awareness programs are quite simple and already exist in bulk. Using the technology currently available, we decide that our first machine will be very much like a self-driving car. For many years, the biggest hurdle in achieving truly autonomous vehicles has been the awareness apparatus: the sensors that let the vehicle know what’s going on around it. Enormous progress in this field has provided the sight and hearing that our machine will employ.
With these basic senses, we then use machine learning algorithms to build a repository of behaviors for our AI car to learn. Unlike the direction most autonomous-vehicle research is taking, where engineers want to teach their car how to perform certain tasks safely, our team will instead be teaching an array of sensors all over a city grid to watch other cars and guess what they’re doing. That red van is pulling into a gas station because “it needs power.” That car is wasted. That one can’t see very well. That other one has slow reaction speeds. That one is full of adrenaline.
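
As a hedged sketch of that “watch other cars and guess” idea, the snippet below trains a toy classifier that maps a few invented motion features to invented behavior labels. The features, labels, and numbers are all hypothetical; a real system would learn them from city-scale sensor data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical feature vector per observed vehicle:
# [speed_m_s, lateral_jitter, reaction_delay_s]; rows and labels are made up.
X_train = np.array([
    [ 2.0, 0.1, 0.4],   # slowing and turning off the road
    [15.0, 2.5, 1.8],   # fast and weaving
    [ 8.0, 0.2, 2.5],   # normal speed, very late responses
    [ 1.5, 0.2, 0.5],
    [14.0, 2.0, 1.5],
    [ 9.0, 0.3, 2.8],
])
y_train = ["refueling", "erratic", "slow_reactions",
           "refueling", "erratic", "slow_reactions"]

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_train, y_train)

# A new observation from the city-wide sensor grid: probably "erratic".
print(model.predict([[13.5, 2.2, 1.6]]))
```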

Memory

Conscious events interact with memory systems in learning, rehearsal, and retrieval. The IDA model explains the role of consciousness in updating perceptual memory, transient episodic memory, and procedural memory. Transient episodic and declarative memories have distributed representations in IDA, and there is evidence that this is also the case in the nervous system. In IDA, these two memories are implemented computationally using a modified version of Kanerva’s Sparse Distributed Memory architecture.
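
For readers who have not seen Kanerva’s architecture before, here is a minimal sketch of a sparse distributed memory in Python. The word length, the number of hard locations, and the activation radius are toy values chosen for this example, not the parameters used in IDA.

```python
import numpy as np

rng = np.random.default_rng(0)

N_BITS = 256          # word length
N_LOCATIONS = 2000    # number of hard locations
RADIUS = 112          # Hamming-distance activation radius

addresses = rng.integers(0, 2, size=(N_LOCATIONS, N_BITS))   # fixed random addresses
counters = np.zeros((N_LOCATIONS, N_BITS), dtype=int)        # learned contents

def _active(address):
    """Hard locations within RADIUS of the query address."""
    dist = np.sum(addresses != address, axis=1)
    return dist <= RADIUS

def write(address, data):
    """Store a binary word by incrementing/decrementing counters (+1 for 1, -1 for 0)."""
    mask = _active(address)
    counters[mask] += np.where(data == 1, 1, -1)

def read(address):
    """Recall by summing counters of active locations and thresholding at zero."""
    mask = _active(address)
    return (counters[mask].sum(axis=0) > 0).astype(int)

pattern = rng.integers(0, 2, size=N_BITS)
write(pattern, pattern)                       # autoassociative store
noisy = pattern.copy()
noisy[:20] ^= 1                               # corrupt 20 bits of the cue
recovered = read(noisy)
print("bits recovered correctly:", np.sum(recovered == pattern), "/", N_BITS)
```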

Learning

Learning is also considered necessary for artificial consciousness (AC). According to Bernard Baars, conscious experience is needed to represent and adapt to novel and significant events. Learning itself has been described as a set of phylogenetically advanced adaptation processes.

Anticipation

The ability to predict upcoming events is considered important for AC. The multiple-drafts principle proposed in the research literature may be useful for prediction. Anticipation includes predicting the consequences of one’s own proposed actions as well as those of other entities.
Relationships between real-world states are mirrored in the state structure of a conscious organism, enabling the organism to predict events. An artificially conscious machine should be able to anticipate events correctly so it is ready to respond to them when they occur, or to take preemptive action to avert anticipated events. The implication here is that the machine needs real-time components that build dynamic, statistical, and functional models of the world in order to show that it possesses artificial consciousness. To achieve this, a conscious machine would be making predictions and contingency plans, not just in worlds with fixed rules like a chessboard, but also in novel environments that are subject to change.
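
As a small, hypothetical illustration of such a predictive component, the snippet below keeps online transition statistics between observed events and anticipates the most likely next one. The event names are invented; a real machine would build far richer dynamic models than a one-step frequency table.

```python
from collections import Counter, defaultdict

# Online model: count transitions between observed events, predict what comes next.
transitions = defaultdict(Counter)

def observe(prev_event, next_event):
    """Update the statistical model with one observed transition."""
    transitions[prev_event][next_event] += 1

def anticipate(current_event):
    """Return the most likely next event, or None if we have no data yet."""
    counts = transitions[current_event]
    return counts.most_common(1)[0][0] if counts else None

# Feed in a short history of world events, then plan around the prediction.
history = ["light_green", "car_moves", "light_yellow", "car_brakes",
           "light_red", "car_stops", "light_green", "car_moves"]
for prev, nxt in zip(history, history[1:]):
    observe(prev, nxt)

print(anticipate("light_yellow"))   # -> "car_brakes"
```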

Subjective experience

Subjective experience, or qualia, is considered to be the hard problem of consciousness. Indeed, it is held to pose a challenge to physicalism. On the other hand, there are problems in other fields of science that limit what we can observe, such as the uncertainty principle in physics, and they have not made research in those fields impossible so far.

Now, after we are done creating the Super AI model, it would have to undergo testing. Have you ever heard of the Turing Test? It is the most well-known method for testing machine intelligence, but it contradicts the philosophy-of-science principle of the theory-dependence of observations. Other tests, such as ConsScale, check for the presence of features inspired by biological systems. Qualia are a first-person phenomenon, whereas different systems may merely display different kinds of behavior correlated with functional consciousness. For these reasons there is no empirical definition of consciousness, and a test for the presence of consciousness in AC may be impossible for the time being. In 2014, Victor Argonov suggested a non-Turing test for machine consciousness based on a machine’s ability to produce philosophical judgments. He argues that a machine should be regarded as conscious if it is able to produce judgments on all the problematic properties of consciousness (such as qualia or binding) while having no innate philosophical knowledge of these issues and no informational models of other creatures in its memory. However, this test can only confirm, not refute, the existence of consciousness: a positive result proves that a machine is conscious, but a negative result proves nothing. For instance, the absence of philosophical judgments may be caused by a lack of intellect, not by an absence of consciousness.

For us to create a self-aware AI, we primarily need to analyze human behavior: our ability to respond to various situations, human urges, our learning system, and so on. Once the research is done and we are confident in the evidence about human consciousness, we would develop an algorithm. That algorithm is what our AI model will use to learn from its surroundings and make accurate decisions.

Budget!!

There was an article on the BBC about a new brain-inspired chip. For those of you who are not familiar with it, the chip suggests new, potentially cost-effective ways of executing heavy data and compute tasks. An intriguing detail is that its designers use “neurons” and “synapses” to describe the chip. I found that interesting, because we can use the human brain as a reference for what we might want to achieve.

Wikipedia suggests that the human brain holds about 86 billion neurons. (For those who don’t know what neurons are, you can always google it.) You may be surprised to learn that each neuron has, on average, about 7,000 synaptic connections to other neurons.

The fantasy of making Amy self-aware:

We would require RAM = 86,000,000,000 neurons × 7,000 synapses × 8 bytes, which is about 4.8 petabytes.

A cloud instance with 244 gigabytes of RAM costs around $2.8 per hour.

Now, talking about money:

(4.8 petabytes / 244 gigabytes) × 24 hours × 30 days × $2.8 per hour

= $39,659,016 per month. Calculated yearly, that comes to about $476 million.
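
The same back-of-envelope arithmetic, reproduced in Python so the assumptions are explicit (8 bytes per synapse, and cloud instances with 244 GB of RAM at roughly $2.8 per instance-hour, as above):

```python
# Back-of-envelope memory and cost estimate from the figures above.
neurons = 86_000_000_000          # Wikipedia's estimate for the human brain
synapses_per_neuron = 7_000       # average synaptic connections per neuron
bytes_per_synapse = 8             # assumption: one 8-byte value per synapse

ram_bytes = neurons * synapses_per_neuron * bytes_per_synapse
ram_petabytes = ram_bytes / 1e15                  # about 4.8 PB

gb_per_instance = 244             # assumed RAM per cloud instance
price_per_hour = 2.8              # assumed $ per instance-hour

instances = ram_petabytes * 1e6 / gb_per_instance           # PB -> GB, then per instance
monthly = instances * 24 * 30 * price_per_hour
print(f"RAM needed:   {ram_petabytes:.2f} PB")
print(f"Monthly cost: ${monthly:,.0f}")                      # about $39.8 million
print(f"Yearly cost:  ${monthly * 12 / 1e6:.0f} million")    # about $477 million
```

Keeping full precision gives about $39.8 million per month; rounding the RAM down to exactly 4.8 petabytes first reproduces the $39,659,016 figure above.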

But that’s not all: here we are building a much bigger and more complex AI system, and the research alone would require a lot of funding. After all the years of hard work and sweat, we would finally be on the verge of creating the most intelligent, bodacious, perspicacious, quick-witted AI model, one that would be no less than our fictional J.A.R.V.I.S., and perhaps far superior, since we haven’t even seen the full power of J.A.R.V.I.S.

Finally, we would invest in hiring engineers to code our AI model using the powerful algorithms developed over time. And there will be many other smaller expenditures that I cannot list right now.

Cost of product in the market?

Well, that’s a good question, because it’s impossible to say anything definite right now. After we’ve created our Super AI model, we can’t just release it to the market in its raw form. A number of changes would be required to make it stable for release, and all of this depends on the purpose the product is used for.
For instance, we might release our AI system in the medical field, spitting out cures for cancer, HIV, and so on, while on the other hand we might introduce it in factories or electronic devices. These uses are quite different from one another and would each need to be developed accordingly.
The assumption is that it would cost the most in the medical field; in the end, the cost would depend entirely on the purpose for which the system is to be used.

Let’s talk about our marketing plan.

With our self-aware AI, we would first start finding cures for diseases; it would be available primarily to researchers in the sciences, spitting out cures for most diseases. We humans never stop learning, so why stop there? The AI model would also be made available to NASA and other similar agencies for human development. Coming to our everyday lives, autonomous vehicles already exist, but with our AI model they would become much cheaper and more advanced, and much more besides. Who knows what discoveries would be made over time with the help of our self-aware AI? Maybe we will even be able to stream music directly into our minds, which is what the theories suggest Elon Musk is currently working on.

Analyzing risk.

It might happen that all our research goes in vain. What could be worse than being unable to develop an algorithm after years of research, or running out of funds to continue the research any further?
To be honest, it would be disastrous if one of these nightmares came true; it would completely wipe out the idea of a self-aware AI system. We cannot keep spending our resources if the research shows no output. Fortunately, that is not the case in the present state of AI research. Looking to the future, though, we might run into problems that could sweep the idea of such an AI off our minds entirely.
Let’s talk about the time we would need to launch a self-aware AI.
As I’ve mentioned already, research never stops. First we would have to figure out what makes us human (consciousness, awareness, and so on), which would take years. Then, once we understand ourselves, we would develop an algorithm, which would take months or maybe years; it all depends on the complexity of the algorithms. No doubt they are complex, but it also depends on our approach: will it make the development of the algorithm more complex, or easier than ever?
