Ex Machina: Consciousness Is More Than Complicated Instructions
A critique of the contemporary perception of artificial consciousness
The movie Ex Machina portrays a typical example of our contemporary culture’s unrealistic fear of a consciousness apart from our own. Artificial consciousness through computer code alone either cannot exist or already does, and here is why.
Ex Machina (2015) centers on a young programmer who wins a programming challenge and is sent to a secluded facility to meet a pioneering visionary of his time, who wants his help putting his newest invention, an artificially conscious machine, through his own modified Turing Test (a test devised by Alan Turing, a founding figure of computer science, to determine whether a computer could pass as “human”).
What we mean by “consciousness” is widely disputed even in the context of human consciousness, let alone the artificial kind. David Gamez writes in his paper on the measurement of consciousness: “there is little agreement about the nature of consciousness and it can be argued that our theories have failed to advance much beyond Descartes” (2014). For this reason, it becomes imperative that we agree upon definitions prior to the discussion, so that we all understand what we are talking about. Andrei S. Monin, in his paper on the definition of the concepts thinking, consciousness, and conscience, writes: “Consciousness is a process of realization by the thinking CS” (complex system) “of some set of algorithms consisting of the comparison of its knowledge, intentions, decisions, and actions with reality-i.e., with accumulated and continuously received internal and external information” (1992). Talking about artificial intelligence and artificial consciousness, it appears we are talking about much the same thing.
Michael Negnevitsky writes in his book Artificial Intelligence: A Guide to Intelligent Systems: “We can define intelligence as ‘the ability to learn and understand, to solve problems, and to make decisions’” (2005). Artificial intelligence would be an artificial version of this: a machine that can learn and understand, not to mention solve problems and make decisions. The same would go for artificial consciousness, except that consciousness traditionally has more to do with perception, and intelligence with problem solving. Nevertheless, even these two traits often amount to the same thing.
Learning, understanding, problem solving, and decision making are all important traits, no doubt, but what we really want to narrow it down to is the last feature Michael Negnevitsky mentioned: we are interested in knowing how we make decisions. What it all comes down to for us is the question: how does the entity in question make decisions, and what does that tell us?
This should be our first question upon meeting Ava, the supposedly artificially intelligent machine that enters the movie: not necessarily whether Ava understands, or whether she learns, or whether she perceives, but how she makes decisions. Take note, for instance, that an infant is still conscious and indeed intelligent just as long as it lives to make decisions. It is less important whether the baby really understands, or whether he or she is yet capable of solving problems at all. Furthermore, in meeting a brain-damaged person who has lost the ability to learn new things, one would still conclude that he was conscious and intelligent, precisely because despite his inability to learn, he still makes decisions and his decisions are his own. It is not until he stops making decisions or actions that one would consider him dead, hollow, or otherwise inhuman or lacking sentience.

Take into consideration also that this is not linked to action specifically. One could animate a dead body with strings and levers, but we would still conclude that the person was unconscious, because even though it appears that he makes decisions and actions, we can tell it is not in fact he that makes them but rather someone else behind him pulling the strings. An even more complex scenario: imagine one could send electrical input to his brain, making him stand up. Even now we can all recognize that he is not intelligent or conscious, even though it is he that performs the action, precisely because it was not he that made the decision to act. It was involuntary; it was by all means a way of circumventing the brain and taking direct control of his limbs. He is still dead and incapable of any measure of sentience. Therefore, we will narrow down consciousness and intelligence to, specifically, the capability to make one’s own decisions.
In recent years, a number of movies have come out either centering on artificial intelligence or heavily featuring it in some way. Taking Ex Machina as a typical example of contemporary views on the subject, we must conclude that there is significant skepticism surrounding it, a skepticism found in the academic world as well as in popular media. During a Q&A session on Reddit in January 2015, Microsoft co-founder Bill Gates said, “I am in the camp that is concerned about super intelligence,” referring to computer technology.
It is important to understand the misconceptions around our concepts here. When we talk about artificial intelligence, contrary to popular belief, we are not talking about machines that think in the same way we do. We are, however, talking about thinking machines; for example, a machine is thinking simply if it is calculating a math problem.
A simple calculator does this. But we never thought of our calculator as a thinking machine, did we? Neither would we ever conclude that pocket calculators might decide they are tired of calculating and stage an uprising.
Thinking is not enough. As a human, one does not simply think; one makes thoughts. We consider ideas, none of which are required of us or necessarily in line with our brain stimulation. Some, however, have questioned this, because how do we even know our thoughts are our own and not all derived from necessary brain stimulation? It is in essence the question: does one fall in love because one had a chemical reaction in one’s brain, or did one have a chemical reaction in one’s brain because one fell in love? There appears to be no certain way of telling. It is like the question: which came first, the chicken or the egg? As it stands, we remain the best observers of consciousness, from the inside of our own minds. David Gamez writes in his paper “The measurement of consciousness: a framework for the scientific study of consciousness”: “Consciousness can only be measured through first-person reports, which raises problems about the accuracy of first-person reports, the possibility of non-reportable consciousness and the causal closure of the physical world.”
As Gamez correctly points out, this is a problem, because how do we know we can trust our own experience? How does one know one made a decision? What if one thinks one made a decision but in reality just did what one’s brain told one to do? Although this ought to make us skeptical about any argument that relies on our own personal experience, it also underlines the fact that we do have our own experience. In fact, we all share this kind of personal experience, and is that not then a collective agreement on reality, even if not in a lab setting or a math equation? If our decisions are not our own, how come we are sitting here having a hard time figuring out what to decide in regard to our thinking? Is this entire essay not just one big example of our own individual reasoning? And is that reasoning not going to influence our thinking? It might not, but that is not the point. The point is that it could, and given that it could, we are in fact considering concepts on our own. Does that not then prove that we know from our own experience that we are conscious?
The responsibility for conscious decisions that are entirely up to ourselves permeates our everyday lives. For instance, when we wake up to go to work, our immediate brain stimulation might remind us that we want more sleep. Nonetheless, we are able to resist the urge to stay in bed, even though doing so might prove a challenge.
We are essentially “considering” the consequences of not getting up and concluding that they are greater than the immediately felt, more obviously emotional consequences of getting out of bed. But even then, even when concluding that the biggest gain in value would be to get out of bed, we still sometimes choose the lesser option for its ability to immediately please us; and just as often we choose the option that to our immediate senses seems unwanted but to our rational mind proves a necessary dissatisfaction.
There is no mathematically certain way of picking either one. We do it of our own volition, without restricting ourselves to any prior guidelines, regardless of whether such guidelines have been presented to us. One could certainly take an MRI of a brain and conclude that this is the type of person who is more likely to stay in bed, but that is in no way a certain assessment. It is simply a recognition that we all have preferences that we tend to lean towards. Any preference can be used as a tool to measure probability: simply because of our free will, we are more likely to do what we want to do. Even if there exist people who often do not do what they want, on average the numbers would lean in favor of those who do. As such, one could predict what a person might do before he does it, but that would not prove that he did not choose to do it. It is like our mom predicting which song we will put on in the car because she knows we like that song. Even though she knew before we did it, we still did it on our own. We could still have picked another one, and sometimes we do exactly that, and there is nothing stopping us.
Consciousness, according to the Oxford dictionary, can be said to mean “the state of being aware of and responsive to one’s surroundings.” But even this, computers today are able to do. For instance, our car’s GPS uses satellite data to determine our location. We have long been able to make machines that sense the space around them using lasers and other sensory equipment. The new Google car can drive itself using GPS and sensors that detect its surroundings, so strictly speaking, all of these machines are already conscious. But that is not what we mean when we say “artificial consciousness,” is it? It is not what we portray in our movies. What we mean is that it makes its own decisions: the way we make our own decision on whether to agree with this paper, or whether to go to sleep, or wake up, or eat, all of which are not dictated by stimuli but complemented by them. A brain might tell us to go eat, and the brain might be good at tempting us to do so, but still one might not go. One is capable of not following orders. And since one is, that implies that one’s brain is not in fact ordering one at all, even if some stimulating suggestions are harder to disobey than others. No matter how much one wants that ice cream, it is still conceivable that one might not eat it.
In Ex Machina, Nathan has created a machine, and it is our main character’s function in the plot to decide whether the machine in question is “conscious.” However, if consciousness is simply the ability to sense one’s surroundings, then he could have concluded that within the first five seconds of their first meeting. Already then, Ava is walking around, looking at and interacting with her environment, clearly perceiving it and thus conscious, even if she turns out to be completely hollow inside. By the same token, we could conclude that these movements require some programmed calculation, and thus she has to be thinking. So right off the bat, his work could have been done in the first five seconds: it is conscious and it is thinking. The end.
But hold on, that is not satisfactory, is it? Because this is clearly not what is meant by artificial intelligence. We want to know whether Ava is making her own decisions or whether there is a man sitting behind her pulling the strings. It is relatively easy to create a computer that “thinks” or that “perceives” its surroundings. We are not interested here in how the machine senses the world; rather, we are interested in how it makes decisions: what it would choose to do with its sensory information, and how it would choose. We are mainly focused on the artificial intelligence that would allow for the art of decision making, what the ancients might have called “freedom of the will.”
So what does that even mean? Well, as we showed in the analogy of the human getting out of bed, we all experience volition on a daily basis. We all know what it is like to be in charge of considering a topic or an idea, or even multiple ideas. We all have an insider’s experience of this. We could say we are each our own walking laboratory for studying free will.
It is obvious from all of our experience that humans possess this capability, but could machines do the same? For the time being, none of them do. They can all think, calculate, move, coordinate, and sense, but none of them exhibit any sense of free will. But could they? Could a computer have volition?
Well then, what is a computer? According to the Oxford dictionary, a computer is “an electronic device which is capable of receiving information (data) in a particular form and of performing a sequence of operations in accordance with a predetermined but variable set of procedural instructions (program) to produce a result in the form of information or signals.”
So, a computer is a machine that runs computer code. The original computer could be argued to be the Turing machine, a theoretical device Alan Turing described in 1936. During the Second World War, Turing helped design the Bombe, a machine at Bletchley Park built to crack coded Nazi German messages by trying a series of possibilities. The codebreakers knew that certain messages contained predictable phrases, such as “Heil Hitler,” so if a candidate setting produced those words, the machine had likely found the key that made sense of the message.
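This crib-based search can be sketched in a few lines of Python. To keep the sketch minimal, a simple Caesar shift stands in for the far more complex Enigma cipher; the names (`CRIB`, `crack`) are my own, chosen for illustration.

```python
# Toy version of crib-based codebreaking: try every possible key and
# accept the first one whose decryption contains the known phrase (the crib).

CRIB = "HEILHITLER"  # phrase expected somewhere in the plaintext

def decrypt(ciphertext, shift):
    """Undo a Caesar shift on uppercase A-Z text."""
    return "".join(
        chr((ord(c) - ord("A") - shift) % 26 + ord("A")) for c in ciphertext
    )

def crack(ciphertext):
    """Try all 26 shifts; return the first whose output contains the crib."""
    for shift in range(26):
        if CRIB in decrypt(ciphertext, shift):
            return shift
    return None
```

For example, `crack("KHLOKLWOHU")` returns `3`, because shifting that ciphertext back by three letters spells out the crib. The real Bombe searched a vastly larger key space, but the "if the crib appears, then the key is found" logic is the same.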
And this is basically how every computer works. There is a constant reliance on the “if this, then that” model. If “Heil Hitler,” then “code cracked.” All computers work like this in one way or another. There is a function to determine some value (if this), for example a sensor that determines whether a color is blue, and a function to tell the computer what to do if that value is true (then that), for example move to the left. Put together: if the sensor detects a blue color, then move to the left.
This is a rudimentary example, but even the most complicated computer programs work on this one principle. When making a decision, the machine is always referring back to the commands already present in its code. The code could be “pick a random number between 1 and 10; if that number is 6, then jump twice.” This is essentially computer code.
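The two rules above can be written out literally. This is a minimal sketch in Python (the essay names no particular language); the string inputs and action names are hypothetical stand-ins for real sensors and actuators.

```python
import random

def decide(color):
    # Rule 1: if the sensor detects blue, then move to the left.
    if color == "blue":
        return "move left"
    # Rule 2: pick a random number between 1 and 10; if it is 6, jump twice.
    if random.randint(1, 10) == 6:
        return "jump twice"
    return "do nothing"
```

Every path through `decide` ends at an action its author wrote down in advance; nothing outside those three strings can ever come out of it.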
Thus goes the argument: any code that relies solely on prewritten commands to make decisions is not going to be able to produce volition. Free will is never present because nothing is ever freely chosen, not in the way that we choose to get out of bed. In a computer, every decision is based on preexisting rules and values, no matter how complex.
Some computer programs have millions of lines of code: thousands of different commands resulting in a wide variety of possible actions or reactions. For instance, in Ex Machina, Nathan (the genius inventor) explains how he made Ava conscious by using “Bluebook” (his version of the Google search engine) to secretly record all human input into the computer, going as far as hijacking everyone’s web cameras and recording their facial expressions, building a library of millions upon millions of human behaviors, answers, questions, facial movements, and mannerisms. Putting this all together, he explains that Ava works because he made her a vast library of “if this, then that” commands. And though this is an unusually satisfying answer for the average moviegoer, who no doubt found it plausible, in reality this would not be consciousness. Ava, then, is not conscious. Even though she seems highly reactive, she is still deriving every single action from the commands in the code and not from her own volition. It just so happens that she has a large range of actions, but she never chose a single one of them. She never woke up considering whether or not to get out of bed, and if she did, she would only be doing it because she was mimicking the behavior of someone else.
Maybe if the color is blue she will pick a random action among three different options. But she still has not chosen anything on her own. She has essentially rolled a die and picked among preexisting options according to the value that came up. This is not free will. This is not volition. We can all agree that when we make a decision, we do not do so solely based on what our brain stimulation tells us to do in that situation, even if our brain stimulation might often succeed in tipping us in a particular direction. If that were true, every choice would be a lot easier to make. Nevertheless, we find ourselves existing in a brain that often finds it difficult to decide upon the right course of action or thinking; thus we clearly make decisions on our own and not based upon required commands. One could say that the brain is a filter through which we, as conscious beings, view the world. As such, brain stimulation can skew and influence what one does, but it can never fully dictate it, and if it could, we would all conclude that one is no longer a sentient being; one would be something separate from the rest of us, more like a machine.
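The “roll of the die” above can also be made concrete. In this minimal Python sketch (the option list is a hypothetical example), the outcome varies from run to run, yet the machine only ever selects among options its programmer wrote in advance:

```python
import random

# Preprogrammed repertoire: every possible action already exists in the code.
OPTIONS = ["wave", "smile", "speak"]

def react(color):
    if color == "blue":
        # Random, but never outside the list the programmer supplied.
        return random.choice(OPTIONS)
    return "idle"
```

Randomness widens the range of outcomes without adding anything like choice: no call to `react` can ever produce an action that is not already written into `OPTIONS`.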
Furthermore, we can all agree that we do not make our decisions randomly, as if by a throw of the dice. Even though we might sometimes do this, we must also realize that we mostly do not, and even when we do, we first made a free decision to do so. The conclusion to take away from this is that our current way of computing is not capable of actual artificial intelligence comparable to the volition of human beings. This, to clarify, is not due to limitations in current computing power; the bottom line seems to be that no matter how powerful our computers get, they will never be capable of free will.
If what we mean by “artificial intelligence” is the ability to comprehend, think, sense, calculate, and so on, then not only is it possible; as a matter of fact, the machine uprising has already happened, and we missed it. Indeed, if that is the case, then the cell phone and the pocket computer already have artificial intelligence. But if what we mean by artificial intelligence is “the capability for free will,” then the truth is that there will never be artificial consciousness as long as computers run solely on computer code.
Chalmers, D. J. “On the Search for the Neural Correlates of Consciousness.” Toward a Science of Consciousness: The Second Tucson Discussions and Debates (1998): 219–229.
Monin, A S. “On the Definition of the Concepts Thinking, Consciousness, and Conscience.” Proceedings of the National Academy of Sciences of the United States of America 89.13 (1992): 5774–5778. Print.
Negnevitsky, Michael. Artificial Intelligence: A Guide to Intelligent Systems. Pearson Education, 2005. 1–29.
Gamez, David. “The Measurement of Consciousness: A Framework for the Scientific Study of Consciousness.” Frontiers in Psychology 5 (2014): 714. PMC. Web. 20 Nov. 2015.