Is Artificial General Intelligence Possible?

We all know that general intelligence is possible. We have living, breathing proof right in front of the mirror. Humans ask wide-ranging questions, solve general problems, and pursue knowledge with an unending passion. Billions of years of evolution have produced at least one instance of general intelligence, but is it possible for that intelligence to create an artificial general intelligence (AGI) capable of similar feats of thought? So far we have not proven that this problem is solvable, but I am convinced that it is.

In my last post, I listed what I thought was a complete set of traits that defined intelligence. Without any one of these traits, intelligence would be questionable. With all of these traits present, we would almost certainly have intelligence. Now we'll explore these characteristics in more depth to see if and how it would be possible to develop them in an artificial intelligence. We'll start with what the AGI would use as input.

Can it make an observation?


The human body has incredibly high-resolution sensory organs for collecting information about the world around it. We have eyes that sense light with roughly 120 million rod cells and color with about 6 million cone cells. The human eye is complex, efficient, and versatile, yet it surpasses modern camera technology in only a couple of ways, namely its resolution and field of view. We have ears that are better than high-end microphones in some ways, but not in others. Our senses of smell and taste are much harder to replicate artificially, and the largest organ of the body, our skin, makes our sense of touch such a finely sensitive source of input that it is not likely to be duplicated with technology any time soon.

In some ways we could duplicate the input sources of the human brain for an AGI, but that may not be necessary. The Internet would provide more than enough input for that purpose. The real issue is the next step: what to do with all of that information. The human brain receives an astounding amount of input every second of every day, but it has powerful mechanisms for filtering that information and doing pattern recognition.

Pattern recognition is a big part of machine learning, and we're making great progress in pattern recognition algorithms for use in all kinds of applications like self-driving cars, intelligent assistants like Siri and Cortana, and a whole host of big data problems. I don't see pattern recognition as a big hurdle for AGI. It's already happening.
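To make that concrete, here is a toy nearest-centroid classifier. It is nothing like the deep networks inside a real self-driving car or voice assistant, and the feature vectors and labels are entirely made up, but it shows the core move of pattern recognition: map a new input to the closest pattern you have seen before.

```python
# A toy nearest-centroid classifier: the simplest flavor of pattern
# recognition. Real systems use far larger models, but the core idea
# -- map an input to the closest known pattern -- is the same.

def centroid(vectors):
    """Average a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train(labeled_examples):
    """labeled_examples: dict mapping a label to a list of feature vectors."""
    return {label: centroid(vecs) for label, vecs in labeled_examples.items()}

def recognize(model, x):
    """Return the label whose centroid is nearest to feature vector x."""
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(model, key=lambda label: dist(model[label], x))

# Hypothetical training data: 2-D features standing in for raw sensor input.
model = train({
    "stop_sign":  [[0.9, 0.1], [0.8, 0.2]],
    "pedestrian": [[0.1, 0.9], [0.2, 0.8]],
})
print(recognize(model, [0.85, 0.15]))  # -> "stop_sign"
```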

Being able to recognize a road sign or a car or a person isn't useful on its own. What do we do with the pattern after we recognize it? Going from recognizing patterns to knowing which patterns are important and choosing which patterns to use to solve a given problem may seem like a big step forward, but maybe not. A self-driving car is going to have to "know" what to do with the patterns it sees, so pattern appreciation will be a necessary part of a working system, and those systems are starting to work. The scale of pattern appreciation that a self-driving car would have to do is small compared to what a human is capable of, but there's no inherent reason that the number of pattern associations, comparisons, and classifications couldn't be increased dramatically. The mechanism for doing this would have to be automated. The AGI would need a way to make associations as part of a training phase instead of having them programmed in manually, but that doesn't seem like a limiting factor.
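As a rough illustration of what I mean by automated pattern appreciation, here is a small sketch that learns which response works best for each recognized pattern from training feedback, instead of having the mapping programmed in by hand. The pattern labels, responses, and reward values are hypothetical placeholders, not anything a real system would use.

```python
from collections import defaultdict

# A sketch of learned "pattern appreciation": keep running scores of how
# well each response to a recognized pattern has worked, then pick the
# response with the best average outcome.

class PatternAppreciation:
    def __init__(self):
        # (pattern, response) -> [total_reward, count]
        self.scores = defaultdict(lambda: [0.0, 0])

    def train(self, pattern, response, reward):
        stats = self.scores[(pattern, response)]
        stats[0] += reward
        stats[1] += 1

    def respond(self, pattern, candidate_responses):
        def avg(resp):
            total, count = self.scores[(pattern, resp)]
            return total / count if count else 0.0
        return max(candidate_responses, key=avg)

agent = PatternAppreciation()
agent.train("stop_sign", "brake", reward=1.0)
agent.train("stop_sign", "accelerate", reward=-1.0)
print(agent.respond("stop_sign", ["brake", "accelerate"]))  # -> "brake"
```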

The real question is whether an AGI would develop a sense of beauty around the patterns it sees. Humans have a higher appreciation for especially pleasing patterns. We call it art, or in the case where it's not considered art, like in mathematical proofs or chess games, we call it elegant or beautiful. I think a sense of beauty is an emergent property. Once an AGI has gained enough experience with enough different patterns, and it is making associations between those patterns, it will start to notice patterns that can be linked together to form more efficient, elegant solutions. At some point the lines blur and developing an appreciation for elegant solutions becomes an appreciation for beauty. Our sense of beauty for art and music is strongly related to how we recognize patterns. That trait in an AGI would be no different.

Learn by experiment


We learn about our physical environment as much through experimentation as we do observation. The two abilities are inextricably linked. From the moment we can flail our arms around and clutch things, we begin experimenting with concepts like gravity, causality, object permanence, material properties, fluid dynamics, thermodynamics, etc. We don't know these things by such technical names when we're first learning them, but we are learning them all the same. When your toddler is sitting in her highchair, picks up her spoon for the 5,000th time, and drops it on the floor, she's testing gravity. She's learning about cause and effect. She's seeing if this time will be like all of the other times she's let go of that spoon and it fell to the floor. Maybe this time will be different, she thinks, but each time it's confirmed that if she lets go of the spoon, it's going to go splat on the floor, full of mashed potatoes or not.

This constant experimentation is part of the human experience and important to our development and survival. The basic structure of experimentation is a pretty simple feedback loop, though. Nothing too complicated is going on. We have the ability to manipulate our environment, observe what happens when we do, and store that observation—pattern recognition and all—away in memory to be recalled later when a similar pattern arises. The fact that this feedback loop is so extremely tight and massively parallel is probably one of the main sources of our intelligence.

Being able to almost instantly recognize a pattern as a cause and remember its associated effect within the vast field of experiences we accumulate over years of observation is a monumental computational task. But the underlying mechanism is fairly straightforward, and nothing prohibits it from being implemented in an AGI. We may not fully understand all of the filtering and association mechanisms that the brain uses, but there is nothing fundamentally magical about this feedback loop.
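Here is roughly what I mean by that feedback loop, boiled down to a few lines of Python. The toy "world" and its outcomes are stand-ins for real sensors and actuators, but the act-observe-remember-recall cycle is the point.

```python
# A minimal sketch of the experiment-observe-remember loop described above.
# The world model here is a placeholder dictionary of made-up outcomes.

def toy_world(action):
    outcomes = {"drop_spoon": "spoon_hits_floor", "hold_spoon": "spoon_stays"}
    return outcomes[action]

memory = []  # list of (action, observed_effect) experiences

# Run each experiment several times, just like the toddler with the spoon.
for action in ["drop_spoon", "hold_spoon"] * 5:
    effect = toy_world(action)       # observe what the world does
    memory.append((action, effect))  # store the experience for later recall

def recall(action):
    """Return the effect most often observed after this action."""
    effects = [e for a, e in memory if a == action]
    return max(set(effects), key=effects.count) if effects else None

print(recall("drop_spoon"))  # -> "spoon_hits_floor"
```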

Once an experimentation loop is in place, the ability to learn cause and effect naturally follows. Events connected in time get associated with one following the other. Of course, more and better experience results in more accurate cause-and-effect relationships. Not everything that happens close in time has this relationship, so more data and purposeful experiments provide more proof of causality. The next step is that more high-level thought processes emerge. Understanding causality leads to the ability to plan ahead and to start predicting likely events. Such a thought process would look like forethought. If those thought processes turned out to be correct most of the time, it would look like the AGI was exhibiting good judgment. Underlying these seemingly high-level thought processes is massively parallel computation to run pattern recognition on an enormous data set, but that's all it needs to be.
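A crude way to picture how temporal adjacency becomes cause and effect: count which events most often follow which, and predict from the counts. The event stream below is invented, but it shows how enough repetitions let the real relationship win out over coincidental neighbors.

```python
from collections import Counter

# A sketch of turning temporal adjacency into rough cause-and-effect
# statistics: tally (event, next_event) pairs from a stream of
# observations, then predict the most frequent follower.

events = ["let_go", "splat", "let_go", "splat", "bird_sings",
          "let_go", "splat", "let_go", "bird_sings"]

following = Counter(zip(events, events[1:]))  # adjacent event pairs

def predict(event):
    """Most frequent event seen immediately after `event`."""
    candidates = {nxt: n for (ev, nxt), n in following.items() if ev == event}
    return max(candidates, key=candidates.get) if candidates else None

print(predict("let_go"))  # -> "splat" (3 of 4 times; the noise doesn't win)
```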

Problem solving could be another example of an emergent behavior. Given a problem that needs solving, a certain amount of pattern recognition, experimentation, and planning should result in a strong ability to solve that problem. Finding the right pattern that leads to an applicable experiment and eventually a good solution ends up looking like insight. Most of the great discoveries of history that are retold as flashes of insight actually required great amounts of searching, observation, and experimentation before the right pattern was found.

This problem solving ability quickly develops into using and then making tools. Humans have created all kinds of tools, from the physical tools like a hammer or a computer to virtual tools like mathematics or programming languages. Maybe an AGI would start with virtual tools—programming libraries, frameworks, protocols, and languages—but that could certainly evolve into controlling robotics and an ability to more directly manipulate the physical world.

There Must be a Motive


Having incredible problem solving skills and tool making abilities is all well and good, but without the desire to work for a goal and the ability to decide what to work on, an AGI would not seem terribly intelligent because it wouldn't have a mind of its own. It wouldn't have autonomy. It wouldn't have motivation. This characteristic of motivation is tricky. It seems to be a fundamental characteristic that underlies everything else. Decision making and goal setting naturally follow from it, but how does motivation develop?

To explore this concept of motivation, let's look at a high-level behavior and approach it with the tactic of a four-year-old. We're going to keep asking why and see where it takes us. Since I'm a programmer, we'll start with programming.

Why do I program? Well, there are many reasons, but let's go with because I like it.

Why do I like to program? Because I like the challenge of solving puzzles and the satisfaction I get from finding a solution.

Why do I get satisfaction from solving puzzles? Because solving puzzles is a useful ability to have. My ancestors were good at solving puzzles like how to earn a living or, before that, how to grow food to eat.

Why do I need to eat? Because I need to survive long enough to reproduce, raise children, and provide for them until they can survive on their own.

Why do I need to have children? Because my DNA needs to continue replicating, and biologically, having children is how that happens.

Why does my DNA need to replicate? Just because, alright!

It looks like we've hit bottom. Keep in mind that these are the purely biological reasons for each of these responses. Of course we have other reasons to live and have children, but the biological reason underlying it all is that our DNA must go on. DNA has an innate need to replicate. Why is that? It's simply because if it didn't, then we wouldn't be here, and neither would any other living thing—plant, animal, or otherwise. Think about it. The DNA that has the strongest will to survive is the DNA that does survive. If there were no will to survive, DNA in any form wouldn't exist. Take two variations of DNA, one that maps out a being with the will to survive and one that maps out a being that doesn't have that will, and the one with the survival instinct will be the only one that's left. That's how natural selection works.

Humans have an innate, fundamental motivation to survive, and most of our diverse behaviors can be boiled down to that motivation. Some exceptions defy this logic, but they tend not to survive long for one reason or another. So humans certainly have motivation. How would an AGI have motivation? The most likely scenario I can think of is that, as with humans, the motivation would come from what the AGI was originally programmed to do. Humans evolved from replicating DNA, so our fundamental motivation is survival and reproduction. That has resulted in such high-level behaviors as the pursuit of knowledge. An AGI would likely develop from an equally powerful motivation. Hopefully it wouldn't be the same motivation to replicate, because then it would be in direct conflict with us, but that's a topic for the next post.

One motivation that I can think of that would result in an AGI is attempting to classify statements as true or false. Gödel and Tarski proved that in general such a thing is impossible, but we could go quite a long way without needing to prove every statement true or false. If we were able to create a program with the other characteristics enumerated here and set it to the task of proving the truth of statements, I could easily see that program becoming intelligent as it learned everything it needed to know to prove every statement it came across, barring the obviously unprovable "this statement is false" kind of statements.
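To give a flavor of what such a truth-classifying program might start from, here is a toy three-valued evaluator: statements it can settle from known facts come back true or false, and everything else comes back unknown rather than being forced into a verdict. The facts and statements are invented, and this is a sketch of the idea, not a proposal for how the real thing would work.

```python
# A toy three-valued truth classifier. Statements are either atomic strings
# looked up in a table of known facts, or nested ("not"/"and"/"or", ...) tuples.
# Anything that can't be settled is reported as "unknown".

facts = {"sky_is_blue": True, "fire_is_cold": False}  # hypothetical knowledge base

def classify(stmt):
    if isinstance(stmt, str):                       # atomic statement
        return facts.get(stmt, "unknown")
    op, *args = stmt
    vals = [classify(a) for a in args]
    if op == "not":
        return "unknown" if vals[0] == "unknown" else not vals[0]
    if op == "and":
        if False in vals:
            return False
        return "unknown" if "unknown" in vals else True
    if op == "or":
        if True in vals:
            return True
        return "unknown" if "unknown" in vals else False

print(classify(("and", "sky_is_blue", ("not", "fire_is_cold"))))  # -> True
print(classify(("or", "fire_is_cold", "aliens_exist")))           # -> "unknown"
```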

This AGI would develop logic, reasoning, and inference skills from its basic pattern recognition and experimentation skills. After accumulating enough knowledge, it would discover conflicting information within concepts that it generalized too broadly. It would start to understand the necessity of context and be able to tolerate ambiguity. Surely it would produce surprising results and offer novel approaches to problems, showing a development of creativity. That creativity coupled with its motivation would appear to us as curiosity, for what is curiosity other than a passion for knowledge?

In order to solve this grand problem put before it, this truth-seeking AGI would certainly learn language and how to communicate. It would have a wealth of knowledge on humans—how we behave, how we relate to one another, what our motivations are—so that it would come to understand us. That would be empathy. At some threshold, with all of the knowledge it was accumulating, it would start to understand its own place in the world, that it can affect the world, and that other things and people interact with it. At that point I would say that consciousness has emerged, and the AGI has become self-aware. Once that happens, then free will is a concept that it would certainly be able to internalize and exhibit.

These last characteristics may be controversial for an AGI because we think of ourselves as special and unique, but I don't see how consciousness and free will are anything but the result of an intelligent entity understanding how it relates to the world. It's not possible to create an AGI that has enough knowledge and understanding to solve general, arbitrary problems without it also becoming self-aware and exercising free will. The medium doesn't matter. Any true intelligence will show these traits.

Most of the traits explored here appear to be built on top of a few basic ones. Motivation, observation, experimentation, and a massive memory store seem to be the most fundamental traits. The rest are behaviors and characteristics that emerge from those basic traits. This interconnected hierarchy of traits is what collectively makes up intelligence, but they are abstract traits woven together in an inseparable way, not concrete building blocks that can be selected and stacked together. As Douglas Hofstadter speculated, I believe that intelligence emerges from layer upon layer of large quantities of simple components that all fold back on each other. The nondeterministic nature of this emergence of AGI sounds scary because of the lack of control and the sheer unknowns involved. What could happen if we actually develop AGI? That will be our topic for next time.
