One of the most interesting concepts in computer science is the development of artificial intelligence. Ever since humanity invented the first simple computers running on punch cards and vacuum tubes, science fiction has considered the far-reaching implications of artificial intelligence. But what is artificial intelligence really? Will the concept of AI change as much as the concept of computing itself has changed over the past century? What recent advances are making this concept a reality?
Defining the Artificial Intelligence Computer
The Merriam-Webster definition of artificial intelligence is "the capability of a machine to imitate intelligent human behavior." That definition has shifted over time, however, as our basic understanding of intelligence itself has changed.
Studies in animal behavior and psychology provide a useful analogy for the problems with this concept. For instance, take a human and put him out in the wild: no clothes, no heat, no food, no water, no tools. Would it be fair to assume that the bears running across his path might consider him a little on the unintelligent side? After all, they have no problem taking care of themselves, whereas the soft, squishy (and very tasty) human seems completely helpless.
Perhaps this analogy isn’t truly appropriate. Humans have certainly adapted in their own way by creating the crutches they use to survive the elements of the world. After all, try clean shaving the bear, putting it in front of a computer and telling it that it needs to make a minimum of $1,000 a week by writing software in order to eat, drive and have a place to sleep.
The comparison just isn’t fair. For a more subtle illustration, consider the myth of IQ. Recent studies have probed the intelligence quotient in order to understand what it actually attempts (poorly) to measure. The short answer: short-term memory, reasoning skills and verbal ability (i.e., the ability to communicate these concepts one way or another).
As a result, and for lack of a better yardstick, perhaps we can use this definition when imagining the future computational power of artificial intelligence.
Famous Examples of Artificial Intelligence
Rudimentary examples of AI are already in existence today. For instance, AlphaGo beat Lee Sedol, one of the world’s best Go players, in 2016 by finding entirely new approaches to the game. This is an achievement on par with Deep Blue’s defeat of chess world champion Garry Kasparov in 1997.
But what happens when we create machines capable of doing more than one task well? Will they be able to actually think like humans or will they be something very alien indeed?
It’s hard to have a conversation about artificial intelligence without considering fictional creations such as HAL 9000, Skynet and The Matrix. Shudder at the thought. The main gist of these stories, of course, is that once we let the genie out of the bottle, the machines will find us useless and flawed. Lacking human empathy, they may finally decide that the world is better off without us.
Worse, if they think too much like us, will they make the same mistakes? After all, humans are quite capable of matching Skynet’s death toll without the aid of artificial intelligence.
Perhaps the warm and fuzzy examples from fiction are more likely, considering that it should be possible for artificial intelligence computer creators to build in a fail-safe that prevents harm to humans and other life, like the holographic museum curator in the 2002 film adaptation of H.G. Wells’s The Time Machine, an infinite font of both human knowledge and witty personality. What most of us seem to be holding our breath for, though, is Rosie the Robot from The Jetsons. Ah, now THAT would be grand!
But is an artificial intelligence computer really a possibility? How close are scientists to truly emulating the processes that the human brain completes without difficulty? Closer than they were, but still galaxies away.
Current Developments in Artificial Intelligence
One of the most recent feats in artificial intelligence research was achieved by a team at the Okinawa Institute of Science and Technology Graduate University in Japan and Forschungszentrum Jülich in Germany. In 2013, they simulated roughly one second’s worth of human brain activity. It took 82,944 processors running the NEST simulation software on one of the world’s fastest supercomputers, the K computer. The experiment created a neural network of 1.73 billion nerve cells connected by 10.4 trillion synapses. Of course, that is only a small sample of the neurons in a human brain, which contains roughly 100 billion. And it took about 40 minutes of computation to complete that one second’s worth of activity.
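The scale of that gap is easy to quantify from the figures above (1.73 billion simulated neurons, 40 minutes of compute for one simulated second). A quick back-of-envelope sketch:

```python
# Back-of-envelope numbers for the 2013 K computer simulation,
# using the figures quoted above.
sim_neurons = 1.73e9        # nerve cells simulated
brain_neurons = 100e9       # rough neuron count of a human brain
wall_clock_s = 40 * 60      # 40 minutes of compute time...
simulated_s = 1             # ...for one second of brain activity

slowdown = wall_clock_s / simulated_s    # how far from real time
fraction = sim_neurons / brain_neurons   # share of the brain covered

print(f"Slowdown: {slowdown:.0f}x real time")    # 2400x real time
print(f"Brain coverage: {fraction:.1%}")         # about 1.7%
```

In other words, one of the fastest machines on Earth ran roughly 2,400 times slower than real time while covering less than 2 percent of the brain.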
A more recent discovery may mean that we are an entire universe away from achieving this goal. A research team at UCLA found that our brains are up to ten times more active than previously thought. To make a long story short, neuroscientists had assumed that dendrites merely pass information along to the soma. It turns out they have electrical activity of their own, generating roughly ten times as many spikes as somas do. Furthermore, scientists believed that a nerve’s action potential was either on or off, just like the binary computing systems we have today. Instead, these dendrites show large, graded fluctuations in the size of the spikes they create, much closer to what we would expect from an analog, or even quantum, computing system.
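The binary-versus-graded distinction can be sketched in a few lines. This is a hypothetical toy model for illustration only, not the UCLA team's analysis: a classic all-or-nothing spike compared with a dendritic spike whose amplitude varies with the input.

```python
def soma_spike(voltage, threshold=1.0):
    """Classic all-or-nothing view: the output is 0 or 1,
    like a bit in a digital computer."""
    return 1.0 if voltage >= threshold else 0.0

def dendrite_spike(voltage, threshold=1.0):
    """Graded view suggested by the UCLA findings: above
    threshold, spike size varies with the input instead of
    saturating at a fixed value (an analog-like response)."""
    return voltage - threshold if voltage >= threshold else 0.0

# Same inputs, very different pictures of what a neuron reports:
for v in (0.5, 1.2, 2.0):
    print(v, soma_spike(v), dendrite_spike(v))
```

The binary model collapses every supra-threshold input to the same value, while the graded model preserves information about how strong the input actually was.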
So we may well need those quantum computers, and scientists are making real progress toward that goal. A team at the University of Bristol’s Quantum Engineering Technology Labs is using machine learning to help analyze and characterize quantum systems. In short, they are using some of our earliest concepts of AI to help build more AI. Machine learning programs are already everywhere, from Microsoft’s Azure to Amazon’s Alexa.
Perhaps what it really boils down to is a matter of adaptability. Take our bear analogy again. The truth is that any reasonably resourceful human who found themselves in such a situation would likely find a sharp rock and a tree branch, sharpen the branch, kill the bear, skin it with the rock, and cook it over a fire started by rubbing sticks together. Voilà! Protection, clothing and dinner in one fell swoop.
Create a computer that can adapt in a similar way to this rough analogy, and it might begin to approach the newly discovered computational power of the very adaptable human brain. Do you think a computer may meet or surpass this goal in our lifetimes? Share your thoughts below!