Will we one day live in a world where humans aren’t just less intelligent than machines but irrelevant altogether?

This question has been asked by computer scientists for the past 70 years. As programming began to pick up steam in the 20th century, researchers like Alan Turing and John von Neumann began to wonder whether machines would accelerate to the point where they no longer relied on mathematicians and programmers at all.

It was von Neumann who first used the term “singularity” in this sense. Von Neumann, a mathematician and one of the most important figures in computing history, watched as his projects grew more intelligent. He expressed both surprise and concern at “the ever-accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.”

Since then, the term “technological singularity” was coined by Vernor Vinge, a science fiction author and researcher, who predicted that “we will soon create intelligences greater than our own.”

According to Vinge, the word ‘singularity’ applied directly to this scenario. The creation of greater intelligence would mean “human history will have reached a kind of singularity, an intellectual transition as impenetrable as the knotted space-time at the center of a black hole, and the world will pass beyond our understanding.”

What the singularity is and when it will occur are two hotly contested questions, particularly as artificial intelligence and machine learning continue to gain both speed and new applications in our daily lives.

What is the singularity, and what does it mean for humans? We’ll explain here:

What Is the Singularity?

By most definitions, the singularity describes a period when computers become essentially smarter than humans. More specifically, it’s the point at which computing power exceeds brain power.

Vernor Vinge says there are four ways this could happen:

  1. Computers will become “awake” and will advance to superhuman intelligence.
  2. Large computer networks may someday “wake up” and become superhumanly intelligent.
  3. Computer and human interfaces become so intimately entangled that superhuman intelligence occurs.
  4. Advancements in biological science result in dramatic improvements to human intelligence.

He published these scenarios in the early 1990s, based on the improvements his contemporaries were making in computer hardware design. Back in 1993, he thought that if progress continued at the same pace, the singularity could occur within the following 30 years.

However, Vinge wasn’t the first to predict the advent of superhuman intelligence. Writers like Charles Platt and others had already predicted that the singularity would occur within 30 years – several decades before the surge of discussion in the 1980s and 1990s.

The potential for the singularity isn’t simply based on the amalgamation of successes in various fields of research; it’s also based on a popular concept known as Moore’s Law.

Moore’s Law

Moore’s Law describes a trend in computer hardware: the number of transistors in a dense integrated circuit doubles roughly every two years. When Gordon Moore first described it, he predicted the trend would continue into the foreseeable future.

Moore knew a thing or two about transistors in circuits. He described Moore’s Law in 1965 and went on to co-found Intel in 1968.

To believe that the singularity is near, as some futurists suggest, you’d also need to accept that Moore’s Law will continue. But even Gordon Moore now believes the trend has only a decade or two left, because transistors are approaching the size of atoms. Features that small are a natural barrier to adding more transistors, though the problem could be sidestepped by moving in the other direction and making bigger chips.
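To make the doubling arithmetic concrete, here is a minimal sketch in Python. The 1971 Intel 4004 starting point (roughly 2,300 transistors) is real, but the projection simply assumes the two-year doubling holds indefinitely – which, as Moore himself notes, it won’t.

```python
def projected_transistors(start_count, start_year, end_year, doubling_period=2.0):
    """Project a transistor count forward, assuming the doubling trend holds."""
    doublings = (end_year - start_year) / doubling_period
    return start_count * 2 ** doublings

# Illustrative starting point: the Intel 4004 shipped in 1971 with ~2,300 transistors.
for year in (1971, 1991, 2011, 2031):
    print(year, f"{projected_transistors(2300, 1971, year):,.0f}")
```

Run forward, the naive projection reaches billions of transistors by the 2010s and trillions by the 2030s, which is exactly why atomic-scale limits matter to the argument.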

The Singularity: Gordon Moore Meets Ray Kurzweil

The idea of the singularity as a real possibility – not just a science fiction trope – was popularized by Ray Kurzweil. His argument rests on a premise similar to Moore’s Law: more transistors mean greater computing power, and if the number of transistors continues to grow at a rapid pace, it stands to reason that computers will eventually process information faster than humans.

The singularity, then, is achievable because if we can create computers smarter than humans, those computers can in turn create an intelligence greater than anything designed by humans, who are limited in knowledge and processing power. The result would satisfy the conditions laid out by Vernor Vinge.

Ultimately, if Kurzweil and others are correct, these intelligences will create ever more intelligent successors in rapid succession, and technological advances will occur at breakneck speed – faster, in all likelihood, than we can fathom.
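As a rough illustration of that compounding (a toy model of my own, not anything from Kurzweil’s book), suppose each generation of machine intelligence is some factor smarter than the last and, being smarter, designs its successor in proportionally less time:

```python
def simulate_generations(improvement_factor=2.0, speedup=2.0,
                         initial_design_time=10.0, steps=8):
    """Toy model: each generation is `improvement_factor` times smarter and
    designs its successor `speedup` times faster than the generation before."""
    capability, design_time, elapsed = 1.0, initial_design_time, 0.0
    for gen in range(1, steps + 1):
        elapsed += design_time            # time this generation spends on its successor
        capability *= improvement_factor  # the successor is smarter...
        design_time /= speedup            # ...and will work faster still
        print(f"generation {gen}: {capability:>5.0f}x baseline after {elapsed:.1f} years")

simulate_generations()
```

With these entirely made-up parameters, capability keeps doubling while the total elapsed time converges toward a fixed horizon of about 20 years – one way to picture change arriving faster than we can track it.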

Living in the Singularity

If this all sounds far removed from reality – it is.

When you ask futurists or singularitarians what the world of the singularity looks like, they can’t tell you.

It’s difficult to imagine a world where humans no longer sit at the top of the intelligence food chain. Will computers control us? Will we serve them? What if they simply serve as aids? Will there be a kill-switch?

The only thing certain is uncertainty – and risk. It is unclear whether superhuman intelligence will help or hurt humans, or whether it will pose a sort of existential threat to humanity. Some say that because the path towards the singularity is already being laid, we can watch and learn as the moment approaches, and make the most of the major transition whenever it arrives.

Indeed, those of us with an internet connection already rely on a blend of artificial intelligence and biology to live our lives. Pilots spend an average of only seven minutes manually flying a plane before the autopilot takes over. Relationships now begin – and increasingly end – online, with the matching facilitated by digital algorithms. Big data and machine learning are increasingly making business decisions on behalf of entire teams.

The only thing to be done is to speculate. Speculation has thus far led to visions of an age some describe as post-human.

The Post-Human World

The combination of superhuman intelligence and advancements made in the biological sciences could, as some say, propel humans into the post-human world.

The post-human world describes a world after humanism, but it’s not easily defined. In the context of an artificial intelligence takeover during the singularity, it would represent the most pessimistic outcome of transhumanism – the vision in which biology and technology converge to help humans overcome the limits of their biology.

Nick Land suggests that if artificial intelligence were to take such a hold over the world, humans would become irrelevant because there would be no place for their flawed intelligence, particularly as machines continue to grow smarter even after reaching the superhuman level. Facing this period would require humans to accept that they’re no longer needed and to prepare for their own decline or even extinction.

Is the Singularity Near?

It depends on who you ask.

Rather than asking when the singularity will occur, it is more useful at this point to consider whether it is possible at all. There’s a great debate between technological optimists and pessimists, and the evidence thus far suggests either could be right.

If you don’t accept that the singularity is possible, then it’s the end of the discussion. But if you are an optimist, then you might wonder not just if but when.

There is a timeline that continues to grow in influence.

Kurzweil offered the best-known timeline in his book The Singularity Is Near. He predicts that computer systems with more computing power than the human brain will arrive by 2020. However, that raw power won’t count for much until artificial intelligence reaches its full potential, which Kurzweil predicts will occur in 2029.

Kurzweil’s predictions set the singularity at 2045.

How will we get there? Let’s look at the timeline provided by Ray Kurzweil on his website:

  • 2019

Humans could begin to develop relationships with artificial intelligence. Autonomous vehicles become more commonplace. Real-time language translation transforms conversations.

  • 2029

Personal computers become more powerful than human brains. Advanced brain mapping allows our growing understanding of the brain to be translated into machines. Computers can not only learn but also create new knowledge. Virtual reality hardware shrinks into implants.

  • 2030s

Nanomachines will be inserted into the brain to control signals and elicit emotional responses. These machines allow for mind uploading, and humans may become software-based.

  • 2040s

Virtual reality will become our reality as full immersion becomes the standard. Biological intelligence will pale in comparison to non-biological intelligence as artificial intelligence becomes billions of times smarter.

  • 2045

The singularity arrives. The $1,000 personal computers that surpassed human intelligence in 2029 will be replaced by personal computers billions of times more intelligent than the combined intelligence of all people.

Are You Ready for the Singularity?

Ready or not, the singularity could already be on its way – and the implications for humans remain unknown.

Serious benefits are already predicted: machines could solve some of our most pressing problems, from cancer to climate change. But what will it mean for human intelligence to be irrelevant in comparison to artificial intelligence?

What about you? Do you agree that the singularity is near, or will we still be 30 years away three decades from now? Share your thoughts in the comments below.