Artificial intelligence and machine learning are built on the premise of feeding lots of data into an algorithm so a computer can learn a useful set of skills. Facial recognition, self-driving cars, Amazon's product recommendations, and virtual assistants like Siri all rely on some form of machine learning to produce their results.

These results require significant computing power to generate, and the boom in AI applications has heightened the demand for processing chips built specifically for AI.

The Changing Landscape of AI Chips

With Apple announcing that the new iPhone X will include a mobile AI chip, and Google announcing that it will develop its own chips for its cloud data centers, the chip market is in upheaval. Established players like Nvidia, which have dominated the AI processing landscape, suddenly look less secure, and as AI-specific chips rise, general-purpose chip manufacturers are scrambling to keep up.

One striking illustration of the changing chip marketplace is Cerebras, a startup chip manufacturer. With top engineers working on a chip built specifically for AI training, Cerebras has already reached a valuation of $860 million, and it hasn't even released a product yet. That figure speaks to both the size of the chip market and the potential upside of developing AI-specific chips.

CPU, GPU, ASIC and the Implications for AI

To understand how this all works, it's important to get a grasp on the different types of chips currently available. In the early days of data centers and neural networks, most companies were using central processing units (CPUs) not unlike the processor in your personal computer. These did the job, but they were inefficient and slow, costing time and money to produce results.

Soon, researchers discovered that graphics processing units (GPUs) could handle the workload far more efficiently. Graphics rendering boils down to massively parallel matrix arithmetic, and neural networks make almost exactly the same mathematical demands, so these units were well suited to AI as well.
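To make that concrete, here is a minimal sketch (in Python with NumPy; the layer sizes are illustrative, not drawn from any real product) of a single neural-network layer. Essentially the entire workload is one big matrix multiply, which is exactly the parallel arithmetic GPUs were built for:

```python
# Why graphics hardware suits neural networks: a dense layer's forward pass
# is one large matrix multiply. (All sizes here are illustrative.)
import numpy as np

batch = np.random.rand(256, 1024)    # 256 inputs, 1,024 features each
weights = np.random.rand(1024, 512)  # one layer's learned parameters

# ~256 * 1024 * 512 multiply-adds, all independent, in a single call
activations = np.maximum(batch @ weights, 0)  # matrix multiply + ReLU
print(activations.shape)  # (256, 512)
```

A deep network simply stacks many such layers, so the hardware that wins is the hardware that multiplies matrices fastest.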

However, in recent years, chip manufacturers have begun to focus on building chips specifically for AI. These application-specific integrated circuits (ASICs) are designed to do one thing: run neural networks as efficiently as possible.

AI Chips in Cell Phones

The cell phone offers an interesting example of what AI-specific chips can do. Apple will include a supplementary AI-specific chip in the iPhone X, alongside the graphics and computing chips that run the rest of the phone's software. The idea is to handle much of the AI workload locally on the phone instead of sending each request back to Apple's data centers for processing.
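Here is a toy sketch of the idea (all names and sizes are hypothetical, not Apple's actual stack): once the model's parameters live on the phone, answering a request is pure local arithmetic, with no network round trip.

```python
# Toy sketch of on-device inference (illustrative names and sizes only).
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(64, 10))   # a tiny "model" stored on the device

def infer_on_device(features):
    """Local inference: one matrix multiply on the phone's AI chip,
    with no request sent to a data center."""
    return int(np.argmax(features @ weights))

features = rng.normal(size=64)        # e.g., features extracted from a photo
print(infer_on_device(features))      # answered entirely on the device
```

A real model is far bigger than this toy, but the principle is the same: the parameters ship with the phone, so answers don't wait on the network.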

Apple isn’t the only phone manufacturer interested in AI chips, and in the near future, we can expect to see avenues for app developers to incorporate AI and neural network capabilities into the apps we use, beyond Siri and other current AI solutions.

Training and Execution: Two Separate Problems

The chips in mobile phones are an example of AI chips aimed at the execution side of machine learning. However, one chip doesn't fit all (yet), and there are actually two major parts to any implementation of artificial intelligence.

1. Training a Neural Network

Before anyone can use artificial intelligence, the machine must first learn how to do its job. This process starts with a machine learning algorithm that defines what the machine should look for. Researchers then feed it enormous data sets that teach it by example, adjusting the model's internal parameters over many passes until its outputs match the correct answers.
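As a rough sketch of what "teaching by example" means computationally (the task and numbers below are invented for illustration), training repeatedly compares the model's guesses against the known answers and nudges the parameters to shrink the error:

```python
# Minimal gradient-descent training loop on a toy problem (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 3))                     # 1,000 labeled examples
true_w = np.array([2.0, -1.0, 0.5])                # the pattern hidden in the data
y = X @ true_w + rng.normal(scale=0.1, size=1000)  # answers the model must learn

w = np.zeros(3)                                    # model starts knowing nothing
for step in range(500):
    grad = 2 * X.T @ (X @ w - y) / len(y)          # how wrong, and in which direction
    w -= 0.1 * grad                                # nudge the parameters

print(w)  # converges toward [2.0, -1.0, 0.5]
```

Every step of a real training run follows this same pattern, but over millions of parameters and billions of examples, which is why training is the chip-hungry half of the problem.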

Processing these training data sets is one challenge for chip manufacturers, and the race to create a solution for an AI training chip is on.

2. Execution of Machine Learning

After the AI has been successfully trained to produce correct outcomes, users can begin to benefit from its abilities. When thousands or millions of users need instant recommendations from an AI, a new computing problem comes into play. How do you process so many requests at one time?
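One common answer, sketched below with invented sizes, is batching: stack many users' requests into a single matrix so the hardware answers thousands of queries in one pass. Serving chips are judged largely on how fast and how cheaply they can do this.

```python
# Minimal sketch of batched inference for many simultaneous users (illustrative).
import numpy as np

rng = np.random.default_rng(2)
weights = rng.normal(size=(128, 20))     # the trained model's parameters

def serve_batch(requests):
    """Answer a whole batch of requests in one pass through the model."""
    scores = requests @ weights          # one matrix multiply for all users
    return scores.argmax(axis=1)         # one recommendation per request

batch = rng.normal(size=(10_000, 128))   # 10,000 queries arriving at once
print(serve_batch(batch)[:5])            # the first five users' answers
```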

Google’s New TPU AI Chip

Those two problems of machine learning are what make Google's recent announcement about its proprietary chip so exciting. The new tensor processing unit (TPU) is designed to run in Google's cloud data centers, and it can both train and execute AI. Since Google designed the chip from the ground up for its own workloads, it promises to pair exceptionally well with Google's servers and software. It also means that Google will be decreasing the number of chips it purchases from outside manufacturers.

With Google leading the charge, expect Facebook, Amazon, and Apple to prioritize AI chip development over the coming months and years. Competition for these customers is heating up fast, and the direction of AI chip manufacturing is still open for debate.

How Will Chip Manufacturers Respond?

Intel, AMD, IBM, Nvidia, and other competitors in chip manufacturing will need to respond quickly to the growing demand for AI processing. Whether that response comes in the form of new ASICs built specifically for machine learning or upgrades to existing product lines remains to be seen. There is also an incredible opportunity for startups to develop new chips from scratch that could further rattle the industry.

Right now, it’s anyone’s game.
