The Race for AI Chips: Why Governments and Tech Giants Are Fighting to Develop the First Next-Gen Chip

America has dominated the computer chip market for years — but with the advent of AI, maintaining this dominance and investing in next-gen AI chips has taken on a new urgency.

Amazon, Google, Nvidia, and Microsoft have been joined by DARPA and a slew of smaller startups in the race to create a flexible, efficient, and powerful chip architecture that can power deep learning processes.

The field is currently extremely dynamic: new chip architectures emerge at least monthly, some more promising than others. To survey this complex and rapidly evolving market, NewtonX consulted 25 executives at major US chip-making companies, or companies investing heavily in chips, including Amazon, Graphcore, AMD, and Nvidia. The results of these consultations inform the data and insights in this article.

Achieving AI Software and Hardware Symbiosis: The Latest AI Chip Developments

Around 2015, interest in AI chips began to explode. By 2017, VCs had tripled their investment in the space, and by 2019 that investment hit close to $1B. Just last year, Graphcore secured $200M in funding led by BMW and Microsoft, while companies such as Wave Computing and Mythic Inc. have likewise raised tens of millions. Even this paints only a small part of the picture: public companies including Nvidia, Qualcomm, and AMD, not to mention Amazon and Google, have also entered the fray.

So what are these chips that everyone is after?

Until recently, AI software has mostly run on graphics processing units (GPUs). These chips offer high parallel-processing capacity and devote far more of their transistors to arithmetic than CPUs do. However, as Moore's Law has begun to slow, more and more researchers have started positing that microchips designed specifically for deep learning computations could be even more powerful than traditional GPUs. Many researchers believe we have reached the lowest practical operating voltage for these chips, which means that while it is still possible to add more transistors, clock speeds have plateaued. Laptops, for instance, still run at roughly 2 GHz; they simply have more cores. Machine learning demands thousands of cores, so depending on old GPU architecture may be insufficient.
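Why does deep learning reward thousands of cores? Its workloads are dominated by large matrix multiplications, which decompose into many independent dot products that can all run at once. The sketch below (an illustration, not code from any vendor discussed here) contrasts a sequential triple-loop multiply with NumPy's vectorized `@` operator, which dispatches the same arithmetic to optimized parallel hardware paths:

```python
import numpy as np

def naive_matmul(a, b):
    """Sequential triple loop: one scalar multiply-add at a time."""
    n, k = a.shape
    k2, m = b.shape
    assert k == k2, "inner dimensions must match"
    out = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            # Each (i, j) entry is an independent dot product --
            # exactly the kind of work a many-core chip parallelizes.
            for p in range(k):
                out[i][j] += a[i, p] * b[p, j]
    return np.array(out)

a = np.random.rand(32, 32)
b = np.random.rand(32, 32)

# Both paths compute the same result; the vectorized version is what
# GPUs (and newer AI accelerators) execute across thousands of cores.
assert np.allclose(naive_matmul(a, b), a @ b)
```

The point is architectural: since every output entry can be computed independently, throughput scales with core count rather than clock speed, which is why plateauing frequencies push the industry toward massively parallel designs.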

At the MARS conference, an MIT researcher, funded by DARPA and working with researchers from Nvidia, presented Eyeriss, a new deep learning chip that enables edge computing at 10-1,000x the efficiency of existing hardware. The chip is flexible enough to be adapted for numerous applications while remaining efficient, two features that had previously been difficult to balance in a single architecture.

The Eyeriss chip is hardly the only player in this game: IBM is developing dedicated AI processors; Google's Tensor Processing Unit, or TPU, is a special AI chip for neural networks (Google still uses both CPUs and GPUs for other AI applications); and Microsoft started working on its own AI processor for Azure in late 2018. The aforementioned chip company Graphcore has developed what it calls the Intelligence Processing Unit (IPU), which relies on a wholly different architecture from that of GPUs and CPUs.


Who Will Win the AI Chips Arms Race?

There is no clear indication of which approach, whether building on GPUs, TPUs, IPUs, or any number of other architectures, will prove the most effective at running AI workloads. Because of this, the market is currently fiercely competitive and highly fractured. Chip experts predict that by 2025 the new chip architecture will have solidified, and competition will shift from architectural approach to cost per capability.
