Computer chips are usually small. The processor that powers the latest iPhones and iPads is smaller than a fingernail; even the beefy devices used in cloud servers aren’t much bigger than a postage stamp. Then there’s this new chip from a startup called Cerebras: It’s bigger than an iPad all by itself.

The silicon monster is almost 22 centimeters—roughly 9 inches—on each side, making it likely the largest computer chip ever, and a monument to the tech industry’s hopes for artificial intelligence. Cerebras plans to offer it to tech companies trying to build smarter AI more quickly.

Eugenio Culurciello, a fellow at chipmaker Micron who has worked on chip designs for AI but was not involved in the project, calls the scale and ambition of Cerebras’ chip “crazy.” He also believes it makes sense, because of the intense computing power demanded by large-scale AI projects such as virtual assistants and self-driving cars. “It will be expensive, but some people will probably use it,” he says.

The current boom in all things AI is driven by a technology called deep learning. AI systems built on it are developed using a process called training, in which algorithms optimize themselves to a task by analyzing example data.

The training data might be medical scans annotated to mark tumors or a bot’s repeated attempts to win a videogame. Software made this way is generally more powerful when it has more data to learn from or the learning system itself is larger and more complex.
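The training process described above can be sketched in a few lines. This is a minimal illustration, not Cerebras’ or anyone’s actual training code: it fits a single weight to toy example data by gradient descent, the same basic optimization loop that deep-learning systems run over millions of parameters.

```python
# Minimal sketch of "training": gradient descent adjusts a parameter
# so a model better fits example data. All names here are illustrative.

def train(examples, lr=0.1, steps=100):
    """Optimize a single weight w to reduce squared error on (x, y) examples."""
    w = 0.0
    for _ in range(steps):
        # Mean gradient of the squared error (w*x - y)^2 with respect to w.
        grad = sum(2 * (w * x - y) * x for x, y in examples) / len(examples)
        w -= lr * grad  # step against the gradient
    return w

# Toy data generated from the rule y = 3x; training recovers w close to 3.
data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]
w = train(data)
```

Real systems repeat this loop over enormous datasets and models, which is why the arithmetic throughput of the underlying hardware matters so much.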


Computing power has become a limiting factor for some of the most ambitious AI projects. A recent study on the energy consumption of deep-learning training found it could cost $350,000 to develop a single piece of language-processing software. The for-profit AI lab OpenAI has estimated that between 2012 and 2018, the amount of computing power expended on the largest published AI experiments doubled roughly every three and a half months.

AI experts yearning for more oomph typically use graphics processors, or GPUs. The deep-learning boom originated in the discovery that GPUs are well suited to the math underpinning the technique, a coincidence that has boosted the stock price of leading GPU supplier Nvidia eight-fold in the past five years. More recently, Google has developed its own AI chips customized to deep learning called TPUs, and a raft of startups have begun work on their own AI hardware.

To train deep-learning software on tasks like recognizing images, engineers use clusters of many GPUs wired together. To make a bot that took on the videogame Dota 2 last year, OpenAI tied up hundreds of GPUs for weeks.

Cerebras' chip, left, is many times the size of an Nvidia graphics processor, right, popular with AI researchers. Photograph: Cerebras

Cerebras’ chip covers more than 56 times the area of Nvidia’s most powerful server GPU, claimed at launch in 2017 to be the most complex chip ever. Cerebras founder and CEO Andrew Feldman says the giant processor can do the work of a cluster of hundreds of GPUs, depending on the task at hand, while consuming much less energy and space.

Feldman says the chip will allow AI researchers—and the science of AI—to move faster. “You can ask more questions,” he says. “There are things we simply haven’t been able to try.”

Those claims rest in part on the Cerebras chip’s large stock of onboard memory, which allows the training of more complex deep-learning software. Feldman says his oversized design also benefits from the fact that data can move across a single chip roughly 1,000 times faster than it can between separate chips linked together.

Making such a large and powerful chip brings problems of its own. Most computers keep cool by blowing air around, but Cerebras had to design a system of water pipes that run close by the chip to prevent it from overheating.


Feldman says “a handful” of customers are trying the chip, including on drug design problems. He plans to sell complete servers built around the chip, rather than chips on their own, but declined to discuss price or availability.