All eyes may have been on Nvidia this year as its stock exploded higher thanks to enormous demand across all fronts: gaming, data centers, and AI.

But while Nvidia’s stock price, and that chart, may have been one of the more eye-popping parts of 2017 — a year when AI continued its march toward being omnipresent in technology — something a little more subtle was happening in the AI world that may have even deeper ramifications.

This year, an array of startups that are all working on their own variations of hardware that will power future devices built on top of AI received enormous amounts of funding. Some of these startups have nowhere near a massive install base (or have yet to ship a product) but already appear to have no trouble raising financing.

Looking to optimize training and inference — the two key phases of processes like image and speech recognition — startups have sought ways to make those workloads faster, more power-efficient, and generally better suited for the next generation of artificial intelligence-powered devices. Instead of the traditional computational architecture we’ve become accustomed to with CPUs, the GPU has become one of the go-to pieces of silicon for the rapid-fire parallel calculations AI requires. And these startups think they can do that even better.
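To see why GPUs (and these challenger chips) fit AI so well, it helps to look at what inference actually is. At its core, running one layer of a neural network is a single large matrix multiply: thousands of independent multiply-adds that a parallel chip can do all at once. Here is a minimal sketch in Python with NumPy — the layer sizes are illustrative, not drawn from any real product mentioned above:

```python
import numpy as np

# Inference through one dense neural-network layer is essentially
# one big matrix multiply followed by a cheap nonlinearity. Every
# row of the batch and every output column can be computed
# independently, which is exactly the kind of work a GPU (or a
# purpose-built AI chip) parallelizes far better than a CPU.
# Shapes here are made up for illustration.
batch = np.random.rand(64, 1024)     # 64 inputs, 1024 features each
weights = np.random.rand(1024, 512)  # a layer with 512 outputs

# Matrix multiply, then a ReLU activation (clip negatives to zero).
activations = np.maximum(batch @ weights, 0)

print(activations.shape)  # one activation vector per input
```

Training is the same arithmetic run in reverse, many millions of times over, which is why both phases reward chips that do dense linear algebra quickly and efficiently.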

Before we get to that class of startups, let’s quickly review the aforementioned Nvidia chart, just to get a sense of the scale of what’s happening here. Even with the blip at the end of the year, shares of Nvidia are up nearly 80 percent heading into 2018.

So, naturally, we’d expect to see a whole class of startups looking to pick away at Nvidia’s potential vulnerabilities in the AI market. Investors, too, have taken notice.

We first broke the news that Cerebras Systems had picked up funding from Benchmark Capital in December last year, when it raised around $25 million. At the time, the opportunity in AI chips seemed far less obvious than it does today — though, as the year went on, Nvidia’s dominance of the GPU market was a clear indicator that this would be a booming space. Then Forbes reported in August this year that the company was valued at nearly $900 million. Obviously, there was something here.

Graphcore, too, made some noise this year. It announced a new $50 million financing round in November this year led by Sequoia Capital, shortly after a $30 million financing round in July led by Atomico. Graphcore still, like Cerebras Systems, doesn’t have a splashy product on the market yet like Nvidia. And yet this startup was able to raise $80 million in a year, though hardware startups face many more challenges than ones built on the back of software.

There’s also been a flurry of funding for Chinese AI startups: Alibaba poured financing into a startup called Cambricon Technology, which is reportedly valued at $1 billion; Intel Capital led a $100 million investment in Horizon Robotics; and a startup called ThinkForce raised $68 million earlier this month.

That’s to say nothing of Groq, a startup run by former Google engineers that raised around $10 million from Social+Capital, which seems small in the scope of some of the startups listed above. Mythic, yet another chip maker, has raised $9.3 million in financing.

So we can see not just one or two but seven startups gunning for similar areas of this space, many of which have raised tens of millions of dollars, with at least one startup’s valuation creeping near $900 million. Again, these are hardware startups — and next-generation hardware at that — which may require a lot more financing. But this is still a space that cannot be ignored.

Moving beyond the startups, the biggest companies in the world are also looking to create their own systems. Google announced its next-generation TPU in May this year, geared toward both training and inference. Apple designed its own GPU for its next-generation iPhone. Both moves will go a long way toward tuning the hardware for each company’s specific needs, such as Google Cloud applications or Siri. Intel also said in October it would ship its new Nervana Neural Network Processor by the end of 2017; Intel bought Nervana for a reported $350 million in August last year.

All of these represent massive undertakings by both the startups and the larger companies, each looking for their own interpretation of a GPU. But unseating Nvidia, which has begun the process of locking developers onto its platform (called CUDA), may be an even more difficult task. That’s going to be doubly true for startups that are trying to press their hardware into the wild and get developers on board.

When you talk to investors in Silicon Valley, you’ll still find some skepticism. Why, for example, would companies buy faster chips for training when older cards in an Amazon server may be just as good? And yet there is still an enormous amount of money flowing into this area — from the same firms that bet big on Uber (though there’s quite a bit of turbulence there) and WhatsApp.

Nvidia is still a clear leader in this area and will look to continue its dominance as devices like autonomous cars become more and more relevant. But as we go into 2018, we’ll likely start to get a better sense of whether these startups actually have an opportunity to unseat Nvidia. There’s the tantalizing opportunity of creating faster, lower-power chips that can go into internet-of-things thingies and truly fulfill the promise of those devices with more efficient inference. And making servers faster and more power-efficient when they train models — like the ones that tell your car what a squirrel looks like — may also turn out to be something truly massive.