In IT, we love redundancy and headroom: redundant power, redundant uplinks, redundant controllers, database replication, server clusters, more CPU and RAM resources, the works. For critical systems, we’ll take nearly all the redundancy we can get -- or that the budget will allow.

Historically, we have no end of examples of why we needed at least two of everything. Power supplies and spinning disks failed quite often. Network hardware was more resilient in that it was solid state, but unexpected loss of links due to cabling issues or human error -- or even a blown power supply -- was a constant threat.


Certainly, we still expect spinning disk and power supply failures, but in reality, these are fewer and farther between than ever before. The longevity of power supplies has improved greatly over time, as has that of enterprise-grade spinning disks.

In fact, as I write this, I sit not far from a 10-year-old gigabit L3 core switch with the original power supplies and fans, and a storage array with 52,790 hours on each disk. Yes, that’s 2,199 days, or a little more than six years. (Out of concern that I may have jinxed myself, I’ve already started a replacement effort for that unit.)

A friend recently sent me an uptime on a Linux server showing 3,193 days. That’s 8.7 years without a reboot. Apparently that box had some disk failures, but no power supply issues. Disks and power supplies will always fail eventually, but failures are nowhere near as common as they used to be.
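For the curious, that figure is easy to check: Linux reports uptime in seconds as the first field of /proc/uptime, and converting to days is a one-liner. Here, a fixed seconds value (3,193 days’ worth) stands in for the live counter, since the exact number on that box is anecdotal:

```shell
# Derive uptime in days from the seconds counter in /proc/uptime.
# On a live Linux box: awk '{ printf "%.1f days\n", $1 / 86400 }' /proc/uptime
# Here we feed in a fixed value -- 3,193 days' worth of seconds -- to check the math.
echo "275875200.00 0.00" | awk '{ printf "%.1f days\n", $1 / 86400 }'
# prints "3193.0 days"
```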

Of course, we’re talking about enterprise-class gear here, for the most part. This is stuff that should last as long as possible. This is part of why it’s so expensive, right? The other part is that it outperforms everything else. Well, at least in the networking world, enterprise-grade and general-purpose hardware is becoming essentially identical, and the power available in these embedded platforms is taking over for tasks that were solidly in the server sphere. At massive scale, this isn’t as prevalent, but for midsize to large infrastructures, you might be surprised.

Take pfSense, for instance. For roughly $350 for an embedded single-board box and a free download of pfSense, you can easily spin up a gigabit firewall that would blow the doors off anything else in its price range. You can even buy a prebuilt appliance from the pfSense store, with support, for as little as $500.

Is that hardware enterprise-grade? No, it’s not, and it doesn’t have redundant power supplies, though it uses an external power brick, so you could buy two. However, what that box can do at a $350 price point is roughly what an entire rack of server hardware could do not very long ago. This is more than offering features; this is offering features at a level of performance that makes them usable in most infrastructures.

If we take a look at a current example of this type of hardware, we might find something like the Netgate RCC-VE 2440. This is a two-core Atom box with 4GB of RAM and 4GB of onboard eMMC flash. It has four gigabit Ethernet ports and mini PCIe slots for Wi-Fi or cellular data cards. The raw specs of this little box are in line with what we’d expect from enterprise-grade network hardware at a much lower price point. If we need more juice, we can upgrade to a four-core, six-port unit with 8GB of RAM for a few hundred dollars more. This is an embedded system that’s designed to serve as a router, firewall, load balancer, VoIP PBX, or a number of other roles.

The key is that embedded units like these can do all of those tasks simultaneously, for a surprisingly large number of clients. If we think about that unit as a remote-office firewall running pfSense, we find it can handle multiple WAN connections to different ISPs with failover/failback, support RIP, OSPF, and BGP routing, and provide all local DNS tasks thanks to BIND. It can terminate a large number of both LAN-to-LAN and client VPNs using IPSec, OpenVPN, PPTP, or all three. It can run all DHCP services for that office and act as a Squid proxy, a load balancer (not that we’d necessarily need one), and a content filter. It can serve as an Asterisk VoIP PBX. It can run a Web server and a print server, and if we add an mSATA SSD, it could even do limited file serving or expanded proxy caching. We could use it to provide Wi-Fi natively and to provide a captive portal for authentication/authorization for network access. We could even get two of them and configure them in a failover group.
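Under the hood, pfSense’s firewall and NAT layers are FreeBSD’s pf, and everything the GUI configures ultimately becomes pf rules. As a rough illustration -- the interface names and subnet here are hypothetical, not anything pfSense generates verbatim -- a minimal NAT-plus-filter ruleset looks like this:

```
# Minimal pf.conf sketch: NAT a LAN out a WAN interface and default-deny inbound.
# Interface names (igb0/igb1) and the LAN subnet are hypothetical.
ext_if="igb0"
lan_if="igb1"
lan_net="192.168.1.0/24"

# Translate LAN traffic to the WAN interface address
nat on $ext_if from $lan_net to any -> ($ext_if)

# Default deny, then allow the LAN out and its replies back in
block in all
pass in on $lan_if from $lan_net to any keep state
pass out on $ext_if keep state
```

On a pfSense box, `pfctl -s rules` shows the expanded ruleset the GUI actually loaded.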

That’s a huge list of functions, all made possible by open source tools, and all of it packaged very elegantly in pfSense. Of course, pfSense can run on most modern server hardware, and it’s used for these purposes all the time. The difference: Using an embedded system like this makes us nervous because the entirety of our office’s network functionality now rests on that one inexpensive little box -- or two, if you configure a failover pair. But frankly, the design and construction of boxes like this are only getting better, and failures are becoming less frequent.
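That failover pair works via CARP, FreeBSD’s Common Address Redundancy Protocol, which pfSense exposes through its high-availability settings. Stripped of the GUI, the mechanism is a shared virtual IP claimed by whichever box is master -- the interface name, VHID, password, and address below are hypothetical:

```
# Sketch of the FreeBSD CARP configuration behind a pfSense failover pair.
# Interface (em0), VHID, password, and virtual IP are hypothetical.
# Primary node -- lowest advskew wins the master election:
ifconfig em0 vhid 1 advskew 0 pass s3cret alias 192.168.1.1/32
# Backup node -- same VHID and password, higher advskew:
ifconfig em0 vhid 1 advskew 100 pass s3cret alias 192.168.1.1/32
```

If the primary stops advertising, the backup takes over the virtual IP within seconds, and clients pointed at that address never notice.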

We may have needed a rack’s worth of gear to perform all of those tasks a few years ago, but much as we might scoff at how much overkill that would be today, we should also recognize that these little boxes may be all we need right now -- and not only for small satellite offices. These days, a little goes an amazingly long way.