This week, Intel is hosting a datacenter event in San Francisco. The basic message is that the datacenter should become much more flexible and software defined: when a new software service is launched, storage, network and compute should all adapt in a matter of minutes instead of weeks.

One example is networking. Configuring the network for a new service takes a lot of time and manual intervention: think of router access lists, gateway/firewall configurations and so on. It also requires very specialized people: the Netfilter expert does not necessarily master the intricacies of Cisco's IOS. And even if you have all the skills it takes to administer a network, it still takes a lot of time to log in to all those different devices.

Intel wants the proprietary network devices to be replaced by software running on top of its Xeons. That should allow you to administer all your network devices from one centralized controller. The same method should be applied to storage, replacing the proprietary SANs.

If this "software defined datacenter" sounds very familiar to you, you have been paying attention to the professional IT market. It is also what VMware, HP and even Cisco have been preaching. We all know that, at this point in time, it is nothing more than a holy grail, a mysterious and hard-to-reach goal. Intel and others have been showing a few pieces of the puzzle, but the puzzle is far from complete. We will go into more detail in later articles.

But there were some interesting news tidbits we would like to share with you.

First of all, there was the announcement of the new Broadwell SoC. Broadwell is the successor to Haswell, but Intel has also decided to introduce a highly integrated SoC version. So we get the "brawny" Broadwell cores inside an SoC that integrates networking, storage and so on, just like the Avoton SoC. As this might be a very powerful SoC for microservers, it will be interesting to see how much room is left for the Denverton SoC (the successor to the Atom-based Avoton SoC) and the ARM server SoCs.

Jason Waxman, General Manager of the Cloud Infrastructure Group, also showed a real Avoton SoC package.

A quick recap: the Atom "Avoton" is the 22 nm successor to the dual-core Atom S1260 "Centerton".

The Avoton SoC has up to 8 cores and integrates SATA, Gigabit Ethernet, USB and PCIe.

Intel promises up to 4x better performance per watt, but no details were given at the conference. The interesting details that we hardware enthusiasts love can be found at the end of the PDF, though. Performance per watt was measured with SPEC CPU int rate 2006. The dual-core Atom S1260 (2 GHz, Hyper-Threading enabled) scored 18.7 (base), while the Atom C2xxx (clock speed 1.5 GHz?, Turbo disabled) on an alpha motherboard (Intel Mohon) reached 69. Both platforms included a 250 GB hard disk and a small motherboard. The Atom "Avoton" had twice as much memory (16 GB vs 8 GB), but the whole platform needed 19 W while the S1260 platform needed 20 W. Doubling the amount of memory is not unfair if you have four times as many cores (and thus four times as many SPEC CPU int rate instances). From these numbers it is clear that Avoton is a great step forward. The SPEC numbers tell us that Intel is able to fit four times as many cores in the same power envelope without tangibly lowering single-threaded performance: the lower clock speed is compensated for by the IPC improvements of Silvermont.
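A quick back-of-the-envelope check of that "up to 4x" claim, using the platform-level numbers above. This is only a sketch: both figures are whole-platform measurements (including disk and motherboard), and the memory configurations differ, so treat the result as a rough ratio rather than a precise CPU-level comparison.

```python
# Performance per watt from the quoted SPEC CPU int rate 2006 scores
# and whole-platform power draws.
s1260_score, s1260_watts = 18.7, 20.0   # Atom S1260 "Centerton" platform
avoton_score, avoton_watts = 69.0, 19.0  # Atom C2xxx "Avoton" alpha platform

s1260_ppw = s1260_score / s1260_watts    # ~0.94 points per watt
avoton_ppw = avoton_score / avoton_watts  # ~3.63 points per watt

improvement = avoton_ppw / s1260_ppw
print(f"S1260:  {s1260_ppw:.2f} points/W")
print(f"Avoton: {avoton_ppw:.2f} points/W")
print(f"Improvement: {improvement:.1f}x")  # ~3.9x, close to Intel's "up to 4x"
```

So the published numbers come out at roughly 3.9x, which lines up well with Intel's marketing claim.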

Intel does not stop at integrating more features into an SoC; it also wants to make the server and rack infrastructure more efficient. Today, several vendors already offer racks with shared cooling and power. Intel is currently working on servers connected by a rack fabric with optical interconnects. And in the future we might see processors with embedded RAM but without a memory controller, placed together inside a compute node with a very fast interconnect to a large memory node. The idea is to have very flexible, centralized pools of compute, memory and storage.

The Avoton server at the conference was showing some of these server and rack based innovations. Not only did it have 30 small compute nodes...

... it also did not have a PSU of its own, instead drawing power from a centralized, rack-level PSU.

In summary, it looks like the components in the rack will be very different in the near future: multi-node servers without individual PSUs, SANs replaced by centralized storage pools, and proprietary network gear replaced by x86 servers running specialized networking software.