Back in the old days, say, the 1990s, deploying applications on local infrastructure was a pretty straightforward, if not altogether easy, task.

Pretty much all resources were confined to the data center, save for the client device and the network. The chief decision was how to provision the appropriate server, storage, and network hardware to deliver adequate performance – which was usually a point of contention among users, IT, and your budget.

Finding the Right Infrastructure

These days, however, just finding the right infrastructure is a challenge. Should the app be hosted on bare metal or on a virtual layer? On-prem or in the cloud? Public, private or hybrid architecture? What about modular hardware vs. the standard server and storage farms? And given the increasingly dynamic nature of the data load, various combinations of all these solutions may be needed at a moment's notice, only to vanish just as quickly.

The good news is that with all of these options available, IT can tailor infrastructure to the needs of the app like never before. The bad news is that it can be difficult to determine the precise relationship between applications and either virtual or physical infrastructure for optimal outcomes. From a technological perspective, each configuration has pluses and minuses that can either enhance or diminish performance for a particular service. An ecommerce application, for example, may launch in an on-premises cluster, but if the site becomes hugely popular it will likely migrate to the cloud very quickly. A back-office business intelligence app, on the other hand, might remain in-house due to the sensitivity of its data, the sheer processing power it needs, and the relative predictability of its workload.

For many key applications, private infrastructure is the top solution, says Andrew Froehlich, president and lead network architect for Colorado consulting firm West Gate Networks. Highly regulated workloads, such as those related to healthcare and finance, are best kept at home, as are those that require high levels of security, availability, and visibility. Latency is another factor, but it can cut both ways in the cloud. For example, customer-facing applications will perform better if processing is hosted in a nearby cloud region, but office workers will notice a distinct lag if their local legacy app is suddenly migrated to a cloud that is miles away.
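
Latency, at least, can be measured before committing to a move. As a rough illustration, the short Python sketch below times TCP connections from wherever it runs to a set of candidate endpoints; the hostnames are placeholders, not real addresses, and this is a quick sanity check rather than a proper benchmark:

import socket
import time

# Hypothetical endpoints: swap in the real on-prem host and the
# nearest cloud region before drawing any conclusions.
CANDIDATES = {
    "on-prem": ("app.internal.example.com", 443),
    "cloud": ("app.cloud-region.example.com", 443),
}

def tcp_connect_ms(host, port, samples=5):
    """Median TCP connect time in milliseconds, ignoring failed attempts."""
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        try:
            with socket.create_connection((host, port), timeout=3):
                pass
        except OSError:
            continue  # unreachable samples are simply skipped
        times.append((time.perf_counter() - start) * 1000)
    times.sort()
    return times[len(times) // 2] if times else None

for name, (host, port) in CANDIDATES.items():
    ms = tcp_connect_ms(host, port)
    print(f"{name}: {ms:.1f} ms" if ms is not None else f"{name}: unreachable")

Run from the office and again from where customers sit, the same script quickly shows which placement actually keeps latency low for each audience.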

Choosing the Right Configuration for Applications

Still, picking and choosing the right configuration for an application has become a very fluid process in the age of virtualization. Many organizations are, in fact, finding it easier to implement deployment strategies in reverse. Rather than provisioning the latest and greatest technology first and then seeing what it can do, the process now begins by defining user requirements and then pulling the appropriate infrastructure from a variety of pooled resources. According to Cloudera Chief Architect Doug Cutting, the deployment process is greatly accelerated, and the results are far more accurate, when you know crucial factors like the location of data generation, workload types and characteristics, performance needs, and TCO requirements at the outset.
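
To make that requirements-first approach concrete, here is a deliberately simplified sketch of matching a workload profile against candidate infrastructure pools. The factor names, weights, and candidate profiles are invented for illustration; this is not Cloudera's method or any real framework:

# Illustrative requirements-first placement: score candidate pools
# against a workload profile. All names and weights are hypothetical.

WORKLOAD = {
    "data_locality": "on_prem",   # where the data is generated
    "workload_type": "steady",    # "steady" vs. "bursty"
    "max_latency_ms": 20,
    "tco_budget": "medium",
}

CANDIDATES = {
    "private_cluster": {"data_locality": "on_prem", "elasticity": "low",
                        "latency_ms": 5, "tco": "medium"},
    "public_cloud":    {"data_locality": "remote", "elasticity": "high",
                        "latency_ms": 40, "tco": "low"},
    "hybrid":          {"data_locality": "on_prem", "elasticity": "medium",
                        "latency_ms": 15, "tco": "medium"},
}

def score(workload, pool):
    s = 0
    if pool["data_locality"] == workload["data_locality"]:
        s += 3  # keep compute near where the data is generated
    if workload["workload_type"] == "bursty" and pool["elasticity"] == "high":
        s += 2  # bursty loads benefit most from elastic capacity
    if pool["latency_ms"] <= workload["max_latency_ms"]:
        s += 2
    if pool["tco"] == workload["tco_budget"]:
        s += 1
    return s

ranked = sorted(CANDIDATES,
                key=lambda name: score(WORKLOAD, CANDIDATES[name]),
                reverse=True)
print("placement preference:", ranked)

The point is the direction of the exercise: the workload's profile comes first, and the infrastructure is ranked against it rather than the other way around.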

And while many enterprises deploy on internal infrastructure largely due to security concerns, the reality is that this is more an issue of control than of security. In your own data center, you can verify – on a daily basis if you need to – that proper policies for security, resource allocation, availability, and a host of other factors are being maintained. A public cloud provider will usually report that its SLAs are being met, but it will rarely allow fine-grained visibility into its infrastructure or practices. And depending on whether you opt for dedicated or shared resources, your performance may also be affected by the provider's other client loads.
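
That control advantage is concrete: in-house, the daily verification described above can be scripted. The toy audit loop below is purely illustrative; the policy names and check functions are hypothetical stand-ins for whatever your environment actually enforces:

# Toy nightly policy audit. The policies and checks here are
# hypothetical placeholders for real verification logic.

def check_encryption_at_rest():
    # e.g. query the storage layer for unencrypted volumes
    return True

def check_backup_freshness():
    # e.g. confirm the last backup completed within 24 hours
    return True

def check_patch_level():
    # e.g. compare installed versions against an approved baseline
    return False

POLICIES = {
    "encryption-at-rest": check_encryption_at_rest,
    "backup-freshness": check_backup_freshness,
    "patch-level": check_patch_level,
}

def run_audit():
    failures = [name for name, check in POLICIES.items() if not check()]
    for name in POLICIES:
        status = "FAIL" if name in failures else "ok"
        print(f"{status:>4}  {name}")
    return failures

if __name__ == "__main__":
    failing = run_audit()
    # In a real deployment this would page an operator or open a ticket.
    if failing:
        raise SystemExit(f"{len(failing)} policy check(s) failed")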

Even with Big Data, the overwhelming impression is that only the public cloud can scale to the level of billions of data streams churning away 24/7. But with advances in modular technology, hyperconvergence, and edge computing, building a private Big Data infrastructure might not be the burden it appears to be. For starters, hyperconverged infrastructure packs enormous power into a very small footprint, so few organizations will require the warehouse-sized facilities of hyperscale providers. And with the proper intelligence and analytics capabilities spread across geo-distributed micro-data centers, only a small fraction of the Big Data load will ever reach centralized resources anyway.
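
The arithmetic behind that claim is simple pre-aggregation: if each micro-data center summarizes its own raw stream, only compact rollups and the occasional anomaly travel back to the core. A minimal sketch, with invented field names, data, and thresholds:

# Minimal edge pre-aggregation sketch: each micro-data center reduces
# its raw readings to a small rollup, and only the rollup (plus any
# anomalous raw points) travels to the core. All values are invented.

from statistics import mean

def edge_rollup(site, readings, alert_threshold=95.0):
    """Summarize a site's raw readings; keep anomalies individually."""
    anomalies = [r for r in readings if r > alert_threshold]
    return {
        "site": site,
        "count": len(readings),
        "mean": mean(readings),
        "max": max(readings),
        "anomalies": anomalies,  # only the interesting raw points survive
    }

# Simulated raw streams at two edge sites, 10,000 readings each.
raw = {
    "denver-edge": [72.0 + (i % 40) * 0.5 for i in range(10_000)],
    "austin-edge": [68.0 + (i % 50) * 0.6 for i in range(10_000)],
}

rollups = [edge_rollup(site, readings) for site, readings in raw.items()]
sent = sum(1 + len(r["anomalies"]) for r in rollups)
total = sum(len(v) for v in raw.values())
print(f"forwarded {sent} records to the core instead of {total}")

In this simulated run, a few hundred records reach the core in place of 20,000 raw readings – the ratio, not the exact numbers, is the point.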

The beauty of virtual infrastructure, of course, is that deployment decisions no longer have to be all or nothing. As long as the enterprise manages both its public and private infrastructure in a cohesive fashion, there is no reason why workloads cannot be divided among a wide range of bare-metal and hosted solutions, both at home and in the cloud.

And the best thing is, no matter what your needs, the resources to support them are available somewhere.