Today, CloudPhysics is entering your storage environment to give you the same level of awesome insight it has already given you into your compute cluster! The storage analytics aim to shed light on two struggles that every data center faces: storage capacity and storage performance.

Storage Capacity

Isn’t it sickening how quickly you can chew through terabytes of storage by not keeping a close eye on things like VM sprawl? Here’s an excerpt from the official blog post:

There are many fast-and-easy paths to storage waste in a virtualized datacenter. But the path to storage reclamation is typically slow and complicated. Take VM sprawl for example: it takes just seconds to spin up a new VM, but figuring out if and when it needs to be deleted takes hours, if not weeks. You can more easily reclaim CPU and memory resources by powering off the VMs, but powered off VMs still take up disk space. Over time, you may forget about them. That’s just one example of space waste – there are many more. CloudPhysics is specifically addressing storage-induced capacity problems by providing unique, powerful insights into where and how your storage space is being consumed along with specific recommendations on how to reclaim the space. We’ve been working hard to develop the algorithms that solve this problem, while making all that complexity transparent to users. The screenshot below, which shows how we address the problem of unused VMs, is just a sample of what you’ll find in our storage analytics.

With the new storage analytics, it’s now stupid simple to find wasted space and take it back. Here’s a screen capture of one of the space reclamation dashboards:
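To make the powered-off-VM example concrete, here’s a minimal Python sketch of the idea behind that dashboard. The inventory records and field names below are made-up stand-ins for the data such a tool would collect from vCenter, not CloudPhysics’ actual data model:

```python
# Hypothetical inventory snapshot; field names are assumptions for illustration.
vms = [
    {"name": "web-01",    "power_state": "poweredOn",  "committed_gb": 120},
    {"name": "test-old",  "power_state": "poweredOff", "committed_gb": 80},
    {"name": "demo-2013", "power_state": "poweredOff", "committed_gb": 200},
]

def reclaim_candidates(vms):
    """List powered-off VMs, biggest disk footprint first.

    Powered-off VMs free up CPU and memory, but their disks still
    consume datastore space until the VM is actually deleted.
    """
    idle = [vm for vm in vms if vm["power_state"] == "poweredOff"]
    return sorted(idle, key=lambda vm: vm["committed_gb"], reverse=True)

candidates = reclaim_candidates(vms)
total_gb = sum(vm["committed_gb"] for vm in candidates)
print(f"{len(candidates)} powered-off VMs holding {total_gb} GB of disk")
```

The hard part in a real environment isn’t this filter, of course; it’s deciding which of those VMs are genuinely abandoned, which is where the analytics earn their keep.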

Storage Performance

Another frustration: how much money do you waste buying spindles or upgrading flash cache to support your performance needs when what you should be doing is tuning your workload to perform better? You could potentially save boatloads of money if you could fix the problem instead of just throwing spindles at it. The new Storage Analytics can help with that too!

There’s a strong relationship between storage waste and storage performance. Why? Many virtualization users simply overprovision the number of spindles, read/write cache, flash storage, and so on to avoid the pain of troubleshooting storage performance issues. After all, who wants to spend hours and hours combing through performance charts to understand correlations and do root cause analysis? Nobody does – but we have figured out how to leverage big data analytics to do it for you. For example, our fantastic new datastore contention analytics (see above) tells you when and where you are experiencing contention, and automatically identifies which VMs in your datastore are performance culprits (and which are victims). You can now solve performance issues in literally a few clicks – a lot quicker and more efficient than overprovisioning your storage.
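The culprit-versus-victim idea can be illustrated with a toy heuristic in Python. This is my own simplification, not CloudPhysics’ actual algorithm: during a contention window, VMs that are driving heavy I/O while latency is high are likely culprits, while VMs suffering high latency on light I/O are likely victims. The VM names, sample values, and thresholds are all invented for the example:

```python
# Hypothetical per-VM stats captured during one datastore contention window.
samples = {
    "db-01":    {"iops": 4200, "latency_ms": 35},
    "web-02":   {"iops": 150,  "latency_ms": 40},
    "batch-07": {"iops": 3800, "latency_ms": 30},
}

def classify(samples, iops_threshold=1000, latency_threshold=20):
    """Split VMs into likely culprits and likely victims.

    Toy rule: a VM seeing contention-level latency is a culprit if it
    is also generating heavy I/O, otherwise a victim. Thresholds are
    arbitrary for illustration.
    """
    culprits, victims = [], []
    for name, s in samples.items():
        if s["latency_ms"] < latency_threshold:
            continue  # not affected during this window
        (culprits if s["iops"] >= iops_threshold else victims).append(name)
    return culprits, victims

culprits, victims = classify(samples)
print("culprits:", sorted(culprits), "victims:", sorted(victims))
```

A real analysis correlates many windows across many datastores and weighs far more signals than two numbers, which is exactly the grunt work the product is automating.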

You can see from the screenshot below how quickly you can identify a problem and remediate it, sparing yourself a lot of pain. It won’t hurt your bottom line either!

I know I’d love to see this technology used in my data center to make my life easier. If you’d like to see it in yours too, you’re in luck! All it takes to give it a shot is to sign up for a free 30-day trial! Here’s the link to the sign-up page. If you give it a shot, let me know how it goes in the comments! I’d love to hear what you do with it, and let CloudPhysics know what else you’d like to see.