Definition, anyone?

User eXperience is a variable term. It is widely used, and your definition of user experience and somebody else's definition can mean something completely different. But hey, if no one asks and everybody nods, we are on the same page. Right? Nope.

EUC user experience from the user perspective is very simple: end users need their workspace and applications to look superduper awesome and respond at warp speed, while offering the highest hyperfull-very-high-K definition quality for graphics and audio… period.

From a technical perspective, there is no single user experience metric, simply because there is no magic number that works for every environment. There are several around, and the key metrics change with the specific requirements of your employees, your applications and the business itself.

The other way around, we do know what will not work. For example, the old routine of going for a coffee after switching on the PC in the morning, only to find it still booting when you come back several minutes later, looking at a business application interface that reminds you of an 8-bit NES game: that isn't cutting it. Spiffy-looking application response within the agreed 2 ms: all green. Higher: boooh, red, and the end user revolution starts! Goodbye productivity.

And to IT… end users don't care about the CPU or networking blah-blah that influences this experience from a technical perspective. "I'll just try and add some virtual CPUs for you…." "Errmm, I want my application to stop being slow and lagging, fix that now."

The way to come up with the metrics that make up the IT view of user experience is to figure out where possible bottlenecks can occur in the device, application and service chains, and to try to keep those under control. And with clients, desktops, applications and data in the private and/or public cloud, a lot more will influence this user experience. When we put unmanaged, managed, connected and non-connected devices in the mix, it gets even more complicated, with a lot more dependencies.

If you won't think about user experience, or do not think it is important in an EUC environment, stay at your current organisation if they are satisfied, or quit your job, because you will lose out fast.

So, where do we start?

Design and Assessment

The problem with, unfortunately, still a big chunk of organisations is that the IT department keeps its distance from the business (and the other way around). It therefore has no view of which technical bits and pieces make up business services, and which services are important for the business. In design and deployment, an important phase is to gather what those important thingies are for the business. Well, who you gonna call? Assessment time. Barbarian Assessment Berserk mode on. Gather a zillion tons of metric data and drown…… No no no no.

First get the objectives, the requirements and the outlines of the to-be architecture, application, user and data layers, and fill in what is known. And what is unknown. From the unknown, distil what is important to fill in during this phase, aligning people, process and technology. Do questionnaires, observations and interviews.

What happens when you ask for application designs, operational procedures or baselines and it remains quiet? Get some tools out to try and fill in the blanks from the infrastructure, and meet the business halfway.

For IT, try to get information out of your application landscape: who uses what, at what times, and is the workspace (a) graphics-centric, (b) more IO-oriented client-server applications, or (c) both? IO latency translates into degradation of graphics and audio, so both a and b are affected here.
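To make the a/b/c bucketing above concrete, here is a minimal sketch in Python. The metric names and thresholds are illustrative assumptions, not the output or API of any particular assessment tool; real cut-offs would come from your own baseline data.

```python
# Sketch: bucket applications into graphics-centric, IO-centric, or both,
# based on assessment metrics. Thresholds below are assumed examples.

def classify_workload(gpu_util_pct: float, iops: float) -> str:
    """Rough bucketing of a workload profile."""
    graphics_heavy = gpu_util_pct > 30.0   # assumed cut-off for "graphics-centric"
    io_heavy = iops > 50.0                 # assumed cut-off for "IO-centric"
    if graphics_heavy and io_heavy:
        return "both"
    if graphics_heavy:
        return "graphics-centric"
    if io_heavy:
        return "io-centric"
    return "light"

# Hypothetical assessment data: app -> (average GPU %, average IOPS)
apps = {
    "CAD viewer": (75.0, 20.0),
    "ERP client": (5.0, 120.0),
    "Video editor": (60.0, 200.0),
}
for name, (gpu, iops) in apps.items():
    print(f"{name}: {classify_workload(gpu, iops)}")
```

The point is not the exact numbers but that the classification is explicit and repeatable, so the business conversation can be about the cut-offs instead of gut feeling.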

Can we use tools here? Yes. But getting all available metrics out of your selected tool isn't going to help you, as you will drown in too much data. The why, what and where questions need to be answered first. You can also use your specific application management or assessment tools, as those will get you some of the important information. You probably have them in place for application monitoring and life cycle management, don't you?

Just know what you are required to add to the equation in the validation phase for the missing figures (for example, desktop logon times). And talk with the business about the information coming from the infrastructure.
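For a figure like desktop logon time, a median alone hides the slow outliers that generate the complaints. A small sketch, with made-up sample numbers standing in for whatever your monitoring or assessment tooling collects:

```python
# Sketch: summarise desktop logon times gathered during validation.
# The sample values are invented for illustration.
from statistics import median, quantiles

logon_seconds = [22.1, 25.4, 19.8, 31.0, 24.3, 58.7, 23.9, 27.2]

# quantiles(n=20) yields 19 cut points; index 18 is the 95th percentile
p95 = quantiles(logon_seconds, n=20)[18]
print(f"median: {median(logon_seconds):.1f}s, p95: {p95:.1f}s")
```

The p95 is the number to bring to the business conversation: it is the experience of your unluckiest users, not the average one.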

There are some specific tools for desktop assessments: SysTrack Desktop Assessment and Liquidware Labs Stratusphere™ FIT, for example. Or use, for example, AppDynamics, New Relic, ExtraHop or AppDNA; Infrastructure Navigator or Network Insights can also be used here.

What tool fits best depends a bit on where you are coming from and what kind of information is known and unknown.

And again, be careful not to drown in too much information.

Validation and synthetic load testing

The design is done. The calculations make sense, and everything is bought and delivered to the datacenters. The engineers did an excellent job implementing and configuring all those components, so they work together like a charm. Applications are layered, isolated and presented to the environment. Time to turn on the load testing and get some numbers in.

Can we use hundreds of real users, or do we use a tool to simulate what a user typically does throughout the day? And shall we go synthetic load testing all over the place?

No, use both.

The synthetic load tools out there, like VMware View Planner and Login VSI, have great standard workloads built in, simulating task workers, office workers, whoohooo. Do create workloads more suited to your organisation; custom workloads can be added to these tools.

Want benchmarking? Don't go chasing a VDImark score; there is no single company in real life using the desktop template used in there. But even without the VDImark score, you can use these tools to gather valuable data.

Another thing: these tools often do not take breaks. Just take a moment and look around the office. How often do you see people chatting on the phone, going for coffee or smoke breaks, or sitting in meetings?

When testing 300 virtual desktops with a think time of 2 seconds and running 5 iterations of the workload, those desktops will be hammered for the entire testing time. All your users working simultaneously for hours and hours will…. never…….. happen.
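If your tool supports custom workloads, you can make think times look more like real life. A minimal sketch of the idea: randomised think times plus an occasional longer break, instead of a flat 2 seconds. The action names, timings and break probability are illustrative assumptions, and the sketch only accumulates think time rather than actually sleeping.

```python
# Sketch: a workload loop with randomised think times and occasional
# longer breaks. All timings are assumed example values.
import random

ACTIONS = ["open_document", "type_paragraph", "save", "browse_intranet"]

def simulate_user(iterations: int, seed: int = 42) -> float:
    """Return the total think time (seconds) a simulated user would accumulate."""
    rng = random.Random(seed)  # seeded so a test run is reproducible
    total_think = 0.0
    for _ in range(iterations):
        for action in ACTIONS:
            # ...the action itself would be performed here...
            think = rng.uniform(5, 30)       # 5-30 s between actions
            if rng.random() < 0.05:          # ~5% chance of a coffee break
                think += rng.uniform(120, 600)
            total_think += think
    return total_think
```

Even this crude model spreads the load out enough that "300 desktops hammering the backend in lockstep" stops being the scenario you size for.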

Get some users into a pilot and crank up the load, and see how your real pilot users react. Use questionnaires, observations and interviews to get their opinions in, and try to match those to the gathered data.

And next to that, use the load tests to look for possible bottlenecks in the environment. Find where and why it breaks. Find the threshold before it breaks, and before it impacts the users. Collect this information and the metrics generated per user and per desktop. Extrapolate the collected data to figure out how many users of that use case the environment can handle.
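Finding that threshold can be as simple as mapping load-test steps to a response-time ceiling. A sketch with invented numbers; the ceiling and the latency figures would come from your own SLA and test runs:

```python
# Sketch: from stepped load-test results, find the highest concurrent
# user count whose p95 latency still stays under the agreed ceiling.
# All data points below are invented for illustration.

results = {      # users -> measured p95 application response time (ms)
    100: 120,
    200: 150,
    300: 210,
    400: 480,
    500: 1900,
}
CEILING_MS = 250  # assumed acceptable response-time ceiling

def max_users_within_ceiling(results: dict, ceiling: float) -> int:
    """Largest tested user count that stayed within the ceiling, or 0."""
    ok = [users for users, latency in results.items() if latency <= ceiling]
    return max(ok) if ok else 0

print(max_users_within_ceiling(results, CEILING_MS))  # -> 300
```

With that number in hand you also know your safe operating margin: here the environment degrades sharply somewhere between 300 and 400 users, which is exactly the region worth investigating before go-live.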

Just don’t do it unplanned in a production environment or using components already in production. The load tests will be disruptive.

UX Monitoring Toolbox

At IT, user experience is mostly measured as the cumulative factors of how fast resources can be executed and presented to the user, hopefully with a fast response. This is often measured as execution or latency time of some kind. Resources are monitored in the traditional food groups: CPU, memory, storage and networking. Gaining insight into the user environment, and seeing which applications and processes are running and what users might be experiencing, is important. Monitor graphics (video included), audio quality, and display and session performance. Have an issue there, and goodbye user.

It is very important to first think about and define which metrics matter for your organisation's workforce, and which metrics can influence the user experience at which layer. Which kinds matter, and which do you want to see?

Again, getting all available metrics into your selected tool isn't going to help you; you will drown in too much data. Dashboards with all kinds of metrics on how the VDI is operating are good to have for drilling down when there is an issue. But without issues, it is mostly overhead information, as there is probably no issue there. Show in a simple view or dashboard whether everything you defined is okay or not. The operators can drill down when there is a problem.
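That "simple view" is essentially a rollup: a handful of defined key metrics condensed into one green/amber/red status. A minimal sketch; the metric names and thresholds are assumptions for illustration, not taken from any particular product.

```python
# Sketch: roll a few defined key metrics up into a single
# green/amber/red status for the front dashboard.
# Metric names and thresholds are assumed example values.

THRESHOLDS = {                      # metric -> (amber_at, red_at)
    "logon_seconds": (30, 60),
    "app_response_ms": (200, 500),
    "session_latency_ms": (50, 150),
}

def metric_status(name: str, value: float) -> str:
    amber, red = THRESHOLDS[name]
    if value >= red:
        return "red"
    if value >= amber:
        return "amber"
    return "green"

def overall(samples: dict) -> str:
    """Worst status wins: one red metric makes the whole board red."""
    statuses = [metric_status(name, value) for name, value in samples.items()]
    for level in ("red", "amber"):
        if level in statuses:
            return level
    return "green"

print(overall({"logon_seconds": 25,
               "app_response_ms": 220,
               "session_latency_ms": 40}))  # -> amber
```

Operators see one colour; the per-metric statuses are still there for the drill-down when the colour is not green.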

What tools do IT professionals have to get more metrics from the desktops and the supporting environment?

vROps for Horizon (and XenApp/XenDesktop if you would like). It should be no surprise for Horizon environments. vROps for Horizon is getting better and better. It is not quite there yet on whole-user-experience monitoring, but it is getting closer. The current 6.4 version adds insights beyond standard Horizon, covering App Volumes, Access Point sessions, the Blast Extreme display protocol and Cloud Pod Architecture. From the marketing slides: find and troubleshoot problems across the larger user environment with in-guest metrics that analyse user- and session-centric metrics, including CPU/RAM/disk utilization, logon times, PCoIP and ICA protocol performance, and application experience.

But we know you are kinda VMware-centric. Something else out there? Glad you asked, as this whole subject also applies to Citrix or whatever solution. I agree there is more out there, and possibly something that better fits your specific needs. I see monitoring and working with insights as something that comes along with a professional's toolbox. And like your toolshed, there is not just a single tool in there. Some additional monitoring tools I have stumbled upon in EUC environments:

uberAgent, if you happen to have Splunk lying around.

Login PI, workload testing in production, anyone?

ControlUp, for some fast real-time insights.

Liquidware Labs Stratusphere™ UX, to quickly troubleshoot virtual infrastructures and validate infrastructure changes.

GPUSizer, for getting some most welcome GPU insights in.

Process Explorer (well, a lot from Sysinternals).

SpyStudio, for application packaging troubleshooting.

And the list probably goes on and on forever………

Have a good suggestion for your fellow EUC professionals to look at? I invite you to leave a comment and put in your suggestion…. Share knowledge.

And now: continual improvement or status quo?

And now we have some idea of how this thingie works. Sit back and never look beyond that green-yellow-red screen?

It is a continual process of analyzing insights on changing workforces, applications, data, workflows, processes and so on. And using the insights. Do some design changes, re-validate, change management, tweaking, updates, testing, rave and repeat.

Continual service improvement in a constantly changing environment. Makes it fun and interesting!

But are those phases necessary? They take up a lot of project time and budget. Can't we just skip them? Sure: and then you buy a mixed oversized-and-undersized environment, have disgruntled, non-productive users, waste a lot of time looking into unclear issues, get consultants in because it is also unclear where the issues are, and buy expensive troubleshooting tools at the last minute. Yeah, that does not cost even more time and budget at all.

Insights are important in an EUC environment. Without them you cannot design, operate and keep a happy user environment.

– Do enjoy your EUC environment!