Back in July, much was made of the finding that retail sites had actually become slower over the past year. That's no minor finding, considering the numerous studies that have consistently shown that the slower the page or site, the lower the sales, conversions, and revenue. While the news may have raised a few eyebrows at first, it's really not all that surprising: today's Web pages and sites are considerably heavier than in years past. Websites have matured into vast online ecosystems in which numerous internal and third-party components work in unison to deliver a cohesive user experience; there's simply more content to process and load on today's sites.

The hoopla, however, was short lived, mainly because it was balanced out by a shift in focus: more websites are paying less attention to total page load time and more to a new performance metric making the rounds, dubbed "Time to Interact" (TTI) and defined as the point at which a page displays its primary interactive content. This metric is less about the time it takes for an entire page to load and more about how long it takes the page to deliver the experience the visitor is seeking. While speed plays a major role in visitors' overall site experience, the majority of visitors don't need to wait for the entire page to find and engage with the content they came for; this is why the most important elements of a page are kept above the fold.

TTI, however, doesn't take all above-the-fold content into account. Rather, by definition, it measures the time it takes for the page to display its "primary interactive content." The problem is that as websites have become increasingly complex, so too have the ways to engage with them, meaning that the primary interactive content on a site can vary significantly from one user to the next.

The study that originally pointed out this trend focused on retail sites, and rightfully so, considering the impact a slow response time can have on e-commerce. However, I have to ask: what exactly is the primary interactive content on a retail site, or any other site for that matter? Can you really pin down any one part of a website that will continually remain the primary interactive content? The primary interactive content would also seem to depend greatly on why a user is visiting the site in the first place. Are you there to return an item, update your account, or track an order? Maybe you're just there to see what's new, or maybe you've finally decided to pull the trigger on that perfect pair of pumps you've had your eye on.

The point is that tagging any one section of a website, e-commerce or not, as its primary interactive content is risky when you're trying to meaningfully measure the customer experience. The experience a user has with a website depends greatly on the reason for the visit, and those reasons can vary drastically, not only from one user to the next but also over the span of multiple visits by the same user.

You could argue that tagging multiple sections as the "primary interactive content" addresses these concerns, but at what point can you say you've met the criteria for what should constitute this definition? If you're part of the operations, marketing, or senior management team tasked with monitoring website performance, how do you determine what content should be included and what warrants exclusion? Do the super-box CTA, the log-in link, and the navigation menu all need to be included in the TTI measurement, or just one or two of those elements? How do you justify what does and doesn't make the cut, and who makes that call?

OK, I'll admit to playing devil's advocate a bit, but my point is that TTI can be somewhat subjective. And at a time when websites update their content as frequently as Taylor Swift updates significant others, how can you establish a meaningful baseline against which to measure future performance using TTI? This isn't to say that TTI can't be a useful measurement. However, if I'm going to bother measuring it, I should take the time to gain a deeper understanding of exactly what it is I'm measuring.

AlertSite offers a different take on measuring the user experience, one that is much more consistent with regard to setting and managing to a performance baseline. AlertSite's Visual User Experience (VUX) metric lets you empirically measure how your page performs in the eyes of your visitors by video-processing the page as it renders. This makes it possible to measure the time it takes for the first pixels of a page to become visible to the human eye, providing consistent visibility into how long it takes before your users actually perceive the page starting to render. What's more, no subjective determination is required to designate the page's primary interactive content. VUX also provides the total load time for above-the-fold content, so you can measure how long it takes before your page is fully rendered from your visitor's perspective.
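To illustrate the general idea behind this kind of filmstrip analysis (this is a hypothetical sketch, not AlertSite's actual algorithm): a recording of the rendering page can be reduced to two moments, the first frame with any visible pixels (render start) and the first frame after which the above-the-fold view stops changing (visually complete). The frame format and the `visibleFraction` field below are illustrative assumptions.

```javascript
// Sketch of a filmstrip-style analysis. Each frame is a timestamp (ms)
// plus the fraction of above-the-fold pixels that already match the
// final rendered page.
function analyzeFilmstrip(frames) {
  // Render start: first frame where anything is visible.
  const renderStart = frames.find(f => f.visibleFraction > 0);

  // Visually complete: earliest frame from which every later frame
  // fully matches the final page.
  let completeIdx = frames.length - 1;
  for (let i = frames.length - 1; i >= 0 && frames[i].visibleFraction === 1; i--) {
    completeIdx = i;
  }

  return {
    renderStartMs: renderStart ? renderStart.t : null,
    visuallyCompleteMs:
      frames.length && frames[completeIdx].visibleFraction === 1
        ? frames[completeIdx].t
        : null,
  };
}

// Example: blank until 800 ms, partial render, stable from 2400 ms on.
const result = analyzeFilmstrip([
  { t: 0, visibleFraction: 0 },
  { t: 800, visibleFraction: 0.3 },
  { t: 1600, visibleFraction: 0.9 },
  { t: 2400, visibleFraction: 1 },
  { t: 3200, visibleFraction: 1 },
]);
console.log(result); // { renderStartMs: 800, visuallyCompleteMs: 2400 }
```

The appeal of this approach is that both moments are derived from what the pixels actually did, so the same page produces the same numbers regardless of who decides what counts as "primary."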

The reality is that website performance, and the user experience derived from it, has less to do with how long your page actually takes to load and more to do with how long your users perceive it to take. Gaining greater visibility into when that process starts and stops in the eyes of your users enables you to optimize your site to directly influence their perceptions of speed and performance. Unlike TTI, AlertSite's VUX provides an accurate, consistent method of indexing the key moments in the page rendering process that shape how your users experience the performance of your website.

While these two metrics differ, both are clear attempts to measure what many organizations have already recognized as the last real online differentiator: customer experience. Which of the two is the more accurate measurement is admittedly difficult to settle, but ignoring the impact a poor user experience will have on your online audience is a mistake, regardless of how you decide to gauge it.
