One of the funniest tweets I have ever seen was brought to my attention by Vala Afshar…

How it feels to watch a user test your product for the first time. #UX pic.twitter.com/IwcqECciFN — Jonathan Shariat (@DesignUXUI) March 13, 2015

Seeing this little animation was one of those serendipitous moments, as I had that very day experienced something eerily similar.

I’ve written previously about how I’ve been toying around with the augmented reality app Aurasma. In A way with the fairies I described how I used this app to replicate Disney’s fairy trail in my local botanic garden.

Impressed with what the app can do, I turned my attention to using it in the workplace. I decided to start small by using it to promote a new online course that my team was launching. I took a screenshot of the main characters in the course’s scenario and provided 3-step instructions for the target audience outlining how to: (1) Install the app onto their mobile device; (2) Visit the relevant URL in their browser; and (3) Point their device at the picture. When they did so, the names of the characters would magically appear above their heads.

This wasn’t just a gimmick; it was a proof of concept. By starting small, I wanted to test it cheaply and fail quickly. And fail I did.

When I asked several of my tech-savvy colleagues to test it, every one of them reported back saying it didn’t work. Huh? It worked for me! So what could be the problem?

After much tinkering and re-testing in vain, I decided to ask a friend of mine to test it. Bang: it worked on her first go. As it turned out, my colleagues simply weren’t following the second instruction to visit the URL. In their excitement to scan the image, they did so immediately after installing the app – but of course, without following the link, the app had nothing to connect the image to my augmentation. When I pointed out that they had skipped Step 2 and they re-tried it, voilà, it worked.

Despite this rough start, another colleague of mine cottoned on to my trial and was keen to use the idea to jazz up a desk-drop he was creating. Upon scanning the trigger image, he wanted a video to play. Aurasma can indeed do this, but I was trepidatious: if my experiment had failed with tech-savvy colleagues, what chance did regular folks have? Still, I decided to look on the bright side and treat this as an opportunity to expand my sample size.

Learning from my mistakes, I re-worded the 3-step instructions to make them clearer, and this time I asked a colleague to test it in front of me. But again we ran into trouble. This fellow did follow Step 2, but when the URL opened the app, it immediately required him to scroll through a tutorial. Then it asked him to sign up. Argh… these steps were confusing… and I was oblivious to it because I had installed Aurasma ages ago and had long since done the tutorial and signed up.

But that wasn’t all. After I shepherded my colleague through to Step 3, he held out his smartphone and pointed it at the image like a lightsaber. WTF? He had read the instruction to “point” his device literally.

Another lesson learned.

Steve Jobs famously obsessed over making his products insanely simple. Apple goodies don’t come with user manuals because they don’t need them.

My experience is certainly a testament to that philosophy.

Three steps were evidently too many for my target audience to handle. The first step appeared simple enough: millions of people go to the App Store or Google Play to install millions of apps. And indeed, no one in my test balked at that. (Although convincing IT to tolerate a 3rd-party app would have been my next challenge.)

Similarly, the third step was easy enough once re-worded to “point your device’s camera at the image”.

The second step was the logjam. Not only is it unintuitive to open your browser right after you have just installed a new app, but dutifully following this instruction mires you in yet more complexity. Sure, there is an alternative – search for the specific channel within the Aurasma app and then follow it – but that too is problematic, as the user has to click a tab to filter the channel-specific results; and it’s academic anyway if you don’t want the channel to be public.

I understand why Aurasma links images to augmentations via specific channels. Imagine how the public would augment certain corporate logos, for example; those corporations wouldn’t want anything derogatory propagated across the general Aurasmasphere. Yet they hold the rights over their IP, so I would’ve thought that cutting off Joe Public’s inappropriate augmentation would be a matter of sending a simple email request to the Aurasma folks. Not to mention it would be in the corporation’s best interest to augment its own logo.

Anyway, that’s all a bit over my head. All I know is that requiring the user to follow a particular channel complicates the UX.

So that has caused me to wind down my plans for augmented domination. I am still thinking of using Aurasma: we might use it in our corporate museum to bring our old photos and artefacts to life. But if we go down this road, I’ll recommend that we provide a loan device with everything already set up and ready to go – like MONA does.

In the meantime, I’ll investigate other AR apps.


This entry was posted on 8 September 2015 at 20:24 and is filed under usability. You can subscribe via RSS 2.0 feed to this post's comments.

