Have you ever tried a site's search and been underwhelmed by the accuracy of the results? Do you find yourself feeling frustrated and leaving when the search doesn't return what you're looking for? Even worse – do you find yourself assuming what you're looking for must not exist on that site, only to find the item on that exact same site through other channels?

If so, you've just experienced bad search relevancy. It's something we all experience daily – a frustration for users and a lost opportunity for the sites attempting to serve us.

“One of those old-timey-push-mower thingies!?!” he said to the confused sales associate

I remember a time I was searching for a manual push mower online. You know the kind – the one you'd probably associate with suburban America in the 1950s. Not gas powered, just the kind you push by hand while a cylindrical blade spins to cut the grass. I thought it'd be an environmentally friendly way to cut grass and get some exercise.

Searching several ecommerce sites with "manual push mower", "old time lawn mower", and lots of other similar queries did not surface one of these lawn mowers. I was very frustrated. Part of me began to assume they no longer exist – despite the fact that I could have sworn I had seen someone cutting their grass with one just the week before!

Suffice it to say, all those sites lost out on a sale as I eventually gave up and went to a home improvement store. I described what I was looking for to a sales associate: "you know, one of those old-timey push mowers from Leave It to Beaver". Sure enough, he pointed me to the right item. It turned out I didn't know the correct terminology. This lawn mower is known as a "reel mower". Thank goodness for that sales associate! He had enough smarts to take what I was searching for and figure out what I meant in a way none of the search engines could.

Search relevancy is the practice of turning a search engine into a helpful sales associate. In the same way the associate understood what I meant when looking for a lawn mower in the store, relevant search can do the same for an online store.

What this means is that bad search is bad service. Poor relevancy is the modern equivalent of a lazy sales associate who seems unwilling or unable to help. You're likely to be frustrated and never return to an online store if the site can't or doesn't appear to want to help you.

Good search relevancy, on the other hand, keeps users on the site. They're delighted by what comes up and they want to come back for more. If you care about retaining users and customers, little is more important than how your site interacts with them through site search. Don't disappoint with bad service. Delight with amazing, prompt, and relevant service.

The Art and Science of Relevancy

The trick to relevancy is that search engines, like Solr and Elasticsearch, are simply sophisticated text matching systems. They can tell you when a search word matches a word in a document, but they aren't nearly as smart or adaptable as a human sales associate. Once a match is determined, a search engine can use statistics about the relative frequency of that word to give a search result a relevancy score.
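To make that frequency-statistics idea concrete, here is a toy TF-IDF calculation in Python. This is an illustrative simplification with made-up documents, not the actual scoring code inside Solr or Elasticsearch (which use more refined formulas):

```python
import math

# Toy corpus of three made-up product descriptions.
docs = [
    "manual reel push mower",
    "gas mower riding mower",
    "electric trimmer",
]

def tf_idf(term, doc, docs):
    """Score one term against one document: term frequency (tf)
    weighted by how rare the term is across the corpus (idf)."""
    tf = doc.split().count(term)
    df = sum(1 for d in docs if term in d.split())
    idf = math.log(len(docs) / (1 + df)) + 1
    return tf * idf

# "reel" appears in only one document, so one occurrence of it
# scores higher than one occurrence of the common word "mower".
print(tf_idf("reel", docs[0], docs))
print(tf_idf("mower", docs[0], docs))
```

The key intuition: matching a rare word says more about a document than matching a common one, so rare matches earn a higher score.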

Outside of this core "engine", a lot of search relevancy is about the development required to either jury-rig text to allow fuzzy matching or to correctly boost/weight the right factors. A developer working on search relevancy focuses on the following areas as the "first line of defense":

Text Analysis: the act of "normalizing" text from both the search query and the search results to allow fuzzy matching. For example, one step known as stemming can reduce many forms of the same word – "shopped", "shopping", and "shopper" – to a normalized form, "shop", allowing all the forms to match.
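Real analyzers in Solr and Elasticsearch use full stemming algorithms such as Porter/Snowball with many rules; as a toy sketch of the idea only:

```python
def toy_stem(word):
    """Illustrative suffix-stripping stemmer. A real analyzer
    would apply many more rules (Porter/Snowball stemming)."""
    for suffix in ("ing", "ed", "er", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            word = word[: -len(suffix)]
            break
    # Collapse a doubled final consonant: "shopp" -> "shop"
    if len(word) > 2 and word[-1] == word[-2] and word[-1] not in "aeiou":
        word = word[:-1]
    return word

# All three forms normalize to the same token, so a query for
# "shopping" can match a document that says "shopped".
print({toy_stem(w) for w in ["shopped", "shopping", "shopper"]})
```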

Query Time Weights and Boosts: reweighting the importance of various fields based on search requirements. For example, deciding that a title field is more important than other fields.

Phrase/Position Matching: requiring or boosting on the appearance of the entire query, or parts of the query, as a phrase, or based on the position of the words.
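In Elasticsearch's query DSL, for instance, field weighting and phrase boosting might look like the following (the field names and boost values here are hypothetical, chosen just for illustration):

```python
import json

# Weight matches in the title field three times as heavily as the
# description, and additionally boost documents where the whole
# query appears as a phrase in the title.
query = {
    "query": {
        "bool": {
            "must": [
                {
                    "multi_match": {
                        "query": "reel mower",
                        "fields": ["title^3", "description"],
                    }
                }
            ],
            "should": [
                {
                    "match_phrase": {
                        "title": {"query": "reel mower", "boost": 2}
                    }
                }
            ],
        }
    }
}
print(json.dumps(query, indent=2))
```

Documents matching only the `must` clause are still returned; the `should` phrase clause simply lifts exact-phrase matches toward the top.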

Outside of this initial “first line of defense” that satisfies a lot of use cases, you can quickly get into more advanced areas to get more out of your search. These include:

Tags and ontologies – understanding the query and the document text in terms of specific concepts instead of simply matching terms. Often considered a “concept” search.

Natural Language Processing – understanding the grammatical structure of text in the query and the search results to allow deeper understanding and matching

Statistical Processes – understanding statistically the relationship between different words. For example, creating code that can detect that "spatula" and "frying eggs" have some level of association.

Click Tracking – given enough logs of user behavior with search, post-process that behavior to determine which result is statistically most likely to be the best result for a query.

Search Engine Plugins – Plugins that modify the built-in scoring or text analysis behavior to create a custom relevancy algorithm

Genetic Algorithms – Given enough data about good search quality, determine the correct values for various weights and boosts that produce the optimal results using a genetic/evolutionary process
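As one concrete illustration of the "statistical processes" item above, pointwise mutual information (PMI) over co-occurrence counts is a common way to detect associated terms. A minimal sketch over made-up data (the sessions below are invented for the example):

```python
import math

# Hypothetical co-occurrence data, e.g. terms seen together in
# product descriptions or user sessions.
sessions = [
    {"spatula", "frying", "eggs"},
    {"spatula", "frying", "pan"},
    {"mower", "grass"},
    {"mower", "grass", "blade"},
]

def pmi(a, b, sessions):
    """log P(a,b) / (P(a) * P(b)): positive when two words
    co-occur more often than chance would predict."""
    n = len(sessions)
    p_a = sum(a in s for s in sessions) / n
    p_b = sum(b in s for s in sessions) / n
    p_ab = sum(a in s and b in s for s in sessions) / n
    if p_ab == 0:
        return float("-inf")
    return math.log(p_ab / (p_a * p_b))

# "spatula" and "frying" co-occur; "spatula" and "grass" never do.
print(pmi("spatula", "frying", sessions))
print(pmi("spatula", "grass", sessions))
```

At scale, associations like these can be mined from logs or content and used to expand queries with related terms.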

That's a lot of approaches! And it's an intimidating landscape if you don't know where or how to start. The good news is that you can get a decent solution for most use cases with the basic features your search engine provides. But search relevancy is a constant effort. Google is still working on its search engine and doesn't show signs of stopping anytime soon.

Creating a Search Relevancy Practice

It looks intimidating, but don't fret. Step one in working on your relevancy is to figure out a good process for sandboxing these ideas. There's a vast menu of approaches to improving search, but how will you know whether one has made an improvement or is making things worse? How do you test search relevancy to make sure you're not going backwards in quality?

We advocate an approach known as Test-Driven Search Relevancy. Using a tool like our product, Quepid, which can evaluate important search queries against a list of known good/bad documents, we can make statements about the progress of our search.

In fact, this kind of practice is more important than normal automated software testing. Why? Your search developers often don't know what good search looks like. They need business stakeholders to help craft search goals and use cases. Correctness can't easily be defined by the developers; it takes collaboration. It takes putting the search developer in the same room as your equivalent of the "sales associate" to create use cases and tests – to help figure out what a customer really means when they put "manual push mower thingy" in the search box.

How will you capture this information from stakeholders? Can you put a dozen sales associates in a room to keep trying search and giving you feedback?

No, you'll need to centrally store that feedback in one place – in a way that, whatever mundane or crazy idea you want to try to improve relevancy, you can instantly get the feedback of dozens of skilled sales associates without them being in the room.

What form should this feedback take? Preferably, it would identify which results should come back for which search queries, based on the expert judgement of your equivalent of a sales associate – some kind of expert in the content you serve. In the industry these curated testing lists are known as "judgement lists". They can help guide your search efforts by keeping the knowledge of your best associates at the fingertips of search developers.

Once this feedback is captured in a testing tool, you can apply the skill of your best sales associates to all your users' important queries a hundred times a day and get a reliable score telling you whether search results meet the expectations of all stakeholders.
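Mechanically, the scoring loop in such a tool can be as simple as precision-at-k against the judgement list. The query names, document IDs, and results below are made up for illustration:

```python
# Hypothetical judgement list: for each query, the documents
# experts rated as good results.
judgements = {
    "manual push mower": {"reel-mower-18in", "reel-mower-20in"},
    "old time lawn mower": {"reel-mower-18in"},
}

def precision_at_k(results, relevant, k=10):
    """Fraction of the top-k search results that experts rated relevant."""
    top_k = results[:k]
    if not top_k:
        return 0.0
    return sum(doc in relevant for doc in top_k) / len(top_k)

# Pretend search results for one query before a relevancy tweak:
results = ["gas-mower-x", "reel-mower-18in", "hedge-trimmer"]
score = precision_at_k(results, judgements["manual push mower"], k=3)
print(score)
```

Run that loop over every important query after each change, and a drop in the aggregate score flags a regression before it ever reaches users.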

Whether you build a tool for your own use or use our tool Quepid, this practice should be the central cycle of your search relevancy work. As new search use cases arise and new problems are solved, armed with this evolving feedback, developers can keep answering the question of whether new solutions are causing old problems to crop up again. Many ideas sound terrific until they're met with the brutal reality of testing. Perhaps the new gizmo actually makes search worse. You want to know that immediately, not when the new relevancy algorithm is pushed to production and users leave in droves!

So have fun with your search – try all the interesting approaches. But test early and often. Don't get stuck not understanding how or why things work; instead, focus on improving quality step by step, armed with solid tests that guide and reinforce vital use cases.

Stop Ignoring Relevancy

It's become more and more critical that search relevancy be a front-and-center concern. Users require good service. In the absence of a sales associate or librarian, all you've got to build a relationship with a client is search. Do you want to leave them in the lurch? Or turn their engagement into sales? In the age of Google, smart search is the norm. Don't be the exception.

Of course, if you need help with your search, get in touch and we'll be glad to discuss. And check out Quepid as a test-driven relevancy sandbox!